
Visual Language Programming (May-2017) Unit-IV

Question: – 8(a) What are the steps of saving and opening files in a RichTextBox? Explain various file operations supported by VB.

Answer: – Previously I looked at how to load and save plain text between a RichTextBox and a .txt file. This time I want to look at a different case, where text formatting has been applied to the content. For the purposes of this sample I’m going to take a document that has been created in MS Word. One of the many formats that both MS Word and WordPad support is RTF. You can view and edit the RTF document in the RichTextBox.

The WPF Window

Here’s the markup to create the Window I’m using:

<Window x:Class="RTFContent"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="RTF Rich Content" Height="450" Width="378">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="55" />
            <RowDefinition Height="224*" />
        </Grid.RowDefinitions>
        <StackPanel Margin="12" Name="StackPanel1" VerticalAlignment="Top" Orientation="Horizontal">
            <Button Name="btnLoad" Margin="4" Padding="2"
                    Content="Load RTF" Click="btnLoad_Click" />
            <Button Name="btnSave" Margin="24,4" Padding="2"
                    Content="Save RTF" Click="btnSave_Click" />
            <Button Height="23" Name="btnClear" Click="btnClear_Click"
                    Content="Clear" Width="75" />
        </StackPanel>
        <RichTextBox Grid.Row="1" Margin="7" Name="RichTextBox1" Background="#FFEDEAEA" />
    </Grid>
</Window>


Loading RTF File Content into the RichTextBox

Here is the code needed for loading the content:

1  Imports System.IO

3  Partial Public Class RTFContent

5      Dim DemoRTFFile As String = "C:\Temp\RTFDocument.rtf"
6      Dim fs As FileStream

8      Private Sub btnLoad_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
9          ' Load RTB with text from RTF file
10         If File.Exists(DemoRTFFile) Then
11             fs = New FileStream(DemoRTFFile, FileMode.Open, FileAccess.Read)
12             Using fs
13                 ' Create a TextRange that comprises the start and end points of the RichTextBox text
14                 Dim RTBText As New TextRange(RichTextBox1.Document.ContentStart, RichTextBox1.Document.ContentEnd)
15                 RTBText.Load(fs, DataFormats.Rtf)
16             End Using
17         End If
18     End Sub


You’ll see that I’ve used an Imports statement for the System.IO namespace and that I’ve set up a couple of variables for the file I want to load/edit/save and for a FileStream.  The rest of the code works as follows:

  • In the click event of btnLoad, I first check that the file actually exists, to avoid an embarrassing runtime crash.
  • A FileStream is then used to access the file in preparation for reading it.
  • I’ve used the ‘Using’ statement on the next line, as this is a useful feature that saves me having to remember to close and dispose of resources each time.
  • Line 14 might look a bit strange.  Bearing in mind that there is no content yet, it might seem weird to be creating a range that spans the content start to the content end.  However this is just a way of identifying the boundaries of the object that is going to hold the content.
  • Line 15 uses the Load method of the TextRange class.   It takes a FileStream and a Data Format as its parameters.
  • Once this code has completed, all the rich content of the file will be contained in the RichTextBox.

The result, with the particular file I used, is the document displayed in the RichTextBox with all of its original Word formatting intact.


Editing the Content

I can make changes to this document.  If you have used the Windows Forms RichTextBox much in the past, you’ll be impressed by the additional features available by default in the WPF version.

WPF includes a feature known as Commands that enable particular tasks to be carried out as a result of various user actions.    In the case of the RichTextBox, several commands are built in to its default feature set.  These include the following common text editing tasks:

  • Copy
  • Cut
  • Paste
  • Bold
  • Italic
  • Underline
  • Undo

There are several more, not listed above.  I will be writing an item that takes this feature further, but for now you know you can use the familiar shortcut key combinations to implement the tasks – Ctrl & C for Copy, and so on.
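Those built-in commands can also be wired to your own buttons in XAML. Here is a minimal sketch (btnBold is a name I have made up) that assumes the RichTextBox1 from the earlier markup:

```xml
<Button Name="btnBold" Content="B"
        Command="EditingCommands.ToggleBold"
        CommandTarget="{Binding ElementName=RichTextBox1}" />
```

Setting CommandTarget is what routes the command to the RichTextBox even when the button itself has focus.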

Saving Content to an RTF File

Once you’ve made any changes you want, you can save the edited version back to file. Saving is almost exactly the same procedure as the earlier plain text (.txt) file save:

Private Sub btnSave_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
    ' Save the RichTextBox content back to the RTF file
    If File.Exists(DemoRTFFile) Then
        ' FileMode.Create truncates the existing file, so a shorter document
        ' doesn't leave stale bytes behind at the end
        fs = New FileStream(DemoRTFFile, FileMode.Create, FileAccess.Write)
        Using fs
            Dim RTBText As New TextRange(RichTextBox1.Document.ContentStart, RichTextBox1.Document.ContentEnd)
            RTBText.Save(fs, DataFormats.Rtf)
        End Using
    End If
End Sub

You’ll see that the key is the now familiar TextRange object that represents all the content we are interested in.     The Save method of the TextRange this time uses the Rtf data format.


Apart from some subtle differences, the overall approach to using the RTF format is very similar to that for plain text.   The built-in commands I’ve listed above will work with both formats in the RichTextBox.  The difference is that when you save back to a plain text file you will lose any of the bold, italic, underline features.   In the next blog, I’ll pull the two strands together and look at loading a txt file, making some formatting changes and saving as rich text.  At the same time, we’ll look at some further benefits of using commands in the RichTextBox.

Question: – 8(b) Differentiate DAO, RDO and ADO by discussing their important properties.

Answer: – Data Access
Data access is a feature of Visual Basic that allows you to access and manipulate any database from Visual Basic. The database may be either MS-Access database or FoxPro database or it may also be any of the relational databases such as Oracle. You can develop applications in Visual Basic that can access data in a database. The database may be on the same machine as the application or it may be on database server that is far away from Visual Basic application. Whatever may be the case, you can access and manipulate the data using applications written in Visual Basic.

The following are the various ways of accessing a database.

Data Access Objects (DAO)
This is an object model that has a collection of objects using which you can access a database. This model gives complete control on the database. This model uses Jet Engine, which is the native database engine used by Visual Basic and MS-Access. This was the first model to be used in Visual Basic. Though it is possible to access any database using this, it is particularly suitable for MS-Access database and not suitable for ODBC data sources such as Oracle and MS-SQL Server. So, Microsoft later introduced RDO.

Remote Data Objects (RDO)
These objects are used only to access ODBC data sources such as Oracle, typically databases that reside on a remote machine (a database server). This object model has fewer objects than DAO and is more suitable for accessing remote databases. We will discuss more about RDOs in chapter 18.

ActiveX Data Objects (ADO)
Microsoft has introduced a new object model called ActiveX Data Objects (ADO), which is based on ActiveX technology, for the first time in Visual Basic 6.0. This object model has very few objects and it is based on OLE DB interface. OLE DB interface is a new interface (replacing ODBC and others), through which you can access data of all formats in the same manner. ADO uses OLE DB providers to access the data. That means each database is accessed through OLE DB provider. And ADO provides the programming framework to access OLE DB. ADO is also much easier to deal with.
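As a hedged sketch of what ADO code typically looks like in VB6 (the Jet provider string and the database path are assumptions for illustration only, and the project needs a reference to the Microsoft ActiveX Data Objects library):

```vb
Dim cn As ADODB.Connection
Dim rs As ADODB.Recordset

' Open the database through its OLE DB provider (path is hypothetical)
Set cn = New ADODB.Connection
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Temp\Biblio.mdb"

' Open an updatable recordset over the connection
Set rs = New ADODB.Recordset
rs.Open "SELECT * FROM Authors", cn, adOpenKeyset, adLockOptimistic

Do While Not rs.EOF
    Debug.Print rs!Author
    rs.MoveNext
Loop

rs.Close
cn.Close
```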

Differentiate DAO, RDO and ADO:-

DAO: Data Access Objects came out with Visual Basic 4. It was the first of the three models to appear, and it is also the slowest; today it is recommended that you use ADO instead for, among other things, performance gains in most database operations.

RDO: Remote Data Objects came out with Visual Basic 5. It was designed to allow remote connections to databases on other PCs and on database servers, which it did, although in many projects there was a noticeable performance cost. It did the job, but ADO is faster at it.

ADO: ActiveX Data Objects. Of the three, this object hierarchy is the fastest available to VB (aside from direct ODBC calls, which can be faster still) and, in my experience, the safest for the data. Other server setups might behave differently, but ADO is generally the best of these three technologies.

Question: – 9(a) Explain the steps of creating and using ActiveX document with an example.
Answer: – ActiveX Document Overview

ActiveX documents enable you to develop Visual Basic 5 applications that can be used in container applications, such as Microsoft Internet Explorer or Microsoft Outlook. An ActiveX document is not a traditional standard word processing document, such as a Microsoft Word document or a Lotus Word Pro document. Instead, think of an ActiveX document as a Visual Basic Internet application. ActiveX documents are designed and created in the Visual Basic 5 development environment and can use ActiveX controls and forms.

You might be confused by the word document in ActiveX document instead of application. Step back for a moment and think about a Microsoft Excel spreadsheet used to compute a tax return. The spreadsheet contains the formulas and data required to generate a tax return. The spreadsheet offers application-like functionality because it can be used to perform tasks such as computing tax returns. The spreadsheet alone is not an application, however; it requires the application Microsoft Excel to create and use the spreadsheet. You can give your friends a copy of the tax return spreadsheet, and they can do their taxes as long as they have Microsoft Excel. One other point: The Excel spreadsheet also can be viewed in other applications that act as containers–for example, a Visual Basic application using the OLE2 custom control. An ActiveX document created in Visual Basic 5 is similar to the Excel spreadsheet. By itself, the ActiveX document is not an application. With a valid container application and ActiveX component, the ActiveX document performs like an application.

When and Why to Use ActiveX Documents

ActiveX documents are a really cool Visual Basic 5 feature, but those of you who program for a living and want to try out this new technology will ultimately face the manager afraid of technology or the internal software police who must study every technology to death (until it’s old technology) and officially approve it before anyone is allowed to use it. How do you justify using ActiveX documents? I believe that the strength of ActiveX documents will prove to be in an intranet environment as opposed to an Internet environment. In an intranet environment, ActiveX documents enable you to perform these tasks:

  • Viewing and modifying corporate databases using ODBC, DAO, RDO, and data-bound ActiveX controls
  • Providing the user interface tier in a three-tier architecture application

If you are thinking that these tasks are identical to the tasks you currently use Visual Basic and other development tools to perform, you are correct. But the advantage that ActiveX documents offer over current development tools is the buzzword of the late 1990s: lower-cost ownership. ActiveX documents are easier to install and maintain than existing client/server applications, which results in a lower cost of ownership. ActiveX documents install themselves from a central code base, for example. No information system personnel must go from PC to PC (or use software such as Microsoft SMS) to perform software installations. When a new employee starts work, instead of ensuring that he has all the proper applications, you can have them access a URL, and the application installs itself. When the ActiveX document requires a revision, you just have to change the software in one location. No need to worry about upgrading a user to a newer version or that a user is working with an older version. When a user accesses the revision via a Web browser, the ActiveX document automatically upgrades itself.

The ease of software distribution via ActiveX documents makes ActiveX documents very attractive in an intranet environment. If your company executes all your applications from a central directory on a LAN instead of installing them on users’ PCs, you probably are thinking that ActiveX documents will not cut your cost of ownership because your applications are installed in only one location anyway. In this type of environment, ActiveX documents offer better performance because running applications from a LAN is always slower than executing them from a client PC. Also, you have the added benefit of no software distribution requirements. And even though you might have all your applications installed on a LAN, IS personnel are often required to run minimum setup and installation procedures on client machines when giving them access to new applications.

ActiveX documents are not restricted to the intranet and can be used as well on the Internet. You might want to use an ActiveX document when you find HTML and VBScript lacking in functionality for a complex client form. You might prefer to use ActiveX documents instead of a standard HTML page because of the powerful features of Visual Basic 5’s design environment and integrated debugger compared to the current HTML and scripting environment.

NOTE: You cannot use ActiveX documents over the Internet to interact with corporate databases using standard Visual Basic 5 controls and ODBC (which can be performed in an intranet). Microsoft has introduced a technology called Advanced Data Connector (ADC), which enables you to use bound data connections to a corporate database via the Internet (HTTP). ADC is covered in Chapter 23, “MS SQL Server Internet Capabilities.”

Creating an ActiveX Document

ActiveX documents are easy to build and test. This section will give you a detailed walkthrough of the steps required to create an ActiveX document. You will learn about the primary object that makes up an ActiveX document: the UserDocument object.

TIP: You will get the most out of the next section if you follow along and perform each step in Visual Basic 5. The database application created in this section is not included on the CD-ROM that accompanies this book.

The UserDocument

Before going over the steps required to create an ActiveX document, you should understand the primary object used to create ActiveX documents: the UserDocument. The UserDocument is the equivalent of a Visual Basic form in a standard client/server project. Like a standard Visual Basic form, the UserDocument object has events, properties, methods, and ActiveX controls (except the OLE2 control) that can be placed on the UserDocument object.

Steps to Create an ActiveX Document

To create an ActiveX document, follow these steps:

  1. Create a new Visual Basic project by selecting File|New Project from the Visual Basic menu. The Visual Basic New Project dialog box, shown in Figure 15.2, appears. Double-click the ActiveX Document DLL icon in the New Project dialog box.

NOTE: An ActiveX document’s associated ActiveX components can be created as DLLs or EXEs. You will learn more about the differences between an ActiveX DLL and EXE in Chapter 22, “Writing Server-Side Applications with VB 5 ActiveX Components.” When developing ActiveX documents, it is important to know that an ActiveX DLL will provide the best performance. One drawback of an ActiveX DLL is that when you use Internet Explorer as the ActiveX document container, an ActiveX document using an ActiveX DLL cannot display any modeless forms (all forms must be modal). An ActiveX document using an EXE component can display modeless forms in Internet Explorer.

The new project contains a single UserDocument object called UserDocument1.

  2. Double-click UserDocument1 in the Visual Basic Project box. UserDocument1 appears, as shown in Figure 15.3. Notice that the UserDocument object resembles a standard Visual Basic form. Like a standard Visual Basic form, you can set UserDocument properties by using the standard Properties dialog box.
  3. Add the following ActiveX controls to UserDocument1: a data control and three textboxes. UserDocument1 should look similar to Figure 15.4.
  4. Set the data control’s DatabaseName property to the Microsoft Access database that comes with Visual Basic 5, Biblio.mdb. Biblio.mdb is located in the directory in which Visual Basic 5 is installed.
  5. Set the data control’s RecordSource property to Authors (the Authors table in the Biblio.mdb database).
  6. Set the DataSource property on all three textbox controls to Data1.
  7. Set the textbox controls’ DataField property to Au_ID for Text1, Author for Text2, and Year Born for Text3.
You now have constructed an ActiveX document.
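The document can also respond to its container through UserDocument events. A minimal hedged sketch, reusing the Data1 control from the walkthrough above:

```vb
Private Sub UserDocument_Show()
    ' Fires each time the ActiveX document is displayed in its container,
    ' for example when the user navigates to it in Internet Explorer
    Data1.Refresh
End Sub
```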

Question: – 9(b) Explain steps of database connectivity with DAO. How a record in record set is updated and deleted.

Answer: – Data Access Object (DAO):

The Microsoft Data Access Object (DAO) is an approach of database programming which is similar to ODBC. In this approach instead of using CRecordset and CDatabase, we use CDaoRecordset and CDaoDatabase.

Features of DAO are –

  1. DAO is a set of COM interfaces. These interfaces are sets of pure virtual functions.
  2. The required COM component is located in DAO350.DLL.
  3. It has support for the Jet database engine.
  4. A Visual Basic for Applications (VBA) Automation controller can use DAO objects and make use of the DAO library.
  5. The MFC classes are used to implement the concept of DAO. The MFC database classes for DAO are:
  • CDaoDatabase: An interface for using a database.
  • CDaoWorkspace: An interface for managing a single user’s connection to the database.
  • CDaoRecordset: An interface for accessing records.
  • CDaoTableDef: An interface for manipulating a base table.
  • CDaoQueryDef: An interface used to execute queries on the database.

Types of Database that can be opened with DAO-

  1. Access Database
  2. ODBC Database Source
  3. ISAM-Type Database Source
  4. External Tables
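The same DAO hierarchy can also be exercised directly from Visual Basic, not only through the MFC classes. A minimal hedged sketch (the .mdb path is an assumption) that opens an Access database via the default workspace and lists its tables:

```vb
Dim ws As DAO.Workspace
Dim db As DAO.Database
Dim td As DAO.TableDef

Set ws = DBEngine.Workspaces(0)                 ' Default Jet workspace
Set db = ws.OpenDatabase("C:\Temp\Biblio.mdb")  ' Access (.mdb) database

For Each td In db.TableDefs
    Debug.Print td.Name                         ' Print each table's name
Next

db.Close
```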


Recordset Object

·         The Recordset object is an important concept.  When we set the RecordSource property (either select a table from the database or form a virtual table via a query), the data control (using the Jet engine) retrieves the needed records and places them in the Recordset object for our use.  We will see that this object has its own properties and methods for our use.


·         There are three types of recordsets, established via the RecordsetType property:

Table                         Representation of a native database table (not formed via a query).  You can add, change, or delete records.

Dynaset                    The default type, a Dynaset is formed as the result of a database query.  You can add, change, or delete records from the underlying table(s).  This is the most flexible Recordset type.

Snapshot                  A static copy of records that cannot be updated.  Used if you are just viewing or searching a database table.

Add new record to DAO Recordset

To add a new record, a recordset has to be available. Adding a record requires:

  1. AddNew: Start inserting a record
  2. Set the values of the fields of the record being created
  3. Update: Finalize the adding

Dim rstCategories As DAO.Recordset

Set rstCategories = CurrentDb.OpenRecordset(Name:="Categories", Type:=RecordsetTypeEnum.dbOpenDynaset)

With rstCategories
    .AddNew
    ![Category Name] = "Better software"
    !Description = "5 star rated"
    .Update
End With


  1. If a field name contains a space you must put it between square brackets.
  2. If you add a record to a recordset of type dynaset, the new record appears at the end of the Recordset, regardless of how the Recordset is sorted. To force the new record to appear in its properly sorted position, you can use the Requery method.
Read values from record

To read the field values from a record you first have to make it the current record. Subsequently the value of a field can be obtained either using the .Fields method or its shorter equivalents:

With rstCategories
    lng = !CategoryID
    str1 = ![Category Name]
    str2 = .Fields("Description")
End With

Edit a record in a DAO Recordset

To edit a record in a recordset it first has to be made the current record. After that, changing the values of fields of a record requires:

  1. Edit: Indicate that the current record is to be edited
  2. Set the values of the fields of the record being edited
  3. Update: Finalize the edit

With rstCategories
    .Edit
    ![Category Name] = "Best software"
    .Update
End With

Move through a DAO Recordset – make record current

Moving through a recordset changes what is the ‘current’ record.

Find a record

The most direct way to move to a specific record is using the FindFirst method.

With rstCategories
    .FindFirst "CategoryName = 'better software'"
    If .NoMatch Then
        ' No record satisfied the criteria; handle that case here
    End If
End With

For best performance, the criteria should be either in the form “field = value”, where field is an indexed field in the underlying base table, or “field LIKE prefix”, where field is an indexed field in the underlying base table and prefix is a prefix search string (for example, “ART*”).

After having found the record you can read or change the record’s field values as explained under Edit a record in a DAO Recordset.

Processing all records

Use the Move methods to move from record to record without applying a condition. When you open a Recordset, the first record is current. Using any of the Move methods (MoveFirst, MoveLast, MoveNext, or MovePrevious) causes an error if the recordset has no records, so you should test this condition before using a Move method. If, as usually is the case, you use the MoveNext in a loop as below this test is done with .EOF.

Do While Not rst.EOF
    ' Process the current record here
    rst.MoveNext
Loop

Delete a record

If you want to delete a record you first have to move to it (see above), making it the current record. After that, simply call the Delete method. When you use the Delete method, the Microsoft Access database engine immediately deletes the current record without any warning or prompting. Deleting a record does not automatically cause the next record to become the current record; to move to the next record you must use the MoveNext method.
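Putting those pieces together, a hedged sketch of deleting a record, reusing the rstCategories recordset and the category name from the earlier examples:

```vb
With rstCategories
    .FindFirst "CategoryName = 'Best software'"
    If Not .NoMatch Then
        .Delete      ' The current record is removed immediately
        .MoveNext    ' The deleted record is no longer valid, so move off it
    End If
End With
```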

This article is contributed by Tarun Jangra.


Visual Language Programming (May-2017) Unit-III

Question: -6(a) What is a CoolBar? How is it created and used? Explain its important features.

Answer: – CoolBar control is a container control, able to host child controls. It consists of a collection of one or more resizable regions known as bands. Each band can host a single child control.

Using the CoolBar control is the same as using most other ActiveX controls, with the following exceptions.

Adding Controls to a CoolBar

The method for adding child controls to a CoolBar control is slightly different from that of adding controls to other containers. The control must first be associated with a Band object; there is a limit of one child control per band.

To add a control to a CoolBar:

  1. With the CoolBar control selected, select a control from the ToolBox and draw it on the CoolBar.
  2. Open the Property Page for the CoolBar and select the Bands tab.
  3. Use the Index buttons to select the index of the Band object on which you want the control to appear.
  4. Select the control from the Child list.

The child control will then move and resize along with the Band object at run time.

If you simply add a control without associating it to a band, it will appear as a floating control over the CoolBar at run time. To avoid this situation, set the Visible property of the control to False.

You can have additional controls contained on a CoolBar and swap them in and out at run time using the Child property of the Band object.
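Swapping band children at run time might look like this hedged sketch, where txtNotes and lstItems are hypothetical controls already drawn on the CoolBar:

```vb
Private Sub SwapBandChild()
    ' Hide the band's current child, then attach a different control to band 1
    CoolBar1.Bands(1).Child.Visible = False
    Set CoolBar1.Bands(1).Child = lstItems
    lstItems.Visible = True
End Sub
```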

Displaying Pictures on a CoolBar

The CoolBar control provides several properties that allow a great deal of flexibility in displaying pictures.

When you assign an image to the Picture property of the CoolBar control, that image will be tiled seamlessly across all bands, behind any child controls. By setting the FixedBackground property of a Band object to False, the Image will be tiled within that band.

The Band object has a Picture property that allows you to display a different background picture when the UseCoolbarPicture property is set to False. The Band object also has an Image property which can be used to display an icon to the right of the move handle.

The EmbossPicture property can be used to force an image to be displayed in two colors, similar to the Internet Explorer toolbar. The two colors to be used are determined by the EmbossHighlight and EmbossShadow properties.

When the EmbossPicture property is set to True, the image assigned to the Picture property will be dithered to the EmbossHighlight and EmbossShadow colors. The dithering process uses a fixed threshold to determine which colors in the image will be mapped to the highlight and shadow colors. Choose images with a good definition between dark and light colors for best results.

Using Other Controls with the CoolBar 

There are certain limitations to the controls that can be used as child controls on a CoolBar control.

The CoolBar control can only host controls which expose a Window handle. Lightweight controls such as Label, Image, and Shape controls don’t expose a Window handle and can’t be used. Although you can place these controls on the CoolBar, they won’t appear at run time and won’t be listed in the Child list box on the Property Page.

Although each band in the CoolBar control can only host a single child control, you can get around this limitation by placing a container control (such as a PictureBox) on the band and hosting additional controls within that container. Keep in mind that if you do this, you will need to create the code to resize the nested controls in response to changes in the CoolBar.
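The resize code for nested controls can live in the container's Resize event. A hedged sketch, assuming a PictureBox named picContainer on the band hosting a hypothetical txtNotes control:

```vb
Private Sub picContainer_Resize()
    ' Keep the nested control filling the PictureBox as the band is resized
    txtNotes.Move 0, 0, picContainer.ScaleWidth, picContainer.ScaleHeight
End Sub
```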

Some controls may not behave as expected when contained within a CoolBar control. For example, a contained Toolbar control won’t paint properly when its Wrappable property is set to True and the CoolBar is resized. To avoid this problem, set the Wrappable property to False.

Question: -6(b) Explain various graphic functions available in VB with an example.

Answer: – There are four basic controls in VB6 that you can use to draw graphics on your form: the line control, the shape control, the image box and the picture box.

To draw a straight line, just click on the line control and then use your mouse to draw the line on the form. After drawing the line, you can change its color, width and style using the BorderColor, BorderWidth and BorderStyle properties.

Similarly, to draw a shape, click on the shape control and draw the shape on the form. The default shape is a rectangle, with the Shape property set at 0. You can change the shape to a square, oval, circle, rounded rectangle or rounded square by changing the Shape property’s value to 1, 2, 3, 4 or 5 respectively. In addition, you can change its background color using the BackColor property, its border style using the BorderStyle property, its border color using the BorderColor property, as well as its border width using the BorderWidth property.

The program in this example allows the user to change the shape by selecting a particular shape from a list of options from a list box, as well as changing its color through a common dialog box.

The objects to be inserted in the form are a list box, a command button, a shape control and a common dialog box. The common dialog box can be inserted by clicking on ‘Project’ on the menu and then selecting Microsoft Common Dialog Control 6.0 via its check box. After that, the Microsoft Common Dialog Control 6.0 will appear in the toolbox, and you can drag it onto the form. The list of items can be added to the list box through the AddItem method. The procedure for the common dialog box to present the standard colors is as follows:

CommonDialog1.Flags = &H1&
Shape1.BackColor = CommonDialog1.Color

The last line will change the background color of the shape by clicking on a particular color on the common dialog box as shown in the Figure 18.1 below:

The Code

Private Sub Form_Load()
List1.AddItem "Rectangle"
List1.AddItem "Square"
List1.AddItem "Oval"
List1.AddItem "Circle"
List1.AddItem "Rounded Rectangle"
List1.AddItem "Rounded Square"
End Sub
Private Sub List1_Click()
Select Case List1.ListIndex
Case 0
Shape1.Shape = 0
Case 1
Shape1.Shape = 1
Case 2
Shape1.Shape = 2
Case 3
Shape1.Shape = 3
Case 4
Shape1.Shape = 4
Case 5
Shape1.Shape = 5
End Select
End Sub
Private Sub Command1_Click()
CommonDialog1.Flags = &H1&
Shape1.BackColor = CommonDialog1.Color
End Sub
Figure 18.1 The color dialog                                             Figure 18.2: The Interface

18.3 PSet, Line and Circle Drawing Methods

Other than using the line and shape controls to draw graphics on the form, you can also use the Pset, Line and Circle methods to draw graphics on the form.

18.3.1 The PSet Method

The PSet method draws a dot on the screen. It takes the syntax

PSet (x, y), color

(x, y) are the coordinates of the point and color is its color. To specify the color, you can use a color code or one of the standard VB color constants such as vbRed, vbBlue, vbGreen and so on. For example, PSet (100, 200), vbRed will display a red dot at the (100, 200) coordinates.


The PSet method can also be used to draw a straight line on the form. The procedure is

For x = a To b
    PSet (x, x), color
Next x

This procedure will draw a line starting from the point (a, a) to the point (b, b). For example, the following procedure will draw a magenta line from the point (0, 0) to the point (1000, 1000).

For x = 0 To 1000
    PSet (x, x), vbMagenta
Next x

18.3.2 The Line Method

Although the PSet method can be used to draw a straight line on the form, it is a little slow. It is better to use the Line method if you want to draw a straight line faster. The format of the Line command is shown below. It draws a line from the point (x1, y1) to the point (x2, y2), and the color constant determines the color of the line.

Line (x1, y1)-(x2, y2), color

For example, the following command will draw a red line from the point (0, 0) to the point (1000, 2000).

Line (0, 0)-(1000,2000), VbRed

The Line method can also be used to draw a rectangle. The syntax is

Line (x1, y1)-(x2, y2), color, B

The four corners of the rectangle are (x1, y1), (x2, y1), (x1, y2) and (x2, y2).

Another variation of the Line method is to fill the rectangle with a certain color. The syntax is

Line (x1, y1)-(x2, y2), color, BF

If you wish to draw the graphics in a picture box, you can use the following syntaxes

Picture1.Line (x1, y1)-(x2, y2), color
Picture1.Line (x1, y1)-(x2, y2), color, B
Picture1.Line (x1, y1)-(x2, y2), color, BF
Picture1.Circle (x1, y1), radius, color

Example 18.3 The Bar Graph Plotter

We shall use the knowledge we gained from the Line method to create a bar graph. In this program, we insert a picture box for drawing the bar graph. In addition, we insert six text boxes for accepting the user input. We also insert two command buttons, one for drawing and one for resetting. Besides that, we need to define a new origin using the Scale method; otherwise, the bar graph will be upside down. The code is

Picture1.Scale (0, 5000)-(5000, 0)
The Code
Private Sub Command1_Click()
Dim sale1, sale2, sale3, sale4, sale5, sale6 As Integer

sale1 = Val(Txt_Jan.Text)
sale2 = Val(Txt_Feb.Text)
sale3 = Val(Txt_Mac.Text)
sale4 = Val(Txt_Apr.Text)
sale5 = Val(Txt_May.Text)
sale6 = Val(Txt_Jun.Text)

Picture1.Line (100, 0)-(600, sale1 * 50), vbRed, BF
Picture1.Line (700, 0)-(1200, sale2 * 50), vbRed, BF
Picture1.Line (1300, 0)-(1800, sale3 * 50), vbRed, BF
Picture1.Line (1900, 0)-(2400, sale4 * 50), vbRed, BF
Picture1.Line (2500, 0)-(3000, sale5 * 50), vbRed, BF
Picture1.Line (3100, 0)-(3600, sale6 * 50), vbRed, BF

End Sub

Private Sub Command2_Click()
Txt_Jan.Text = ""
Txt_Feb.Text = ""
Txt_Mac.Text = ""
Txt_Apr.Text = ""
Txt_May.Text = ""
Txt_Jun.Text = ""


End Sub

Private Sub Form_Load()
'To redefine the coordinates of the origin
Picture1.Scale (0, 5000)-(5000, 0)

End Sub

The Output

18.3.3 The Circle Method

The Circle method draws a circle, ellipse, or arc on an object such as a form or a picture box. The syntax is:

object.Circle  (x, y), radius, color, start, end, aspect


  • (x, y) indicates the coordinates of the center of the circle
  • radius is the radius of the circle
  • color indicates the color of the circle's outline
  • start and end indicate the starting position and ending position, in radians, of an arc or a sector
  • aspect indicates the aspect ratio of the circle. The default value is 1.0, which yields a perfect (non-elliptical) circle.

For example, the following procedure draws a red circle of radius 500, centered at the point (400, 400):

Form1.Circle (400, 400), 500, vbRed
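Building on the syntax above, a short sketch showing a full circle, an arc (using the start and end angles in radians), and an ellipse (using the aspect argument); the coordinates and colors are arbitrary choices for illustration:

```vb
' Draws a full circle, a quarter arc, and an ellipse on the form.
Private Sub Form_Click()
    Const PI = 3.14159265
    Circle (1000, 1000), 500, vbRed              ' full circle
    Circle (2500, 1000), 500, vbBlue, 0, PI / 2  ' arc from 0 to 90 degrees
    Circle (4000, 1000), 500, vbGreen, , , 2     ' ellipse, taller than wide
End Sub
```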

Question: -7(a) What is the use of tab strip control in VB? Explain how tabs are added and deleted at run time.

Answer:- The TabStrip can be tricky because you may expect it to be a container, but it's not. If you want to design tabs on a form where each tab's content looks different, use the SSTab control that ships with VB.

The TabStrip control, in my view, is useful mainly where you are displaying the same type of thing within the control but you need different ‘headings’.

For example, imagine you have a TabStrip and it contains boxes to enter an address. Your tabs might say “Home”, “Work”, “Parent’s”. Depending on the tab that is clicked, you display a different address, but the basic layout is always the same.

To add tabs, simply use TabStrip1.Tabs.Add …

To work with the selected tab, use TabStrip1.SelectedTab

Remember that Tabs start at 1, not 0.

Bear in mind that some VB controls, such as a Label, will always disappear behind a TabStrip and can’t be forced onto the top of it, so if you want to draw labels and things on your TabStrip you might want to place them in a PictureBox or SSPanel container first, and then stick that inside the TabStrip control.
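Putting the points above together, a minimal sketch of adding and deleting tabs at run time, assuming the Microsoft Windows Common Controls TabStrip; the captions are arbitrary:

```vb
' Adds tabs at run time, then removes one (tab indexes are 1-based).
Private Sub Form_Load()
    TabStrip1.Tabs.Add , , "Home"      ' optional index and key, then caption
    TabStrip1.Tabs.Add , , "Work"
    TabStrip1.Tabs.Add , , "Parent's"
    TabStrip1.Tabs.Remove 2            ' deletes the second ("Work") tab
End Sub
```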

To add a tab programmatically (using the .NET TabControl)

  • Use the Add method of the TabPages property.


Dim title As String = "TabPage " & (TabControl1.TabCount + 1).ToString()
Dim myTabPage As New TabPage(title)
TabControl1.TabPages.Add(myTabPage)

To remove a tab programmatically

  • To remove selected tabs, use the Remove method of the TabPages property.


  • To remove all tabs, use the Clear method of the TabPages property.

' Removes the selected tab:
TabControl1.TabPages.Remove(TabControl1.SelectedTab)

' Removes all the tabs:
TabControl1.TabPages.Clear()

Question: -7(b) Explain RGB and other color related methods with an example.

Answer: – Returns a Long whole number representing an RGB color value.

Syntax: RGB(red, green, blue)

The RGB function syntax has these named arguments:

Part Description
red Required; Variant (Integer). Number in the range 0-255, inclusive, that represents the red component of the color.
green Required; Variant (Integer). Number in the range 0-255, inclusive, that represents the green component of the color.
blue Required; Variant (Integer). Number in the range 0-255, inclusive, that represents the blue component of the color.

Remarks Application methods and properties that accept a color specification expect that specification to be a number representing an RGB color value. An RGB color value specifies the relative intensity of red, green, and blue to cause a specific color to be displayed. The value for any argument to RGB that exceeds 255 is assumed to be 255. The following table lists some standard colors and the red, green, and blue values they include:

Color Red Value Green Value Blue Value
Black 0 0 0
Blue 0 0 255
Green 0 255 0
Cyan 0 255 255
Red 255 0 0
Magenta 255 0 255
Yellow 255 255 0
White 255 255 255

The RGB color values returned by this function are incompatible with those used by the Macintosh operating system. They may be used within the context of Microsoft applications for the Macintosh, but should not be used when communicating color changes directly to the Macintosh operating system.


This example shows how the RGB function is used to return a whole number representing an RGB color value. It is used for those application methods and properties that accept a color specification. The object MyObject and its property are used for illustration purposes only. If MyObject does not exist, or if it does not have a Color property, an error occurs.

Dim RED, I, RGBValue, MyObject
RED = RGB(255, 0, 0)                ' Return the value for Red.
I = 75                              ' Initialize offset.
RGBValue = RGB(I, 64 + I, 128 + I)  ' Same as RGB(75, 139, 203).
MyObject.Color = RGB(255, 0, 0)     ' Set the Color property of MyObject to Red.

Other Color Methods

In .NET you can also assign a named color directly. For example, in VB:

textBox1.BackColor = Color.Linen

But what if you want to display a color that is not one of the named colors? Then you can use the Color.FromArgb method to define your color by specifying the red, green, and blue values.

In VB:

textBox1.BackColor = Color.FromArgb(250, 245, 235)

Ah, but what if you have the color defined as a hexadecimal value? Well, you could use Bing/Google to find a converter to convert it to RGB. Or you can use the ColorTranslator.FromHtml method.

In VB:

textBox3.BackColor = ColorTranslator.FromHtml("#FAF9F9")

This article is contributed by Tarun Jangra. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.




Visual Language Programming (May-2017) Unit-II

Question: -4(a) How data is coordinated between multiple forms? Also write a pseudo code to add and load new forms at run time.

Answer:- Coordinating data between multiple forms

If you want Form2 to be able to communicate with Form1, you need to supply a reference to Form1. Once you do this, you will be set up for two-way communication, as both forms will be holding a reference to the other. The simplest way to accomplish this is to add a public variable (of type Form1) to Form2, like this:

Public Class Form2

Inherits System.Windows.Forms.Form

Public myCaller As Form1

Then, right after you create an instance of Form2 in the button click event, you can set this property.

Listing 3.21 You Can Pass a Form Reference with a Property

Dim myForm As Form2

Private Sub Button1_Click( _
    ByVal sender As System.Object, _
    ByVal e As System.EventArgs) _
    Handles Button1.Click

    If myForm Is Nothing Then
        myForm = New Form2
        myForm.myCaller = Me
    End If

    myForm.Show()

End Sub

If code in Form2 needs to access Form1, it can now do so through the myCaller variable. Add a button to Form2 and put this code into it
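The snippet for Form2's button is not shown in the original; a minimal sketch could look like the following, where the button name and the property being changed are illustrative assumptions:

```vb
' Inside Form2: uses the myCaller reference to reach back into Form1.
Private Sub Button1_Click( _
    ByVal sender As System.Object, _
    ByVal e As System.EventArgs) _
    Handles Button1.Click

    ' Change the caller's title bar to demonstrate two-way communication.
    myCaller.Text = "Updated from Form2"
End Sub
```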

Create a Windows Forms Application

  1. Start Visual Studio .NET or Visual Studio 2005 or a later version, and create a new Visual C# Windows Application project named WinControls. Form1 is added to the project by default.
  2. Double-click Form1 to create and view the Form1_Load event procedure.
  3. Add private instance variables to the Form1 class to work with common Windows controls. The Form1 class starts as follows:

public class Form1 : System.Windows.Forms.Form
{
    private TextBox txtBox = new TextBox();
    private Button btnAdd = new Button();
    private ListBox lstBox = new ListBox();
    private CheckBox chkBox = new CheckBox();
    private Label lblCount = new Label();

    //Other code.
}

Note The code should be changed in Visual Studio 2005 or in Visual Studio 2008. When you create a Windows Forms project, Visual C# adds one form to the project by default. This form is named Form1. The two files that represent the form are named Form1.cs and Form1.designer.cs. You write your code in Form1.cs. The Designer.cs file is where the Windows Forms Designer writes the code that implements all the actions that you performed by adding controls.

Customize Form and Control Properties

  1. Locate the Form1_Load event procedure, and add the following code to the procedure to customize the appearance of the Form control:

//Set up the form.
this.MaximizeBox = false;
this.MinimizeBox = false;
this.BackColor = Color.White;
this.ForeColor = Color.Black;
this.Size = new System.Drawing.Size(155, 265);
this.Text = "Run-time Controls";
this.FormBorderStyle = FormBorderStyle.FixedDialog;
this.StartPosition = FormStartPosition.CenterScreen;

2. Add the following code to the Form1_Load event procedure to customize the appearance of the Button control:

//Format controls. Note: Controls inherit color from parent form.
this.btnAdd.BackColor = Color.Gray;
this.btnAdd.Text = "Add";
this.btnAdd.Location = new System.Drawing.Point(90, 25);
this.btnAdd.Size = new System.Drawing.Size(50, 25);

3. Add the following code to customize the appearance of the TextBox control in Form1_Load:

this.txtBox.Text = "Text";
this.txtBox.Location = new System.Drawing.Point(10, 25);
this.txtBox.Size = new System.Drawing.Size(70, 20);

4. Add the following code to customize the appearance of the ListBox control in Form1_Load:

this.lstBox.Sorted = true;
this.lstBox.Location = new System.Drawing.Point(10, 55);
this.lstBox.Size = new System.Drawing.Size(130, 95);

5. Add the following code to customize the appearance of the CheckBox control in Form1_Load:

this.chkBox.Text = "Disable";
this.chkBox.Location = new System.Drawing.Point(15, 190);
this.chkBox.Size = new System.Drawing.Size(110, 30);

6. Add the following code to customize the appearance of the Label control in Form1_Load:

this.lblCount.Text = lstBox.Items.Count.ToString() + " items";
this.lblCount.Location = new System.Drawing.Point(55, 160);
this.lblCount.Size = new System.Drawing.Size(65, 15);

Add Controls to the Form

  1. Add the following code to add each object to the Controls array of the form at the end of Form1_Load:

//Add controls to the form.
this.Controls.Add(btnAdd);
this.Controls.Add(txtBox);
this.Controls.Add(lstBox);
this.Controls.Add(chkBox);
this.Controls.Add(lblCount);

2. Save the project.

Question:- 5(b) Explain Various drag and drop operations in VB along with events related to them.

Answer:- Drag & Drop

In a computer graphical user interface, drag and drop is the action of clicking on an object (a virtual object on screen) and dragging it to a different location on screen as required.

The basic sequence involved in drag & drop is

  1. Press & hold down, the button on the mouse or other pointing device to “grab” the object.
  2. “Drag” the object /cursor/ pointing device to the desired location
  3. “Drop” the object by releasing the button

Drag and drop is done programmatically and is widely used, for example, in creating objective-type questions.

Drag and Drop Events
A drag and drop event takes place between two objects. At one object a drag is initiated, and at the second object a drop takes place.

When the drag begins, the first object does not receive an event. VB silently handles the creation and movement of the drag icon. When the mouse is released, signalling a drop operation, the DragDrop event of the receiving control is fired and any code in that event is executed. If the DragDrop event has no code, then nothing else happens.

A second event is possible, which is the DragOver event. After a drag is initiated and at the moment the drag icon is moved over another control, that control’s DragOver event is fired. It’s only when the mouse button is released that the DragDrop event takes place on the receiving control.

Drag and drop can be very useful: users know how to use it, and they see it in most popular shrink-wrap applications. With that background, how can you not put it into your own application? Well, first of all, not every application can be made easier to use by simply tossing in some drag-and-drop features. In my own experience, I use drag and drop in only one or two out of every ten programs I write. It has to add significant value to be worth the time it takes to program.

There are only two events to worry about, DragDrop and DragOver, so it won't take long to cover them both. In this example, the events of a form (Form1) are examined:

Private Sub Form1_DragDrop(Source As Control, _

X As Single, Y As Single)

Private Sub Form1_DragOver(Source As Control, _

X As Single, Y As Single, State As Integer)

Both events share the common arguments Source, X, and Y, whereas the DragOver event includes the additional argument State.

  • Source
    This is a reference to the control which is being dragged. For example, if you drag a command button onto a form, Source refers to the command button. You would use Source.Caption to refer to the caption of the command button. You can access other properties or methods of the command button the same way, by using the argument in place of the actual command button's name.
  • X/Y
    X and Y are the position at which the drop took place. You may use that information in a variety of ways, but if it is used at all, it is typically used to reposition the control from which the drag and drop operation began.
  • State
    An integer value corresponding to whether the dragged object is entering or leaving the control or has simply moved its location within the control.
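A common use of the X and Y arguments is to reposition the dragged control at the drop point; here is a minimal sketch, assuming a control whose DragMode is set to automatic:

```vb
' Moves whatever control was dragged onto the form to the drop position.
Private Sub Form_DragDrop(Source As Control, X As Single, Y As Single)
    Source.Move X, Y
End Sub
```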

Drag and Drop Properties

The only two properties of a control which affect a drag and drop operation are:

  • DragMode There are two settings – automatic or manual. If automatic is chosen, then all you do is drag a control (hold down the mouse while moving the cursor) and VB will automatically create the drag icon. If manual is selected, then you must invoke the .Drag method of the control to begin the drag operation (including the display of the drag icon).
  • DragIcon VB provides a default icon (a small outline image) but you may also define the icon to be used while the drag is taking place.

Drag and Drop Methods

  • Drag
    The only method needed is appropriately called Drag. To use it, simply use code such as:

    Command1.Drag

    This will initiate a drag operation, but as in the automatic case you still must write any code that is executed when the drop action occurs.

Question:- 5(a) “VB has rich set of library function”. Justify the statement by giving a classification of various in-built function with syntax and their use.

Answer: – We will give the syntax, discussion and examples of each of the functions listed below as we go down the page. We didn't create these Access database functions, so most of the syntax and usage information is taken from the Microsoft Access Visual Basic help system. We have added our own notes and discussion to help clarify the functions where we can. Let's get started!

VBA Abs Function

The Visual Basic Abs function simply removes the sign from a number, returning a positive value. This is useful for calculating the absolute difference between values and adding those differences up: you would first apply the Abs function to each difference before adding the values.

This example uses the Abs function to compute the absolute value of a number.

Dim TheNumber
TheNumber = Abs(19.2)         ' Returns 19.2.
TheNumber = Abs(-19.2)        ' Returns 19.2.

Visual Basic Array Function for Microsoft Access

The Microsoft Access Array function establishes an array (list) of values. To create a constant array, you define the array name as a Variant. See below:

Dim myArray as Variant

myArray = Array(5,10,15,21)

You can then refer to an array element by its integer position, which starts at 0 by default (unless Option Base 1 is in effect):

MsgBox myArray(1)

The MsgBox above will display the number 10.

Visual Basic Asc Function

The VBA Asc function returns the ASCII character code for the 1st letter in the Asc argument:

TheNumber = Asc("A")             ' Returns 65.
TheNumber = Asc("a")             ' Returns 97.
TheNumber = Asc("Avalanche")     ' Returns 65.

Visual Basic Atn Function

The arctangent is used to calculate the angles of a right triangle. It works opposite of the tangent function: while tangent finds the ratio of the two sides of a right triangle when given an angle, arctangent finds the angle when given the ratio.

This example uses the Atn function to calculate the value of pi.

Dim pi
pi = 4 * Atn(1)         ' Calculates the numeric result for pi.

Microsoft Access Choose Function

Choose returns a value from the list of choices based on the value of index. If index is 1, Choose returns the first choice in the list; if index is 2, it returns the second choice, and so on.
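A short illustration of Choose; the list values here are arbitrary:

```vb
' Choose is 1-based: index 2 picks the second item in the list.
Dim dayName
dayName = Choose(2, "Mon", "Tue", "Wed")   ' Returns "Tue".
```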

Visual Basic Chr Function

The Visual Basic Chr function returns a string containing the character associated with the specified character code.


Dim MyChar
MyChar = Chr(65)         ' Returns A.
MyChar = Chr(97)         ' Returns a.

Visual Basic Command Function

The Visual Basic Command function returns the argument part of the command line used to invoke Visual Basic or a program developed with Visual Basic. The Visual Basic Command function is not available in Microsoft Office applications.

When Visual Basic is launched from the command line, any portion of the command line that follows /cmd is forwarded to the program as an argument. In the example below, cmdlineargs is the argument information returned by the Command function.

VB /cmd cmdlineargs

For applications created with Visual Basic and compiled, Command returns any arguments that appear after the name of the application on the command line. For example:

MyApp cmdlineargs
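A sketch of reading the argument string at startup; the handling shown (a simple message box) is an illustrative assumption:

```vb
' Reads any command-line arguments passed to the compiled program.
Sub Main()
    Dim args As String
    args = Command()           ' e.g. "cmdlineargs" from: MyApp cmdlineargs
    If Len(args) > 0 Then
        MsgBox "Started with: " & args
    End If
End Sub
```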

Visual Basic Conversion Functions


Function Return Type Range for expression argument
CBool Boolean Any valid string or numeric expression.
CByte Byte 0 to 255.
CCur Currency -922,337,203,685,477.5808 to 922,337,203,685,477.5807.
CDate Date Any valid date expression.
CDbl Double -1.79769313486231E308 to
-4.94065645841247E-324 for negative values; 4.94065645841247E-324 to 1.79769313486232E308 for positive values.
CDec Decimal +/-79,228,162,514,264,337,593,543,950,335 for zero-scaled numbers, that is, numbers with no decimal places. For numbers with 28 decimal places, the range is
+/-7.9228162514264337593543950335. The smallest possible non-zero number is 0.0000000000000000000000000001.
CInt Integer -32,768 to 32,767; fractions are rounded.
CLng Long -2,147,483,648 to 2,147,483,647; fractions are rounded.
CSng Single -3.402823E38 to -1.401298E-45 for negative values; 1.401298E-45 to 3.402823E38 for positive values.
CStr String Returns for CStr depend on the expression argument.
CVar Variant Same range as Double for numerics. Same range as String for non-numerics.

Question: – 5(b) Explain how File Open common dialog box can be used to open an image file in image control? Also write some important properties of it.

Answer: – The Open and Save As dialog boxes are commonly used for opening and saving files in most Windows applications. VBA makes these readily available for use in your applications, via the Microsoft Common Dialog Control. All you need do is add the control to the Toolbox in the VB Editor.

VBA is extremely powerful and can use many of the controls supplied with the Windows applications you have installed on your computer. A complete list of the controls available on your PC can be viewed from the Additional Controls dialog box. You can add any of these controls into your Toolbox, from which you can then drag and drop them onto UserForms in the normal way. These controls retain their full functionality, making them extremely reusable.

If you have to distribute your application, it’s best to stick with the plain-vanilla controls provided by default. Just because a control is present on your PC doesn’t mean it will be on others—unless the user has previously installed an application that uses this control.

Here are the steps for adding the Common Dialog Control (or any other controls listed on your PC) to the Toolbox. (I hope you have this control on your PC; otherwise, you’ll need to use a macro like GetNewFile to open a file and write a macro for saving a file.)

  1. Start a new project, open the VB Editor, and add a UserForm.
  2. Right-click any control from the Toolbox and choose Additional Controls from the shortcut menu. The dialog box shown in Figure 12.1 appears, containing its long list of all the controls available on your PC that can be added to the Toolbox.
  3. Scroll down the Available Controls list and select Microsoft Common Dialog Control version 6.0, as shown in Figure 12.1. The location of the file containing this control is displayed in the frame at the bottom. (The Caption property for this frame has changed to the selected item.)

Figure 12.1 Additional Controls dialog box

  4. Click OK. The CommonDialog control's icon appears at the bottom of the Toolbox:

If you want to remove the CommonDialog control from your Toolbox, simply right-click its Toolbox button and select Delete CommonDialog from the pop-up menu.

Using the CommonDialog Control

The CommonDialog control provides a set of common dialog boxes with appropriate controls already set up. These predefined dialog boxes are used throughout most Windows applications and will be familiar to you. In this section you’ll see how to utilize the Open and SaveAs dialog boxes contained in properties of this control.

Open Dialog Box

The Open dialog box (Figure 12.2) displays the list of files and folders available from your PC, or on a disk or CD in one of the PC’s drives.

Figure 12.2 Predefined Open dialog box

The CommonDialog control has its own set of properties and methods. For example, the ShowOpen method is used to display the Open dialog box, and the Filter property can be set to restrict the types of files that will be included in the list of files. Setting the Filter property in the Properties Window to that shown in Figure 12.3 will display “All Files (*.*)” in the Files of Type box.

The Filter property can also be set in code, as follows:
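The code that belongs here is not reproduced in the original; the Filter property takes pipe-delimited description/pattern pairs, as in this sketch (the file types chosen are arbitrary):

```vb
' Restrict the Open dialog to text files, with an "all files" fallback.
CommonDialog1.Filter = "Text Files (*.txt)|*.txt|All Files (*.*)|*.*"
CommonDialog1.FilterIndex = 1   ' show the first filter by default
```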

Exercise 12.2 shows the effect of setting the Filter property on the File Types list in the Open dialog box.

Exercise 12.2: Showing the Open Dialog Box

Listing 12.5 shows how to set up a more complex filtering system that contains a list of filters, and how to designate one of them to be displayed in the Files of Type box when the dialog box opens.

Here are the steps for using the CommonDialog control in an application:

  1. If you haven’t already done so, follow the steps given at the start of the section “Displaying Open and Save As Dialog Boxes” so that you have a project with a UserForm and the CommonDialog control in the Toolbox.
  2. Drag and drop a CommonDialog control onto the UserForm. You can place it anywhere you like, because the control is never visible while the application is running.

Figure 12.3 Properties window for CommonDialog1

You’ll find it impossible to resize the CommonDialog control; VBA won’t let you. When this control is placed on a UserForm during design time, it appears as a square button with its own little icon. When the UserForm is displayed to the user during run time, this control becomes invisible.

  3. Add a command button to the UserForm, make its Caption property Open, and make its Accelerator property O.
  4. Type the following statements into the command button's Click event procedure:

GetDefaultFile
Unload Me

  5. Enter the GetDefaultFile procedure (Listing 12.5) into the General Declarations section of the UserForm's Code window.
  6. Run the application and click Open. The Open dialog box shown in Figure 12.4 appears, with the new property values. Figure 12.5 shows the list of types specified in Line 2 of the GetDefaultFile procedure.

Figure 12.4 Open dialog box with the new property settings from Listing 12.5

Figure 12.5 List of file types produced by Filter property setting from Listing 12.1

Listing 12.5: GetDefaultFile Procedure
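The original listing is not reproduced here; a plausible sketch of such a procedure, assuming the CommonDialog1 control from the steps above (the dialog title and filter list are assumptions):

```vb
' Shows the Open dialog with a list of file types and reports the choice.
Sub GetDefaultFile()
    CommonDialog1.DialogTitle = "Select a File"
    CommonDialog1.Filter = _
        "Word Documents (*.doc)|*.doc|Text Files (*.txt)|*.txt|All Files (*.*)|*.*"
    CommonDialog1.FilterIndex = 2        ' "Text Files" shows in Files of Type
    CommonDialog1.ShowOpen
    If Len(CommonDialog1.FileName) > 0 Then
        MsgBox "You selected " & CommonDialog1.FileName
    End If
End Sub
```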




Visual Language Programming (May-2017) Unit-I

Question: -2(a)  What are the types of projects created in VB? Explain them.

Answer: -If you examine the Visual Basic project types, you see that many of them are different from what you are used to with VB6. Some of the major project types are:

  • Windows Application—This is a standard executable, in VB6 terminology. It is the way to create applications with a Windows interface, using forms and controls. This is as close to “your father’s VB” as you’ll get in VB.NET.
  • Class Library—This project type allows you to create classes that will be used in other applications. Think of it as similar to the COM components that you have been building, which VB6 called the ActiveX DLL and ActiveX EXE project types.
  • Windows Control Library—This project type is for creating what used to be called ActiveX controls. This type allows you to create new controls to be used in Windows applications.
  • Web Application—Goodbye, Visual InterDev. Goodbye, scripting languages on the server. Visual Basic now has Web Application projects, which use ASP.NET to create dynamic Web applications. These projects allow you to create HTML, ASP.NET, and VB files. Your Web applications move beyond the simple request/response mode of typical Web applications to be event-driven.
  • Web Service—If you’ve used VB6 to create COM components and then made them available over HTTP with SOAP, you understand the concept of Web Services. Web Service projects are components that you make available to other applications via the Web; the underlying protocol is HTTP instead of DCOM, and you are passing requests and receiving responses behind the scenes using XML. Some of the major promises of Web Service projects are that they are all standards-based and are platform-independent. Unlike DCOM, which was tied to a COM (that is, Windows) infrastructure, Web Service projects can be placed on any platform that supports .NET, and can then be called by any application using simple HTTP calls.
  • Web Control Library—As with Web Service projects, there’s no exact match back to VB6 for the Web Control Library projects. Thanks to the new Web Application projects in VB.NET, you can add controls to Web pages just like you would in a standard Windows Application, but VB.NET makes them HTML controls at runtime. You can design your own controls that can then be used by Web applications.
  • Console Application—Many of the Windows administrative tools are still console (or command-line, or DOS) applications. Previously, you didn’t have a good way to create these tools in VB, and you instead had to rely on C++. Now console applications are natively supported by VB.NET.
  • Windows Services—As with console applications, there was no good way to create Windows services in previous versions of VB. Windows services, of course, are programs that run in the background of Windows and can automatically start when the machine is booted, even if no one logs in.

These are the basic types of applications you can create. You can also create an empty project (for Windows applications, class libraries, and services) or an empty Web Application (for Web applications).

Question: -2(b) How dynamic arrays are created and modified in VB? Give an example.

Answer: –

An array is a consecutive group of memory locations that all have the same name and the same type. To refer to a particular location or element in the array, we specify the array name and the array element position number. Arrays are a Variant-type variable that you can use in VBA coding to store a list of data. Think of it as a mini-spreadsheet inside a single variable. You store data into an array by referring to a reference number that corresponds to the position of the piece of data.

Below is an example of an array that is holding all the month names within a year. Notice that the reference number starts at zero instead of one.
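The example referred to above is missing from the original; a sketch of such an array:

```vb
' A Variant array of month names; with the default Option Base, indexing starts at 0.
Dim months As Variant
months = Array("January", "February", "March", "April", "May", "June", _
               "July", "August", "September", "October", "November", "December")
MsgBox months(0)    ' Displays "January".
```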

Arrays occupy space in memory. The programmer specifies the array type and the number of elements required by the array so that the compiler may reserve the appropriate amount of memory. Arrays may be declared as Public (in a code module), module or local. Module arrays are declared in the general declarations using keyword Dim or Private. Local arrays are declared in a procedure using Dim or Static. Array must be declared explicitly with keyword “As”.

There are two types of arrays in Visual Basic namely:

Fixed-size array: The size of the array always remains the same; the size doesn't change during program execution.

Dynamic array: The size of the array can be changed at run time; the size changes during program execution.

Fixed-sized Arrays

When an upper bound is specified in the declaration, a fixed-size array is created. The upper limit should always be within the range of the Long data type.

Declaring a fixed-array

Dim numbers(5) As Integer

In the above illustration, numbers is the name of the array, and the number 5 included in the parentheses is the upper limit of the array. The above declaration creates an array with 6 elements, with index numbers running from 0 to 5.

If we want to specify the lower limit, then the parentheses should include both the lower and upper limit along with the To keyword. An example for this is given below.

Dim numbers(1 To 6) As Integer

In the above statement, an array of 6 elements is declared, with indexes running from 1 to 6.

A public array can be declared using the keyword Public instead of Dim as shown below.

Public numbers(5) As Integer

Multidimensional Arrays

Arrays can have multiple dimensions. A common use of multidimensional arrays is to represent tables of values consisting of information arranged in rows and columns. To identify a particular table element, we must specify two indexes: The first (by convention) identifies the element’s row and the second (by convention) identifies the element’s column.

Tables or arrays that require two indexes to identify a particular element are called two-dimensional arrays. Note that multidimensional arrays can have more than two dimensions. Visual Basic supports up to 60 array dimensions, but most people will not need more than two or three dimensions.

The following statement declares a two-dimensional 50-by-50 array within a procedure.

Dim AvgMarks ( 50, 50)

It is also possible to define the lower limits for one or both the dimensions as for fixed size arrays. An example for this is given here.

Dim Marks ( 101 To 200, 1 To 100)

An example for three dimensional-array with defined lower limits is given below.

Dim Details( 101 To 200, 1 To 100, 1 To 100)

Static and dynamic arrays

Basically, you can create either static or dynamic arrays. Static arrays must include a fixed number of items, and this number must be known at compile time so that the compiler can set aside the necessary amount of memory. You create a static array using a Dim statement with a constant argument:

' This is a static array.
Dim Names(100) As String

Visual Basic starts indexing the array with 0. Therefore, the preceding array actually holds 101 items.

Most programs don’t use static arrays because programmers rarely know at compile time how many items you need and also because static arrays can’t be resized during execution. Both these issues are solved by dynamic arrays. You declare and create dynamic arrays in two distinct steps. In general, you declare the array to account for its visibility (for example, at the beginning of a module if you want to make it visible by all the procedures of the module) using a Dim command with an empty pair of brackets. Then you create the array when you actually need it, using a ReDim statement:

' An array defined in a BAS module (with Private scope)
Dim Customers() As String

Sub Main()
' Here you create the array.
ReDim Customers(1000) As String
End Sub

If you’re creating an array that’s local to a procedure, you can do everything with a single ReDim statement:

Sub PrintReport()
' This array is visible only to the procedure.
ReDim Customers(1000) As String
' ...
End Sub

If you don’t specify the lower index of an array, Visual Basic assumes it to be 0, unless an Option Base 1 statement is placed at the beginning of the module. My suggestion is this: Never use an Option Base statement because it makes code reuse more difficult. (You can’t cut and paste routines without worrying about the current Option Base.) If you want to explicitly use a lower index different from 0, use this syntax instead:

ReDim Customers(1 To 1000) As String

Dynamic arrays can be re-created at will, each time with a different number of items. When you re-create a dynamic array, its contents are reset to 0 (or to an empty string) and you lose the data it contains. If you want to resize an array without losing its contents, use the ReDim Preserve command:

ReDim Preserve Customers(2000) As String

When you’re resizing an array, you can’t change the number of its dimensions nor the type of the values it contains. Moreover, when you’re using ReDim Preserve on a multidimensional array, you can resize only its last dimension:

ReDim Cells(1 To 100, 10) As Integer

ReDim Preserve Cells(1 To 100, 20) As Integer ' This works.
ReDim Preserve Cells(1 To 200, 20) As Integer ' This doesn't.

Finally, you can destroy an array using the Erase statement. If the array is dynamic, Visual Basic releases the memory allocated for its elements (and you can’t read or write them any longer); if the array is static, its elements are set to 0 or to empty strings.

You can use the LBound and UBound functions to retrieve the lower and upper indices. If the array has two or more dimensions, you need to pass a second argument to these functions to specify the dimension you need:

Print LBound(Cells, 1) ' Displays 1, lower index of 1st dimension
Print LBound(Cells) ' Same as above
Print UBound(Cells, 2) ' Displays 20, upper index of 2nd dimension
' Evaluate total number of elements.
NumEls = (UBound(Cells) - LBound(Cells) + 1) * _
(UBound(Cells, 2) - LBound(Cells, 2) + 1)
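The LBound/UBound arithmetic above can be sanity-checked outside VB. Here is a small Python sketch (Python is used purely for illustration) that computes an array's total element count from per-dimension bounds, using the Cells(1 To 100, 0 To 20) array from the example:

```python
def num_elements(bounds):
    """Total element count of a VB-style array.

    bounds: list of (lbound, ubound) pairs, one per dimension;
    each dimension holds UBound - LBound + 1 elements.
    """
    total = 1
    for lo, hi in bounds:
        total *= hi - lo + 1
    return total

# Cells(1 To 100, 0 To 20): 100 rows * 21 columns
print(num_elements([(1, 100), (0, 20)]))  # 2100
```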

Question: -3(a) What are procedures? What are the different types of procedures in VB? Also explain the modes of passing parameters to procedures.

Answer: –

A procedure is a block of code that performs some operation. The events we have been using so far are a special form of procedure known as an event procedure.
For example, associating code with a CommandButton to quit an application is a procedure.
The basic Syntax for a procedure is:
[Private | Public][Static] Sub procName ([arglist])
Parts of the Procedure
Part Description
Public Indicates that the procedure is available to all modules. If Option Private is used in the module, the procedure is not available to modules outside of the project.
Private Indicates that the procedure is only available to other procedures or functions in the current module or form.
Static Indicates that all variables declared within the procedure are retained, even when the procedure is out of scope.
procName The name of the procedure. Must be unique to the module if declared Private, otherwise unique to the project. The name of the procedure follows the naming rule for Variables.
arglist A list of variables passed to the procedure as arguments, and their data types. Multiple arguments are separated by commas. Arguments may be Optional, and may be Read Only.

The following example is a Private Procedure to print the sum of two numbers on the Form.

Private Sub printSum(ByVal x As Integer, ByVal y As Integer)
Debug.Print Str(x + y)
End Sub


Types of Procedures

Visual Basic uses several types of procedures:

  • Sub Procedures perform actions but do not return a value to the calling code.
  • Event-handling procedures are Sub procedures that execute in response to an event raised by user action or by an occurrence in a program.
  • Function Procedures return a value to the calling code. They can perform other actions before returning.

Some functions written in C# return a reference return value. Function callers can modify the return value, and this modification is reflected in the state of the called object. Starting with Visual Basic 2017, Visual Basic code can consume reference return values, although it cannot return a value by reference. For more information, see Reference return values.

  • Property Procedures return and assign values of properties on objects or modules.
  • Operator Procedures define the behavior of a standard operator when one or both of the operands is a newly-defined class or structure.
  • Generic Procedures in Visual Basic define one or more type parameters in addition to their normal parameters, so the calling code can pass specific data types each time it makes a call.
Passing Parameters by Value

This is the default mechanism for passing parameters to a method. In this mechanism, when a method is called, a new storage location is created for each value parameter. The values of the actual parameters are copied into them. So, the changes made to the parameter inside the method have no effect on the argument.

In VB.Net, you declare value parameters using the ByVal keyword. The following example demonstrates the concept:

Module paramByval

Sub swap(ByVal x As Integer, ByVal y As Integer)
Dim temp As Integer
temp = x ' save the value of x
x = y    ' put y into x
y = temp ' put temp into y
End Sub

Sub Main()
' local variable definition
Dim a As Integer = 100
Dim b As Integer = 200
Console.WriteLine("Before swap, value of a : {0}", a)
Console.WriteLine("Before swap, value of b : {0}", b)
' calling swap to exchange the values
swap(a, b)
Console.WriteLine("After swap, value of a : {0}", a)
Console.WriteLine("After swap, value of b : {0}", b)
End Sub

End Module

When the above code is compiled and executed, it produces the following result:

Before swap, value of a : 100
Before swap, value of b : 200
After swap, value of a : 100
After swap, value of b : 200

It shows that there is no change in the values, even though they were changed inside the function.

Passing Parameters by Reference

A reference parameter is a reference to a memory location of a variable. When you pass parameters by reference, unlike value parameters, a new storage location is not created for these parameters. The reference parameters represent the same memory location as the actual parameters that are supplied to the method.

In VB.Net, you declare the reference parameters using the ByRef keyword. The following example demonstrates this:

Module paramByref

Sub swap(ByRef x As Integer, ByRef y As Integer)
Dim temp As Integer
temp = x ' save the value of x
x = y    ' put y into x
y = temp ' put temp into y
End Sub

Sub Main()
' local variable definition
Dim a As Integer = 100
Dim b As Integer = 200
Console.WriteLine("Before swap, value of a : {0}", a)
Console.WriteLine("Before swap, value of b : {0}", b)
' calling swap to exchange the values
swap(a, b)
Console.WriteLine("After swap, value of a : {0}", a)
Console.WriteLine("After swap, value of b : {0}", b)
End Sub

End Module

When the above code is compiled and executed, it produces the following result:

Before swap, value of a : 100
Before swap, value of b : 200
After swap, value of a : 200
After swap, value of b : 100
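For comparison, the ByVal/ByRef behavior can be mimicked in Python (an analogy only; the function names below are hypothetical): rebinding a parameter inside a function leaves the caller's variables untouched, much like ByVal, while writing through a shared mutable container is visible to the caller, much like ByRef.

```python
def swap_by_value(x, y):
    # Rebinds the local names only; the caller's variables are
    # untouched, like VB's ByVal.
    x, y = y, x

def swap_by_reference(pair):
    # Writes through a shared mutable container; the caller sees
    # the change, similar in effect to VB's ByRef.
    pair[0], pair[1] = pair[1], pair[0]

a, b = 100, 200
swap_by_value(a, b)
print(a, b)        # 100 200 (unchanged)

values = [100, 200]
swap_by_reference(values)
print(values)      # [200, 100] (swapped)
```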

Question:- 3(b) Explain important components of the VB IDE which make it a powerful development framework.

Answer: – An Integrated Development Environment (IDE) is software that facilitates application development. In the context of .NET-based applications, Visual Studio is the most commonly used IDE. Some of the key features included are:

  • Single IDE for all .NET applications. Therefore no switching required to other IDEs for developing .NET applications
  • Single .NET solution for an application which has been built on code written in multiple languages
  • Code editor supporting Intellisense and code refactoring
  • Compilation from within the environment based on defined configuration options
  • Integrated debugger that works at source and machine level
  • Plug-in architecture that helps to add tools for domain specific languages
  • Customizable environment to help the user to configure the IDE based on the required settings
  • A built-in browser that helps to view content from the internet, such as help and source code, without leaving the IDE.

Visual Basic .NET (VB.NET) is an object-oriented computer programming language implemented on the .NET Framework. Although it is an evolution of classic Visual Basic language, it is not backwards-compatible with VB6, and any code written in the old version does not compile under VB.NET.

The following reasons make VB.Net a widely used professional language:

  • Modern, general purpose.
  • Object oriented.
  • Component oriented.
  • Easy to learn.
  • Structured language.
  • It produces efficient programs.
  • It can be compiled on a variety of computer platforms.
  • Part of .Net Framework.
Strong Programming Features VB.Net

VB.Net has numerous strong programming features that make it endearing to a multitude of programmers worldwide. Let us mention some of these features:

  • Boolean Conditions
  • Automatic Garbage Collection
  • Standard Library
  • Assembly Versioning
  • Properties and Events
  • Delegates and Events Management
  • Easy-to-use Generics
  • Indexers
  • Conditional Compilation
  • Simple Multithreading

The VISUAL BASIC IDE (Integrated Development Environment) is the environment in which its programs are written and executed. The term IDE is commonly used to describe the interface and environment for creating your application; it is called integrated because you can access all of the development tools you need for developing an application in one place.

VISUAL BASIC IDE contains different components. These components are:
Tool Bar:-

It provides quick access to commonly used commands in the programming environment. You click a button on the toolbar to carry out the action represented by that button. The Standard toolbar is displayed when you start Visual Basic. You can change toolbar settings by selecting the Toolbars option from the View menu.

The toolbar can be detached from beneath the menu bar by clicking its left edge and dragging it away.

Form Designer:-

Form objects are the basic building blocks of a Visual Basic application. A form is the actual window with which a user interacts at the start of the application. Forms have their own properties, events, and methods with which you can control their appearance and behavior.

The Form Window is central to developing Visual Basic applications. It is the window where you draw your application.

Tool Box:-

The Toolbox is a long box with many different icons on it. It is the selection menu for controls (objects) used in your application. To work in Visual Basic we’ve to know about its tools so that we can make different forms, projects and how to edit them.

Property Window:-

The Properties Window is used to establish initial property values for objects (like a Form). The drop-down box at the top of the window lists all objects in the current form. Two views are available: Alphabetic and Categorized.

If the Properties Window is not visible in the IDE, select the Properties Window option from the View menu, or press F4 (or Alt+V+W) to display it.

Project Explorer:-

The Project Explorer Window displays a list of all forms used in making up your application. You can also open the Form or Code windows (the window containing the actual Basic coding) from the Project Explorer window.

If the Project Explorer is not visible in the IDE, select the Project Explorer option from the View menu, or press Ctrl+R to display it.

Menu Bar:-

It is a horizontal strip that appears across the top of the screen and contains the names of menus. The menu bar lists the menus that you can use in the active window.

You can modify the menu bar using the Commands tab of the Customize dialog box. To open it, go to the View menu, select Toolbars, and then click the Customize option (or use the key combination Alt+V+T+C).

This article is contributed by Tarun Jangra. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.




Database Management System (May-2017) Unit-IV

Question: -8(a) Differentiate between conflict and view serializability. How both of those are tested for a schedule?

Answer: – Serializability is the process that prevents inconsistency; it aims to find a non-serial schedule that allows transactions to execute concurrently, without interfering with one another, and produce a database state that could be produced by a serial schedule.


1.) Conflict serializability is easy to achieve, but view serializability is hard to achieve.

2.) Every conflict-serializable schedule is view serializable, but the reverse is not true.

3.) It is much easier to test conflict serializability, but expensive to test view serializability.

4.) Most of the concurrency control schemes used in practice are based on conflict serializability.

  • Schedule − A chronological execution sequence of transactions is called a schedule. A schedule can have many transactions in it, each comprising a number of instructions/tasks.

Testing for conflict serializability

  • First write the given schedule in a linear way.
  • Find the conflict pairs (RW, WR, WW) on the same variable by different transactions.
  • Whenever a conflict pair is found, write the dependency relation Ti → Tj if the conflict pair is from Ti to Tj. For example, (W1(A), R2(A)) ⇒ T1 → T2.
  • Check whether a cycle is formed:
    • If yes, the schedule is not conflict serializable.
    • If no, the graph gives a serial sequence and the schedule is conflict serializable.
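The steps above amount to building a precedence graph and checking it for a cycle. A small Python sketch of that check using depth-first search (illustrative only; the schedules in the example calls are hypothetical):

```python
def is_conflict_serializable(edges, transactions):
    """edges: set of (Ti, Tj) precedence edges from conflict pairs.
    Returns True if the precedence graph is acyclic (DFS cycle check)."""
    graph = {t: [] for t in transactions}
    for a, b in edges:
        graph[a].append(b)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in transactions}

    def has_cycle(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:
                return True            # back edge -> cycle
            if color[nxt] == WHITE and has_cycle(nxt):
                return True
        color[node] = BLACK
        return False

    return not any(color[t] == WHITE and has_cycle(t)
                   for t in transactions)

# Conflicts W1(A)R2(A) and W2(B)R1(B) give T1 -> T2 and T2 -> T1:
# a cycle, so the schedule is NOT conflict serializable.
print(is_conflict_serializable({("T1", "T2"), ("T2", "T1")}, ["T1", "T2"]))  # False
# A single edge T1 -> T2 is acyclic: conflict serializable.
print(is_conflict_serializable({("T1", "T2")}, ["T1", "T2"]))  # True
```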

Question: – 8(b) What is a cursor? Discuss its different types. Name the commands used to control the cursor, with their syntax.

Answer: – A cursor is a temporary work area created in the system memory when a SQL statement is executed. A cursor contains information on a select statement and the rows of data accessed by it. This temporary work area is used to store the data retrieved from the database, and manipulate this data. A cursor can hold more than one row, but can process only one row at a time. The set of rows the cursor holds is called the active set.
There are two types of cursors in PL/SQL  :

  1. Implicit cursors.
  2. Explicit cursors.

Implicit Cursors

These are the cursors which are automatically created and used by the DBMS. These cursors are created when we use DML statements like INSERT, DELETE or UPDATE or when we use SELECT query. Even for single row SELECT statement DBMS creates implicit cursors. It allocates some space in the memory to hold the data. Even when we fire DML statements, the implicit cursor is fired to select the matching records and the space is reserved for those records in the memory.

Even though these implicit cursors are internal and users/developers cannot control their execution, developers can view the details of these cursors using their attributes. We can access the implicit cursor through the name 'SQL', which refers to the most recently executed statement's cursor. Its attributes are described below:

  • %ISOPEN – Returns TRUE or FALSE depending on whether the cursor is still open for execution. It always returns FALSE for an implicit cursor, because the DBMS will already have executed and closed the cursor automatically.
  • %FOUND – Returns TRUE if the DML statement affects any rows or the SELECT statement retrieves one or more rows; otherwise it returns FALSE. If the statement has not been executed, it returns NULL.
  • %NOTFOUND – The opposite of %FOUND. Returns TRUE if the DML statement does not affect any rows or the SELECT statement does not retrieve any row; otherwise it returns FALSE.
  • %ROWCOUNT – Returns the number of rows affected by the DML statement or the number of rows retrieved by the SELECT statement.

These implicit cursor attributes are referenced with the cursor name SQL, for example SQL%FOUND, SQL%NOTFOUND, SQL%ROWCOUNT and SQL%ISOPEN.

One example of an implicit cursor:

DECLARE
n_count NUMBER;
BEGIN
UPDATE EMPLOYEE
SET SALARY = SALARY + (SALARY * 0.1) -- 10% increment in salary for DEPT_ID = 10 employees
WHERE DEPT_ID = 10;
IF SQL%FOUND THEN -- checks if any row is updated
n_count := SQL%ROWCOUNT; -- number of rows updated
DBMS_OUTPUT.PUT_LINE ('Total Number of employees who got increment: ' || n_count);
ELSE
DBMS_OUTPUT.PUT_LINE ('None of the employees got increment');
END IF;
END;
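Client APIs expose similar bookkeeping. As a rough analogy to SQL%FOUND and SQL%ROWCOUNT (Python sqlite3, not PL/SQL; the table and data are a throwaway in-memory illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (emp_id INTEGER, salary REAL, dept_id INTEGER)")
cur.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, 1000.0, 10), (2, 2000.0, 10), (3, 1500.0, 20)])

# 10% increment for DEPT_ID = 10 employees, as in the PL/SQL example.
cur.execute("UPDATE employee SET salary = salary * 1.1 WHERE dept_id = 10")
affected = cur.rowcount            # plays the role of SQL%ROWCOUNT
if affected > 0:                   # affected > 0 plays the role of SQL%FOUND
    print("Total Number of employees who got increment:", affected)
else:
    print("None of the employees got increment")
conn.close()
```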



Explicit Cursors

These cursors are created by the developer in his program. In this type of cursors, developer will have the control on it. Explicit cursors have to be declared in the DECLARE section and can be used as any other variable, but it will hold one or more rows of data.

Any explicit cursor created in the PL/SQL block will have four basic steps:

  • DECLARE the Cursor– This step declares the cursor with its name and defines the SELECT statement for the cursor. The general syntax for declaring explicit cursor is as below:

CURSOR cursor_name IS
select_statement;

For example:

CURSOR cursor_Dept10Emp IS
SELECT EMP_ID, EMP_NAME FROM EMPLOYEE WHERE DEPT_ID = 10;
  • OPEN the Cursor– Cursor is opened for reading the records from it. When we open the cursor it gets executed and memory space is allocated for the records.

OPEN cursor_name;

For example:

OPEN cursor_Dept10Emp;

  • FETCH the records– This step will start traversing the records in the cursor. Rows will be fetched one at a time from this cursor. We can perform any calculation or manipulation on these fetched records.

FETCH cursor_Dept10Emp INTO n_empID, v_empName;

  • CLOSE the Cursor– Once the cursor is used and no more required in the program, close the cursor. This will release the allocated memory for this cursor.

CLOSE cursor_Dept10Emp;

Simple program with above 4 steps would be like below


DECLARE
CURSOR cursor_Dept10Emp IS -- Declare the cursor
SELECT EMP_ID, EMP_NAME FROM EMPLOYEE WHERE DEPT_ID = 10;
n_empID EMPLOYEE.EMP_ID%TYPE;
v_empName EMPLOYEE.EMP_NAME%TYPE;
BEGIN
OPEN cursor_Dept10Emp; -- Open the cursor
LOOP
FETCH cursor_Dept10Emp INTO n_empID, v_empName; -- Fetch cursor records
EXIT WHEN cursor_Dept10Emp%NOTFOUND;
DBMS_OUTPUT.PUT_LINE ('Employee ID: ' || n_empID || ' Employee Name: ' || v_empName);
END LOOP;
CLOSE cursor_Dept10Emp; -- Close the cursor
END;
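The same four-step pattern (declare/open, fetch one row at a time, close) appears in most database client APIs. A rough Python sqlite3 analogy (illustrative table and data, not PL/SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER, emp_name TEXT, dept_id INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Alice", 10), (2, "Bob", 10), (3, "Carol", 20)])

# 1-2. "Declare" and "open": executing the query plays both roles here.
cur = conn.cursor()
cur.execute("SELECT emp_id, emp_name FROM employee "
            "WHERE dept_id = 10 ORDER BY emp_id")

# 3. Fetch one row at a time, as PL/SQL's FETCH ... INTO does.
rows = []
while True:
    row = cur.fetchone()
    if row is None:                # no more rows; PL/SQL would test %NOTFOUND
        break
    rows.append(row)
    print("Employee ID:", row[0], "Employee Name:", row[1])

# 4. Close the cursor (and connection) to release resources.
cur.close()
conn.close()
```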


Cursor For Loop

In the above four steps, FETCH step will retrieve only one record at a time and it will assign individual column values into respective variables. Also a separate loop has to be created to iterate through all the records. If we use For loop for cursors, then we can iterate and traverse the whole records (all the columns) of cursor query using the for loop variable. This is useful when all the records of a table need to be used in the program. The cursor will automatically be opened when it is called in the FOR loop and closed when for loop ends. No need to explicitly OPEN or CLOSE the cursor. Below example shows how cursor for loop is different from above four steps.


DECLARE
CURSOR cursor_Dept10Emp IS -- Declare the cursor
SELECT EMP_ID, EMP_NAME FROM EMPLOYEE WHERE DEPT_ID = 10;
BEGIN
FOR rec_emp IN cursor_Dept10Emp -- implicitly opens the cursor and assigns cursor variable rec_emp
LOOP
DBMS_OUTPUT.PUT_LINE ('Employee ID: ' || rec_emp.EMP_ID || ' Employee Name: ' || rec_emp.EMP_NAME);
END LOOP; -- cursor is closed automatically when the loop ends
END;

Cursor Parameter

We can even pass parameters to cursors, as we do with procedures and functions. These parameters are used in the cursor query to retrieve the data. The scope of a cursor parameter is localized to the cursor alone.



DECLARE
CURSOR cursor_Dept10Emp (deptID NUMBER) IS
SELECT EMP_ID, EMP_NAME FROM EMPLOYEE WHERE DEPT_ID = deptID; -- deptID is the cursor parameter
BEGIN
FOR rec_emp IN cursor_Dept10Emp (10) -- pass the cursor parameter as 10
LOOP
DBMS_OUTPUT.PUT_LINE ('Employee ID: ' || rec_emp.EMP_ID || ' Employee Name: ' || rec_emp.EMP_NAME);
END LOOP;
END;

Ref Cursors

These are cursor variables used to reference and access a cursor. We can pass this variable to procedures/functions and get values back from a function as a ref cursor. In short, it acts as a variable, but it refers to a query defined at run time.

We can see the difference between a cursor and a ref cursor in the program below (the STUDENT and COURSE table and column names are illustrative):

DECLARE
TYPE rc_cursor IS REF CURSOR;
CURSOR c_course IS
SELECT COURSE_NAME FROM COURSE; -- static cursor, fixed at compile time
l_cursor rc_cursor;
n_ID NUMBER := 10;
BEGIN
IF n_ID = 10 THEN
-- Dynamically opens the cursor for student ids less than 10
OPEN l_cursor FOR SELECT STUDENT_NAME FROM STUDENT WHERE STUDENT_ID < 10;
ELSE
-- Dynamically opens the cursor for student ids greater than 10
OPEN l_cursor FOR SELECT STUDENT_NAME FROM STUDENT WHERE STUDENT_ID > 10;
END IF;
-- Opens static cursor c_course
OPEN c_course;
END;


Question: -9(a) Discuss the optimistic concurrency control technique. Name its phases. How is minimum overhead reached?

Answer: – Concurrency control is a database management system (DBMS) concept that is used to address conflicts with the simultaneous accessing or altering of data that can occur in a multi-user system. Concurrency control, when applied to a DBMS, is meant to coordinate simultaneous transactions while preserving data integrity. [1] In short, concurrency control is about controlling multi-user access to the database.

Locking Methods of Concurrency Control :

“A lock is a variable, associated with a data item, which controls access to that data item.”
Locking is the most widely used form of concurrency control. Locking is further divided into three areas:

  1. Lock Granularity
  2. Lock Types
  3. Deadlocks

Lock Granularity :

A database is basically represented as a collection of named data items. The size of the data item chosen as the unit of protection by a concurrency control program is called GRANULARITY. Locking can take place at the following levels:

  • Database level.
  • Table level.
  • Page level.
  • Row (Tuple) level.
  • Attributes (fields) level.

Lock Types :

The DBMS mainly uses the following types of locking techniques.

  1. Binary Locking
  2. Shared / Exclusive Locking
  3. Two – Phase Locking (2PL)

Deadlocks :

A deadlock is a condition in which two (or more) transactions in a set are waiting  simultaneously for locks held by some other transaction in the set.

Neither transaction can continue because each transaction in the set is on a waiting queue,  waiting for one of the other transactions in the set to release the lock on an item.  Thus, a deadlock is an impasse that may result when two or more transactions are each  waiting for locks to be released that are held by the other.  Transactions whose lock requests have been refused are queued until the lock can be  granted.
A deadlock is also called a circular waiting condition where two transactions are waiting  (directly or indirectly) for each other.  Thus in a deadlock, two transactions are mutually excluded from accessing the next record  required to complete their transactions, also called a deadly embrace.

Time-Stamp Methods for Concurrency control : 

Timestamp is a unique identifier created by the DBMS to identify the relative starting time of a transaction.
Typically, timestamp values are assigned in the order in which the transactions are submitted to the system, so a timestamp can be thought of as the transaction start time. Time stamping is therefore a method of concurrency control in which each transaction is assigned a transaction timestamp. Timestamps must have two properties, namely:

  1. Uniqueness : The uniqueness property assures that no two equal timestamp values can exist.
  2. Monotonicity : The monotonicity property assures that timestamp values always increase.

Timestamp methods are further divided into:

  1. Granule Timestamps
  2.  Timestamp Ordering
  3. Conflict Resolution in Timestamps

Optimistic Methods of Concurrency Control :

The optimistic method of concurrency control is based on the assumption that conflicts of database operations are rare and that it is better to let transactions run to completion and only check for conflicts before they commit.
An optimistic concurrency control method is also known as a validation or certification method. No checking is done while the transaction is executing. The optimistic method does not require locking or timestamping techniques. Instead, a transaction is executed without restrictions until it is committed. In optimistic methods, each transaction moves through the following phases:

  1. Read phase.
  2. Validation or certification phase.
  3. Write phase.
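The validation phase can be sketched as a read-set/write-set intersection test; overhead stays minimal because no locks are held during the read phase. A minimal Python illustration (hypothetical transaction read/write sets; real systems also track timestamps per phase):

```python
def validate(txn_read_set, overlapping_committed_write_sets):
    """Backward validation: the transaction passes only if nothing it
    read was overwritten by a transaction that committed during its
    read phase; otherwise it must restart."""
    for write_set in overlapping_committed_write_sets:
        if txn_read_set & write_set:
            return False   # conflict detected -> restart the transaction
    return True            # validation passed -> proceed to write phase

# T read {A, B}; a concurrent transaction committed writes to {C}: OK.
print(validate({"A", "B"}, [{"C"}]))          # True
# Another committed transaction wrote {B}: T must restart.
print(validate({"A", "B"}, [{"C"}, {"B"}]))   # False
```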

Question: – 9(b) Where do stored procedures and functions reside? Also discuss the steps to execute a procedure or function.

Answer: – stored procedure

A stored procedure is a set of Structured Query Language (SQL) statements with an assigned name, which are stored in a relational database management system as a group, so it can be reused and shared by multiple programs. Stored procedures can access or modify data in a database, but it is not tied to a specific database or object, which offers a number of advantages.

Benefits of using stored procedures

A stored procedure provides an important layer of security between the user interface and the database. It supports security through data access controls because end users may enter or change data, but do not write procedures. A stored procedure preserves data integrity because information is entered in a consistent manner. It improves productivity because statements in a stored procedure only must be written once.

Stored procedures offer advantages over embedding queries in a graphical user interface (GUI). Since stored procedures are modular, it is easier to troubleshoot when a problem arises in an application. Stored procedures are also tunable, which eliminates the need to modify the GUI source code to improve its performance. It’s easier to code stored procedures than to build a query through a GUI.

Use of stored procedures can reduce network traffic between clients and servers, because the commands are executed as a single batch of code. This means only the call to execute the procedure is sent over a network, instead of every single line of code being sent individually.

The following comparison contrasts stored procedures and functions:

Compilation:
  • Stored procedure: stored in the database in compiled format. (Compiled indicates that the execution plan is made by SQL at the time the procedure is created and stored in the DB.)
  • Function: compiled at run time.

Return type:
  • Stored procedure: can directly return only integers; a return type is not a must.
  • Function: can return any scalar or table value; a return type is a must.

Multiple return values:
  • Stored procedure: can return more than one value (of any data type) indirectly with the help of OUT parameters.
  • Function: does not support OUT parameters.

DML statements:
  • Stored procedure: can have DML statements.
  • Function: cannot have DML statements. (In the case of multi-statement table-valued functions, DML statements affecting table variables are allowed.)

Execution:
  • Stored procedure: can execute a function; cannot be part of a SELECT query as a column; cannot be used anywhere in the WHERE/HAVING/SELECT clauses of SQL statements.
  • Function: cannot execute a stored procedure; can be part of a SELECT query as a column; can be used in SQL statements in the WHERE/HAVING/SELECT clauses.

Exception handling:
  • Stored procedure: can have TRY…CATCH.
  • Function: cannot have TRY…CATCH.

Further differences between procedures and functions:-

  1. A FUNCTION always returns a value using the return statement. In practical scenarios, a function is used when a value is expected back, which in turn helps with computation in the rest of the code. A PROCEDURE may return one or more values through parameters, or may not return any at all.

IN, OUT and INOUT are the different parameter types: IN is the input to the procedure, OUT carries the output from the procedure, and INOUT is a single parameter that behaves as both input and output.

  1. Functions are normally used for computations, whereas procedures are normally used for executing business logic.
  2. A function returns only one value; a procedure can return multiple values (max 1024).
  3. A stored procedure always returns an integer value of zero by default, whereas function return types can be scalar or table values. This is because functions are mainly meant for computation.
  4. Stored procedures have a precompiled execution plan, whereas functions do not. Because of the precompiled plan, stored procedures are preferred for routine work.
  5. A function can be called directly by an SQL statement, like select func_name from dual, while procedures cannot.
  6. Stored procedures offer security, reduce network traffic, and can be called from any number of applications at a time.
  7. A function can be used in SQL queries while a procedure cannot, which is a major difference between functions and procedures.



Database Management System (May-2017) Unit-III

Question: -6(a) Describe conceptually how an SQL retrieval query will be executed by specifying the conceptual order of executing each of the six clauses.

Answer: – Order of execution of a Query

Now that we have an idea of all the parts of a query, we can now talk about how they all fit together in the context of a complete query.

Complete SELECT query

SELECT DISTINCT column, AGG_FUNC(column_or_expression), …
FROM mytable
JOIN another_table
ON mytable.column = another_table.column
WHERE constraint_expression
GROUP BY column
HAVING constraint_expression
ORDER BY column ASC/DESC
LIMIT count OFFSET count;

Each query begins with finding the data that we need in a database, and then filtering that data down into something that can be processed and understood as quickly as possible. Because each part of the query is executed sequentially, it’s important to understand the order of execution so that you know what results are accessible where.

Query order of execution

  1. FROM and JOINs

The FROM clause, and subsequent JOINs are first executed to determine the total working set of data that is being queried. This includes subqueries in this clause, and can cause temporary tables to be created under the hood containing all the columns and rows of the tables being joined.

  2. WHERE

Once we have the total working set of data, the first-pass WHERE constraints are applied to the individual rows, and rows that do not satisfy the constraint are discarded. Each of the constraints can only access columns directly from the tables requested in the FROM clause. Aliases in the SELECT part of the query are not accessible in most databases since they may include expressions dependent on parts of the query that have not yet executed.


  3. GROUP BY

The remaining rows after the WHERE constraints are applied are then grouped based on common values in the column specified in the GROUP BY clause. As a result of the grouping, there will only be as many rows as there are unique values in that column. Implicitly, this means that you should only need to use this when you have aggregate functions in your query.


  4. HAVING

If the query has a GROUP BY clause, then the constraints in the HAVING clause are applied to the grouped rows, discarding the grouped rows that don’t satisfy the constraint. Like the WHERE clause, aliases are also not accessible from this step in most databases.


  5. SELECT

Any expressions in the SELECT part of the query are finally computed.


  6. DISTINCT

Of the remaining rows, rows with duplicate values in the column marked as DISTINCT will be discarded.


  7. ORDER BY

If an order is specified by the ORDER BY clause, the rows are then sorted by the specified data in either ascending or descending order. Since all the expressions in the SELECT part of the query have been computed, you can reference aliases in this clause.


  8. LIMIT / OFFSET

Finally, the rows that fall outside the range specified by the LIMIT and OFFSET are discarded, leaving the final set of rows to be returned from the query.


Not every query needs to have all the parts we listed above, but a part of why SQL is so flexible is that it allows developers and data analysts to quickly manipulate data without having to write additional code, all just by using the above clauses.
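The conceptual order can be made concrete by simulating it over in-memory rows. The Python sketch below (hypothetical data) applies FROM → WHERE → GROUP BY → HAVING → SELECT → ORDER BY in exactly that sequence:

```python
from itertools import groupby

rows = [                       # FROM: the total working set
    {"dept": "CSE", "salary": 100},
    {"dept": "CSE", "salary": 200},
    {"dept": "ECE", "salary": 150},
    {"dept": "ME",  "salary": 50},
]

# WHERE: filter individual rows.
rows = [r for r in rows if r["salary"] >= 100]

# GROUP BY dept (sort first: groupby only groups adjacent rows).
rows.sort(key=lambda r: r["dept"])
groups = [(k, list(g)) for k, g in groupby(rows, key=lambda r: r["dept"])]

# HAVING: filter whole groups (keep groups whose total salary > 150).
groups = [(k, g) for k, g in groups if sum(r["salary"] for r in g) > 150]

# SELECT: compute the output expressions (dept, SUM(salary)).
result = [{"dept": k, "total": sum(r["salary"] for r in g)} for k, g in groups]

# ORDER BY total DESC.
result.sort(key=lambda r: r["total"], reverse=True)
print(result)   # [{'dept': 'CSE', 'total': 300}]
```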

Question: – 6(b) Illustrate how the process of creating first normal form relations may lead to multivalued dependencies. How should the first normalization be done properly so that MVDs are avoided?

Answer: –

Normalization of Database

Database Normalization is a technique of organizing the data in the database. Normalization is a systematic approach of decomposing tables to eliminate data redundancy (repetition) and undesirable characteristics like Insertion, Update and Deletion Anomalies. It is a multi-step process that puts data into tabular form, removing duplicated data from the relation tables.

Normalization is used for mainly two purposes,

  • Eliminating redundant (useless) data.
  • Ensuring data dependencies make sense, i.e., data is logically stored.
Problems Without Normalization

If a table is not properly normalized and has data redundancy, then it will not only eat up extra memory space but will also make it difficult to handle and update the database without facing data loss. Insertion, Updation and Deletion Anomalies are very frequent if the database is not normalized. To understand these anomalies let us take an example of a Student table.

rollno name branch hod office_tel
401 Akon CSE Mr. X 53337
402 Bkon CSE Mr. X 53337
403 Ckon CSE Mr. X 53337
404 Dkon CSE Mr. X 53337

In the table above, we have data of 4 Computer Sci. students. As we can see, data for the fields branch, hod(Head of Department) and office_tel is repeated for the students who are in the same branch in the college, this is Data Redundancy.

Insertion Anomaly

Suppose for a new admission, until and unless a student opts for a branch, data of the student cannot be inserted, or else we will have to set the branch information as NULL.

Also, if we have to insert data of 100 students of same branch, then the branch information will be repeated for all those 100 students.

These scenarios are nothing but Insertion anomalies.

Updation Anomaly

What if Mr. X leaves the college, or is no longer the HOD of the computer science department? In that case all the student records will have to be updated, and if by mistake we miss any record, it will lead to data inconsistency. This is an Updation anomaly.

Deletion Anomaly

In our Student table, two different kinds of information are kept together: Student information and Branch information. Hence, at the end of the academic year, if student records are deleted, we will also lose the branch information. This is a Deletion anomaly.

Normalization Rule

Normalization rules are divided into the following normal forms:

  1. First Normal Form
  2. Second Normal Form
  3. Third Normal Form
  4. BCNF
  5. Fourth Normal Form

First Normal Form (1NF)

For a table to be in the First Normal Form, it should follow the following 4 rules:

  1. It should only have single(atomic) valued attributes/columns.
  2. Values stored in a column should be of the same domain
  3. All the columns in a table should have unique names.
  4. And the order in which data is stored, does not matter.
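As a small illustration of rule 1 (atomic values), here is a sketch, using SQLite from Python, of converting a hypothetical multi-valued phones column into a 1NF table with one value per row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student_bad (rollno INTEGER, phones TEXT)")
conn.execute("INSERT INTO student_bad VALUES (401, '53337,53338')")  # not atomic

# 1NF version: one atomic phone number per row
conn.execute("CREATE TABLE student_phone (rollno INTEGER, phone TEXT)")
for rollno, phones in conn.execute("SELECT rollno, phones FROM student_bad").fetchall():
    for phone in phones.split(","):
        conn.execute("INSERT INTO student_phone VALUES (?, ?)", (rollno, phone))

rows = conn.execute("SELECT * FROM student_phone ORDER BY phone").fetchall()
print(rows)   # [(401, '53337'), (401, '53338')]
```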

Second Normal Form (2NF)

For a table to be in the Second Normal Form,

  1. It should be in the First Normal form.
  2. And, it should not have Partial Dependency.

Third Normal Form (3NF)

A table is said to be in the Third Normal Form when,

  1. It is in the Second Normal form.
  2. And, it doesn’t have Transitive Dependency.

Boyce and Codd Normal Form (BCNF)

Boyce and Codd Normal Form is a higher version of the Third Normal Form. This form deals with certain types of anomalies that are not handled by 3NF. A 3NF table which does not have multiple overlapping candidate keys is said to be in BCNF. For a table R to be in BCNF, the following conditions must be satisfied:

  • R must be in 3rd Normal Form
  • and, for each functional dependency ( X → Y ), X should be a super Key.

Fourth Normal Form (4NF)

A table is said to be in the Fourth Normal Form when,

  1. It is in the Boyce-Codd Normal Form.
  2. And, it doesn’t have Multi-Valued Dependency.
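This is where the multivalued dependencies of question 6(b) come from: forcing two independent multivalued facts into a single 1NF table yields a cross product. A sketch (with made-up courses/hobbies data) showing the MVD rollno ->-> course, rollno ->-> hobby, and its 4NF decomposition:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flat (rollno INTEGER, course TEXT, hobby TEXT)")
courses, hobbies = ["DBMS", "OS", "Networks"], ["Chess", "Cricket"]
for c in courses:
    for h in hobbies:   # independent facts must be cross-multiplied in 1NF
        conn.execute("INSERT INTO flat VALUES (401, ?, ?)", (c, h))

# 4NF decomposition: one relation per independent multivalued fact
conn.execute("CREATE TABLE student_course AS SELECT DISTINCT rollno, course FROM flat")
conn.execute("CREATE TABLE student_hobby  AS SELECT DISTINCT rollno, hobby FROM flat")

n_flat = conn.execute("SELECT COUNT(*) FROM flat").fetchone()[0]
n_split = conn.execute(
    "SELECT (SELECT COUNT(*) FROM student_course) + "
    "(SELECT COUNT(*) FROM student_hobby)").fetchone()[0]
n_join = conn.execute(
    "SELECT COUNT(*) FROM student_course JOIN student_hobby USING (rollno)"
).fetchone()[0]
print(n_flat, n_split, n_join)   # 6 5 6: fewer rows, and the join is lossless
```

The proper fix, as the question asks, is to store each independent multivalued fact in its own relation from the start rather than flattening them together.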

Question: – 7(a) Discuss the design and implementation issues for active databases.

Answer: –

An active database management system (ADBMS) is an event-driven system in which schema or data changes generate events monitored by active rules. Active database management systems are invoked by synchronous events generated by user or application programs as well as external asynchronous data change events such as a change in sensor value or time.

Onlinestudy.guru explains Active Database Management System (ADBMS)

Active database management systems support event monitoring. They store events in an event history as an event type and a time; the former represents the kind of primitive event, while the latter represents the time the event occurred. ADBMSs clearly define rule semantics such as event consumption policy, event detection and coupling modes, along with instance- or set-oriented semantics.

A common event consumption policy includes the following parameter contexts:

  • Cumulative: All instances of primitive event are consumed if a complex event occurs.
  • Chronicle: Events are consumed in time order.
  • Recent: The latest instances of primitive events that are part of complex events are consumed in time order.

Design and Implementation Issues for Active Databases

(Figure: Active Database)

The previous section gave an overview of some of the main concepts for specifying active rules. In this section, we discuss some additional issues concerning how rules are designed and implemented. The first issue concerns the activation, deactivation, and grouping of rules. In addition to creating rules, an active database system should allow users to activate, deactivate, and drop rules by referring to their rule names. A deactivated rule will not be triggered by the triggering event. This feature allows users to selectively deactivate rules for certain periods of time when they are not needed. The activate command will make the rule active again. The drop command deletes the rule from the system. Another option is to group rules into named rule sets, so the whole set of rules can be activated, deactivated, or dropped. It is also useful to have a command that can trigger a rule or rule set via an explicit PROCESS RULES command issued by the user.

  • The second issue concerns whether the triggered action should be executed before, after, instead of, or concurrently with the triggering event. A before trigger executes the trigger before executing the event that caused the trigger. It can be used in applications such as checking for constraint violations. An after trigger executes the trigger after executing the event, and it can be used in applications such as maintaining derived data and monitoring for specific events and conditions. An instead of trigger executes the trigger instead of executing the event, and it can be used in applications such as executing corresponding updates on base relations in response to an event that is an update of a view.
  • A related issue is whether the action being executed should be considered as a separate transaction or whether it should be part of the same transaction that triggered the rule. We will try to categorize the various options. It is important to note that not all options may be available for a particular active database system. In fact, most commercial systems are limited to one or two of the options that we will now discuss.
  • Let us assume that the triggering event occurs as part of a transaction execution. We should first consider the various options for how the triggering event is related to the evaluation of the rule's condition. The rule condition evaluation is also known as rule consideration, since the action is to be executed only after considering whether the condition evaluates to true or false. There are three main possibilities for rule consideration:
  • Immediate consideration. The condition is evaluated as part of the same transaction as the triggering event, and is evaluated immediately. This case can be further categorized into three options:
  • Evaluate the condition before executing the triggering event.
  • Evaluate the condition after executing the triggering event.
  • Evaluate the condition instead of executing the triggering event.
  • Deferred consideration. The condition is evaluated at the end of the transaction that included the triggering event. In this case, there could be many triggered rules waiting to have their conditions evaluated.
  • Detached consideration. The condition is evaluated as a separate transaction, spawned from the triggering transaction.
  • The next set of options concerns the relationship between evaluating the rule condition and executing the rule action. Here, again, three options are possible: immediate, deferred, or detached execution. Most active systems use the first option. That is, as soon as the condition is evaluated, if it returns true, the action is immediately executed.
  • The Oracle system uses the immediate consideration model, but it allows the user to specify for each rule whether the before or after option is to be used with immediate condition evaluation. It also uses the immediate execution model. The STARBURST system uses the deferred consideration option, meaning that all rules triggered by a transaction wait until the triggering transaction reaches its end and issues its COMMIT WORK command before the rule conditions are evaluated.
  • Another issue concerning active database rules is the distinction between row-level rules and statement-level rules. Because SQL update statements (which act as triggering events) can specify a set of tuples, one has to distinguish between whether the rule should be considered once for the whole statement or whether it should be considered separately for each row (that is, tuple) affected by the statement. The SQL-99 standard and the Oracle system allow the user to choose which of the options is to be used for each rule, whereas STARBURST uses statement-level semantics only.
  • One of the difficulties that may have limited the widespread use of active rules, in spite of their potential to simplify database and software development, is that there are no easy-to-use techniques for designing, writing, and verifying rules. For example, it is quite difficult to verify that a set of rules is consistent, meaning that two or more rules in the set do not contradict one another. It is also difficult to guarantee termination of a set of rules under all circumstances. To illustrate the termination problem briefly, consider the rules in Figure 26.4. Here, rule R1 is triggered by an INSERT event on TABLE1 and its action includes an update event on Attribute1 of TABLE2. However, rule R2's triggering event is an UPDATE event on Attribute1 of TABLE2, and its action includes an INSERT event on TABLE1. In this example, it is easy to see that these two rules can trigger one another indefinitely, leading to non-termination. However, if dozens of rules are written, it is very difficult to determine whether termination is guaranteed or not.
  • If active rules are to reach their potential, it is necessary to develop tools for the design, debugging, and monitoring of active rules that can help users design and debug their rules.
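Triggers are the concrete form of active rules in SQL systems. Here is a minimal sketch using SQLite's row-level AFTER INSERT trigger to maintain derived data, one of the classic after-trigger applications mentioned above (the tables are made up, and SQLite stands in for the Oracle/STARBURST systems discussed in the text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (name TEXT, dept TEXT);
CREATE TABLE dept_stats (dept TEXT PRIMARY KEY, emp_count INTEGER);

CREATE TRIGGER emp_insert AFTER INSERT ON employee
FOR EACH ROW                      -- row-level rule: fires once per inserted tuple
BEGIN
    INSERT OR IGNORE INTO dept_stats VALUES (NEW.dept, 0);
    UPDATE dept_stats SET emp_count = emp_count + 1 WHERE dept = NEW.dept;
END;
""")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("Akon", "CSE"), ("Bkon", "CSE"), ("Ckon", "ECE")])
rows = conn.execute("SELECT dept, emp_count FROM dept_stats ORDER BY dept").fetchall()
print(rows)   # [('CSE', 2), ('ECE', 1)]
```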

Question: – Define join dependency and 5NF. Why is 5NF also called project-join normal form? Illustrate.

Answer: –

 Join Dependency(JD)

Let R be a relation. Let A, B, …, Z be arbitrary subsets of R’s attributes. R satisfies the JD
* ( A, B, …, Z )
if and only if R is equal to the join of its projections on A, B, …, Z.

A join dependency JD(R1, R2, ..., Rn) specified on relation schema R is a trivial JD if one of the relation schemas Ri in JD(R1, R2, ..., Rn) is equal to R.

Join dependency is used in the following case :

When there is no lossless join decomposition of R into two relation schemas, but there is a lossless join decomposition of R into more than two relation schemas.

Point: A join dependency is very difficult to detect in a database, hence it is normally not used.

Negative Example :

Consider a relation ACP(Agent, Company, Product)

ACP : Meaning of the tuples
Agent(A) Company(C) Product(P) Agent sells Company’s Products.
A1 PQR Nut A1 sells PQR’s Nuts and Screw.
A1 PQR Screw
A1 XYZ Bolt A1 sells XYZ’s Bolts.
A2 PQR Bolt A2 sells PQR’s Bolts.

The table is in 4NF as it does not contain any multivalued dependency. But the relation contains redundancy, as A1 is an agent for PQR twice; there is no way of eliminating this redundancy without losing information.
Suppose that the table is decomposed into its two relations, R1 and R2.

R1 :

Agent Company
A1 PQR
A1 XYZ
A2 PQR

R2 :

Agent Product
A1 Nut
A1 Screw
A1 Bolt
A2 Bolt

The redundancy has been eliminated by decomposing ACP relation, but the information about which companies make which products and which agents supply which product has been lost.
The natural join of these relations over the ‘agent’ columns is:

R12 :
Agent Company Product
A1 PQR Nut
A1 PQR Screw
A1 PQR Bolt
A1 XYZ Nut
A1 XYZ Screw
A1 XYZ Bolt
A2 PQR Bolt

Hence, the decomposition of ACP into R1 and R2 is a lossy join decomposition, since the natural join contains extra (spurious) tuples, (A1, XYZ, Nut) and (A1, XYZ, Screw), that give incorrect information.
But now, suppose the original relation ACP is decomposed into 3 relations :

  • R1(Agent, Company)
  • R2(Agent, Product)
  • R3(Company, Product)

The result of the natural join of R1 and R2 over 'Agent' (already calculated as R12), and then the natural join of R12 and R3 over 'Company' and 'Product', is:

R123 :
Agent Company Product
A1 PQR Nut
A1 PQR Screw
A1 PQR Bolt
A1 XYZ Bolt
A2 PQR Bolt

Again, we get an extra tuple, (A1, PQR, Bolt). Hence, it has to be accepted that it is not possible to eliminate all redundancies using normalization techniques, because it cannot be assumed that all decompositions will be non-loss. Hence again, the decomposition of ACP is a lossy join decomposition.

Positive Example :

Consider the above schema, but with a different case as “if a company makes a product and an agent is an agent for that company, then he always sells that product for the company”. Under these circumstances, the ACP table is shown as :

Agent Company Product
A1 PQR Nut
A1 PQR Bolt
A1 XYZ Nut
A1 XYZ Bolt
A2 PQR Nut

The relation ACP is again decomposed into 3 relations. Now, the natural join of all three relations is shown below:

R1 :

Agent Company
A1 PQR
A1 XYZ
A2 PQR

R2 :

Agent Product
A1 Nut
A1 Bolt
A2 Nut

R3 :

Company Product
PQR Nut
PQR Bolt
XYZ Nut
XYZ Bolt

The result of the natural join of R1 and R3 over 'Company', and then the natural join of R13 and R2 over 'Agent' and 'Product', is:
R123 :
Agent Company Product
A1 PQR Nut
A1 PQR Bolt
A1 XYZ Nut
A1 XYZ Bolt
A2 PQR Nut

Hence, in this example, all the redundancies are eliminated, and the decomposition of ACP is a lossless join decomposition. Hence the relation is in 5NF as it does not violate the property of lossless join.
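The two decompositions walked through above can be checked mechanically. Here is a short Python sketch (plain sets, no database; the helper names project and njoin are ours, not library functions) that projects the negative-example ACP relation and re-joins it:

```python
# Project the negative-example ACP relation onto two and three schemas,
# re-join, and count spurious tuples.

def project(rel, idx):
    """Projection: keep only the attribute positions in idx."""
    return {tuple(t[i] for i in idx) for t in rel}

def njoin(r, s, r_cols, s_cols):
    """Natural join of r and s over their shared attribute names."""
    common = [c for c in r_cols if c in s_cols]
    out_cols = r_cols + [c for c in s_cols if c not in r_cols]
    out = set()
    for t in r:
        for u in s:
            if all(t[r_cols.index(c)] == u[s_cols.index(c)] for c in common):
                row = dict(zip(r_cols, t))
                row.update(zip(s_cols, u))
                out.add(tuple(row[c] for c in out_cols))
    return out, out_cols

ACP = {("A1", "PQR", "Nut"), ("A1", "PQR", "Screw"),
       ("A1", "XYZ", "Bolt"), ("A2", "PQR", "Bolt")}
R1 = project(ACP, [0, 1])   # (Agent, Company)
R2 = project(ACP, [0, 2])   # (Agent, Product)
R3 = project(ACP, [1, 2])   # (Company, Product)

R12, c12 = njoin(R1, R2, ["Agent", "Company"], ["Agent", "Product"])
print(len(R12) - len(ACP))   # 3 spurious tuples: the two-way split is lossy

R123, _ = njoin(R12, R3, c12, ["Company", "Product"])
print(len(R123) - len(ACP))  # 1 spurious tuple (A1, PQR, Bolt): still lossy
```

Running the same check on the positive-example data yields zero spurious tuples, which is exactly the lossless-join (project-join) property that gives 5NF its other name.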

This article is contributed by Tarun Jangra. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.



Database Management System (May-2017) Unit-II

Question: – 4(a) Discuss the DIVISION operation. How is it represented, and what are the requirements of the numerator and denominator relations? Explain with an example.

Answer: –

The division operator is used when we have to evaluate queries which contain the keyword ALL.

Some instances where division operator is used are:

  1. Which person has account in all the banks of a particular city?
  2. Which students have taken all the courses required to graduate?

In the above problem statements, the description after the keyword 'all' defines a set which contains some elements, and the final result contains those units which satisfy these requirements.
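SQL has no explicit DIVISION keyword; one common way to express a "took ALL required courses" query like those above is a double NOT EXISTS. A small sketch with SQLite and hypothetical taken/required tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE required (course TEXT);
CREATE TABLE taken (student TEXT, course TEXT);
INSERT INTO required VALUES ('DBMS'), ('OS');
INSERT INTO taken VALUES ('Adam','DBMS'), ('Adam','OS'), ('Alex','DBMS');
""")
rows = conn.execute("""
    SELECT DISTINCT t.student FROM taken t
    WHERE NOT EXISTS (              -- no required course exists ...
        SELECT 1 FROM required r
        WHERE NOT EXISTS (          -- ... that this student has not taken
            SELECT 1 FROM taken t2
            WHERE t2.student = t.student AND t2.course = r.course))
""").fetchall()
print(rows)   # [('Adam',)] -- only Adam has taken every required course
```

In relational algebra terms, this computes taken ÷ required: the denominator's attributes must be a subset of the numerator's, and the result carries the numerator's remaining attributes.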

Select all records from a table

A special character, the asterisk (*), is used to address all the data (belonging to all columns) in a query. The SELECT statement uses the * character to retrieve all records from a table, for all the columns.

SELECT * FROM student;

The above query will show all the records of the student table; that means it will show the complete dataset of the table.

s_id name age address
101 Adam 15 Chennai
102 Alex 18 Delhi
103 Abhi 17 Banglore
104 Ankit 22 Mumbai

Question: – What is view? How is it different from table? How views can be created, altered and destroyed?

Answer: – A database view is a searchable object in a database that is defined by a query. Though a view doesn't store data (some refer to views as "virtual tables"), you can query a view as you can a table. A view can combine data from two or more tables, using joins, and can also contain just a subset of information. This makes views convenient for abstracting, or hiding, complicated queries.


The reasons why views are created are:

  • When data security is required.
  • When data redundancy is to be kept to the minimum while maintaining data security.

Types of views :

  1. Read-only View : Allows only SELECT operations.
  2. Updateable View : Allows SELECT as well as INSERT, UPDATE and DELETE operations.

Creating a View :

The ORDER BY clause cannot be used while creating a view.  The columns of the table are related to the view using a one-to-one relationship.


CREATE <OR REPLACE> VIEW <ViewName> AS SELECT <ColumnName1>, <ColumnName2> FROM <TableName> WHERE <ColumnName> = <Expression List> <WITH READ ONLY>;

This statement creates a view based on the query specified in the SELECT statement.
The OR REPLACE option recreates the view if it already exists, maintaining the privileges granted to the view.
The WITH READ ONLY option creates a read-only view.

Example :

Creating a view stu based on the student table, and then updating it.

Run SQL Command Line–>

SQL>create view stu as select enroll,name from student;

View Created.

SQL>select * from stu;

ENROLL             NAME
———–    ———-
4866        ABCD
4546        BDSG
Updateable Views :

Views can also be used for data manipulation. Views on which data manipulation can be done are called Updateable Views.
When an updateable view name is given in an INSERT, UPDATE, or DELETE SQL statement, modifications to data in the view will be immediately passed to the underlying table.
For a view to be updateable, it should meet the following criteria:

  • The view is defined from a single table.
  • If the user wants to INSERT records with the help of a view, then the PRIMARY KEY column(s) and all the NOT NULL columns must be included in the view.
  • The user can UPDATE or DELETE records with the help of a view even if the PRIMARY KEY column and NOT NULL column(s) are excluded from the view definition.

Example :

SQL>update stu set name=’xyz’ where enroll=4866;

1 Row updated.
SQL>select * from stu;

ENROLL             NAME
———–    ———-
4866        xyz
4546        BDSG

Destroying a View :

The drop command drops the specified view.

Syntax :

DROP VIEW Viewname;


SQL>drop view stu;

View dropped.
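The same create / query / drop lifecycle can be sketched outside SQL*Plus; here it is with Python's sqlite3 module (note that a plain SQLite view is read-only, unlike the updateable Oracle view above, so the update is applied to the base table and observed through the view):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (enroll INTEGER, name TEXT, marks INTEGER);
INSERT INTO student VALUES (4866, 'ABCD', 70), (4546, 'BDSG', 80);
CREATE VIEW stu AS SELECT enroll, name FROM student;  -- hides the marks column
""")
rows = conn.execute("SELECT * FROM stu ORDER BY enroll").fetchall()
print(rows)                    # [(4546, 'BDSG'), (4866, 'ABCD')]

conn.execute("UPDATE student SET name = 'xyz' WHERE enroll = 4866")
rows2 = conn.execute("SELECT name FROM stu WHERE enroll = 4866").fetchall()
print(rows2)                   # [('xyz',)] -- the view reflects base-table changes

conn.execute("DROP VIEW stu")  # destroys the view definition, not the data
```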

Benefits of a Database View

There are many benefits to using views. Listed below are some of the ones that come to mind:

  • Enforce Business Rules – Use views to define business rules, such as when an item is active, or what is meant by "popular." By placing complicated or misunderstood business logic into the view, you can be sure to present a unified portrayal of the data. This increases use and quality.
  • Consistency – Simplify complicated query logic and calculations by hiding them behind the view's definition. Once defined, the calculations are referenced from the view rather than being restated in separate queries. This makes for fewer mistakes and easier maintenance of code.
  • Security – Restrict access to a table, yet allow users to access non-confidential data via views. For example, you can restrict access to the employee table, which contains social security numbers, but allow access to a view containing name and phone number.
  • Simplicity – Databases with many tables possess complex relationships, which can be difficult to navigate if you aren't comfortable using joins. Use views to provide a "flattened" view of the database for reporting or ad-hoc queries.
  • Space – Views take up very little space, as the data is stored once in the source table. Some DBMSs allow you to create an index on a view, so in some cases views do take up more space than just the definition.


Disadvantages of Views

  • Performance – What may seem like a simple query against a view could turn out to be a hugely complex job for the database engine. That is because each time a view is referenced, the query used to define it is rerun.
  • Modifications – Not all views support INSERT, UPDATE, or DELETE operations. In general, in order to support these operations, the primary key and required fields must be present in the view. Complex multi-table views are generally read only.

Question: – 5(a) Discuss different types of update operations on a relation with an example.

Answer: – The following are the different types of update operations, in relation to the integrity constraints which must be satisfied in a relational database model.

  1. The SQL CREATE DATABASE statement is used to create a new SQL database.

The basic syntax of this CREATE DATABASE statement is as follows −

CREATE DATABASE DatabaseName;

The database name should always be unique within the RDBMS.


  2. The SQL DROP DATABASE statement is used to drop an existing database in an SQL schema.

The basic syntax of the DROP DATABASE statement is as follows −

DROP DATABASE DatabaseName;

The database name should always be unique within the RDBMS.

  3. The SQL CREATE TABLE statement is used to create a new table. Creating a basic table involves naming the table and defining its columns and each column's data type.

The basic syntax of the CREATE TABLE statement is as follows −

CREATE TABLE table_name (
   column1 datatype,
   column2 datatype,
   column3 datatype,
   .....
   columnN datatype,
   PRIMARY KEY ( one or more columns )
);

CREATE TABLE is the keyword telling the database system what you want to do. In this case, you want to create a new table. The unique name or identifier for the table follows the CREATE TABLE statement.

Then in brackets comes the list defining each column in the table and what sort of data type it is. The syntax becomes clearer with the following example.

A copy of an existing table can be created using a combination of the CREATE TABLE statement and the SELECT statement.

  4. The SQL DROP TABLE statement is used to remove a table definition and all the data, indexes, triggers, constraints and permission specifications for that table.

NOTE − You should be very careful while using this command because once a table is deleted then all the information available in that table will also be lost forever.

The basic syntax of this DROP TABLE statement is as follows −

DROP TABLE table_name;


  5. The SQL INSERT INTO statement is used to add new rows of data to a table in the database.

There are two basic syntaxes of the INSERT INTO statement which are shown below.

INSERT INTO TABLE_NAME (column1, column2, column3,…columnN)  VALUES (value1, value2, value3,…valueN);

Here, column1, column2, column3,…columnN are the names of the columns in the table into which you want to insert the data.

You may not need to specify the column(s) name in the SQL query if you are adding values for all the columns of the table. But make sure the order of the values is in the same order as the columns in the table.

The SQL INSERT INTO syntax will be as follows −

INSERT INTO TABLE_NAME VALUES (value1,value2,value3,…valueN);


  6. The SQL UPDATE query is used to modify the existing records in a table. You can use the WHERE clause with the UPDATE query to update the selected rows; otherwise all the rows would be affected.

The basic syntax of the UPDATE query with a WHERE clause is as follows −

UPDATE table_name
SET column1 = value1, column2 = value2 ...., columnN = valueN
WHERE [condition];

You can combine N number of conditions using the AND or the OR operators.


  7. The SQL DELETE query is used to delete the existing records from a table.

You can use the WHERE clause with a DELETE query to delete the selected rows, otherwise all the records would be deleted.

The basic syntax of the DELETE query with the WHERE clause is as follows −

DELETE FROM table_name
WHERE [condition];

You can combine N number of conditions using AND or OR operators.
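As a quick check of the statements above, here is the whole cycle run end to end against an in-memory SQLite database (the customers table and its values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customers (
    id INTEGER PRIMARY KEY, name TEXT NOT NULL, city TEXT)""")

# INSERT: with an explicit column list, and with values for all columns in order
conn.execute("INSERT INTO customers (id, name, city) VALUES (1, 'Adam', 'Chennai')")
conn.execute("INSERT INTO customers VALUES (2, 'Alex', 'Delhi')")

conn.execute("UPDATE customers SET city = 'Mumbai' WHERE id = 2")  # WHERE limits the rows
conn.execute("DELETE FROM customers WHERE id = 1")

rows = conn.execute("SELECT id, name, city FROM customers").fetchall()
print(rows)   # [(2, 'Alex', 'Mumbai')]
```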

Question: – 5(b) What are the various types of inner join operations? Why is theta join required?

Answer: –

Different Types of SQL JOINs

Here are the different types of the JOINs in SQL:

  • (INNER) JOIN: Returns records that have matching values in both tables
  • LEFT (OUTER) JOIN: Return all records from the left table, and the matched records from the right table
  • RIGHT (OUTER) JOIN: Return all records from the right table, and the matched records from the left table
  • FULL (OUTER) JOIN: Return all records when there is a match in either left or right table


(Figure: Venn diagrams of FULL OUTER JOIN, INNER JOIN, RIGHT JOIN and LEFT JOIN)

Theta Join:

In a theta join we apply a condition on the input relation(s), and then only those selected rows are used in the cross product to be merged and included in the output. In a normal cross product, all the rows of one relation are mapped/merged with all the rows of the second relation, but here only selected rows of a relation take part in the cross product with the second relation.

If R and S are two relations, then θ is the condition which is applied for the select operation on one relation, and then only the selected rows are cross-producted with all the rows of the second relation. For example, given two relations FACULTY and COURSE, we first apply the select operation on the FACULTY relation to select certain specific rows; then these rows have a cross product with the COURSE relation. This is the difference between a plain cross product and a theta join. We will now see both relations, their different attributes, and finally the cross product after carrying out the select operation on a relation.
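Here is a runnable sketch of the join types above, using SQLite and hypothetical FACULTY/COURSE tables (older SQLite versions lack RIGHT and FULL OUTER JOIN, so only INNER and LEFT are shown); the last query is a theta join, whose ON condition is an inequality rather than a simple equality:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE faculty (fid INTEGER, fname TEXT, max_load INTEGER);
CREATE TABLE course  (cid TEXT, fid INTEGER, credits INTEGER);
INSERT INTO faculty VALUES (1, 'Rao', 3), (2, 'Sen', 1);
INSERT INTO course  VALUES ('DBMS', 1, 4), ('OS', 3, 4);
""")
inner = conn.execute(
    "SELECT fname, cid FROM faculty JOIN course USING (fid)").fetchall()
left = conn.execute(
    "SELECT fname, cid FROM faculty LEFT JOIN course USING (fid)").fetchall()
theta = conn.execute(   # theta join: condition is an inequality, not an equality
    "SELECT fname, cid FROM faculty JOIN course ON credits > max_load").fetchall()
print(inner)   # [('Rao', 'DBMS')] -- only matching fid values
print(left)    # also includes ('Sen', None) for the unmatched left row
print(theta)   # 4 rows: every faculty/course pair with credits > max_load
```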


Database Management System (May-2017) Unit-I

Question:- 2(a) Explain the operation of two-tier client/server architecture. How is three-tier client/server architecture different from it?

Answer:-Understanding DBMS Architecture

A Database Management System is not always directly available for users and applications to access and store data in it. A Database Management System can be centralised (all the data stored at one location), decentralised (multiple copies of the database at different locations) or hierarchical, depending upon its architecture.

A 1-tier DBMS architecture also exists; this is when the database is directly available to the user for storing data. Generally such a setup is used for local application development, where programmers communicate directly with the database for quick response.

Database Architecture is logically of two types:

  1. 2-tier DBMS architecture
  2. 3-tier DBMS architecture

2-tier DBMS Architecture

2-tier DBMS architecture includes an Application layer between the user and the DBMS, which is responsible to communicate the user’s request to the database management system and then send the response from the DBMS to the user.

An application interface known as ODBC (Open Database Connectivity) provides an API that allows client-side programs to call the DBMS. Most DBMS vendors provide ODBC drivers for their DBMS.

The two-tier architecture is based on the client-server model. The direct communication takes place between client and server; there is no intermediary between them. Because of this tight coupling, a two-tiered application will run faster.


The above figure shows the architecture of two-tier. Here the direct communication happens between client and server, there is no intermediate layer between client and server. The Two-tier architecture is divided into two parts:

  1. Client Application (Client Tier)
  2. Database (Data Tier)

On the client application side, the code is written for saving the data in the database server. The client sends the request to the server, and the server processes the request and sends back the data. The main problem of two-tier architecture is that the server cannot respond to multiple requests at the same time, which can cause data integrity issues. When the developers are not disciplined, the display logic, business logic and database logic get muddled up and/or duplicated in a 2-tier client-server system.

Advantages:

1. Easy to maintain, and modification is fairly easy.
2. Communication is faster.

Disadvantages:

1. In two-tier architecture, application performance degrades as the number of users increases.
2. Cost-ineffective.


Such an architecture provides the DBMS extra security as it is not exposed to the End User directly. Also, security can be improved by adding security and authentication checks in the Application layer too.

3-tier DBMS Architecture

3-tier DBMS architecture is the most commonly used architecture for web applications.

It is an extension of the 2-tier architecture. In the 2-tier architecture, we have an application layer which can be accessed programmatically to perform various operations on the DBMS. The application generally understands the Database Access Language and processes end users' requests to the DBMS.

In 3-tier architecture, an additional Presentation or GUI Layer is added, which provides a graphical user interface for the End user to interact with the DBMS.


For the end user, the GUI layer is the Database System, and the end user has no idea about the application layer and the DBMS system.

A three-tier architecture typically comprises a presentation tier, a business or data access tier, and a data tier. The three layers in the three-tier architecture are as follows:

  1. Client layer
  2. Business layer
  3. Data layer
  1. Client layer: Represents a Web browser, a Java or other application, an applet, a WAP phone, etc. The client tier makes requests to the Web server, which serves the request by either returning static content, if it is present in the Web server, or forwarding the request to a Servlet or JSP in the application server for static or dynamic content.
  2. Business layer: This layer provides the business services. This tier contains the business logic and the business data. All the business logic, like validation of data, calculations, data insertion, etc., is centralized in this tier, as opposed to 2-tier systems where the business logic is scattered between the front end and the back end. The benefit of having a centralized business tier is that the same business logic can support different types of clients like a browser, a WAP (Wireless Application Protocol) client, or other standalone applications written in Java, C++, C#, etc. This tier acts as an interface between the Client layer and the Data Access Layer; also called the intermediary layer, it helps to make communication faster between the client and data layers.
  3. Data layer: This layer is the external resource, such as a database, ERP system, Mainframe system, etc., responsible for storing the data. This tier is also known as the Data Tier. The Data Access Layer contains methods to connect with the database or other data source and to perform insert, update, delete, and get operations on the data source based on our input data.



Advantages of three-tier architecture:

  1. High performance, lightweight persistent objects.
  2. Scalability: each tier can scale horizontally.
  3. Performance: because the presentation tier can cache requests, network utilization is minimized and the load on the application and data tiers is reduced.
  4. Better reusability.
  5. Improved data integrity.
  6. Improved security: clients do not access the database directly.
  7. Forced separation of user-interface logic and business logic.
  8. Business logic sits on a small number of centralized machines (possibly just one).
  9. Easy to maintain, manage, and scale; loosely coupled.


Disadvantages of three-tier architecture:

  1. Increased complexity and development effort.

Question: -2(b) Describe two alternatives for specifying structural constraints on relationship types, with an example.

Answer: –

figure 7.9


Constraints on Relationship Types

Relationship types usually have certain constraints that limit the possible combinations of entities that may participate in the corresponding relationship set. These constraints are determined from the miniworld situation that the relationships represent. For example, in Figure 7.9, if the company has a rule that each employee must work for exactly one department, then we would like to describe this constraint in the schema. We can distinguish two main types of binary relationship constraints: cardinality ratio and participation.

Cardinality Ratios for Binary Relationships
The cardinality ratio for a binary relationship specifies the maximum number of relationship instances that an entity can participate in. For example, in the WORKS_FOR binary relationship type, DEPARTMENT:EMPLOYEE is of cardinality ratio 1:N, meaning that each department can be related to (that is, employs) any number of employees, but an employee can be related to (work for) only one department. This means that for this particular relationship WORKS_FOR, a particular department entity can be related to any number of employees (N indicates there is no maximum number). On the other hand, an employee can be related to a maximum of one department. The possible cardinality ratios for binary relationship types are 1:1, 1:N, N:1, and M:N. An example of a 1:1 binary relationship is MANAGES (Figure 7.12), which relates a department entity to the employee who manages that department. This represents the miniworld constraint that, at any point in time, an employee can manage one department only and a department can have one manager only. The relationship type WORKS_ON (Figure 7.13) is of cardinality ratio M:N, because the miniworld rule is that an employee can work on several projects and a project can have several employees. Cardinality ratios for binary relationships are represented on ER diagrams by displaying 1, M, and N on the diamonds. Notice that in this notation, we can specify either no maximum (N) or a maximum of one (1) on participation. An alternative notation allows the designer to specify a specific maximum number on participation, such as 4 or 5.

figure 7.12
figure 7.13

Participation Constraints and Existence Dependencies. The participation constraint specifies whether the existence of an entity depends on its being related to another entity via the relationship type. This constraint specifies the minimum number of relationship instances that each entity can participate in, and is sometimes called the minimum cardinality constraint. There are two types of participation constraints—total and partial—that we illustrate by example. If a company policy states that every employee must work for a department, then an employee entity can exist only if it participates in at least one WORKS_FOR relationship instance (Figure 7.9). Thus, the participation of EMPLOYEE in WORKS_FOR is called total participation, meaning that every entity in the total set of employee entities must be related to a department entity via WORKS_FOR. Total participation is also called existence dependency. In Figure 7.12 we do not expect every employee to manage a department, so the participation of EMPLOYEE in the MANAGES relationship type is partial, meaning that some or part of the set of employee entities are related to some department entity via MANAGES, but not necessarily all. We will refer to the cardinality ratio and participation constraints, taken together, as the structural constraints of a relationship type. In ER diagrams, total participation (or existence dependency) is displayed as a double line connecting the participating entity type to the relationship, whereas partial participation is represented by a single line (see Figure 7.2). Notice that in this notation, we can either specify no minimum (partial participation) or a minimum of one (total participation). The alternative notation (see Section 7.7.4) allows the designer to specify a specific minimum number on participation in the relationship, such as 4 or 5.
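The two structural constraints just described can be checked mechanically on sample data. Below is a minimal sketch in Python; the entity sets, the WORKS_FOR pairs, and both helper functions are illustrative, not from the text:

```python
# Hypothetical instance of the WORKS_FOR relationship: (employee, department) pairs.
employees = {"e1", "e2", "e3"}
works_for = {("e1", "d1"), ("e2", "d1"), ("e3", "d2")}

def satisfies_1_to_n(rel):
    """Cardinality ratio 1:N (DEPARTMENT:EMPLOYEE): no employee appears twice."""
    emps = [e for e, _ in rel]
    return len(emps) == len(set(emps))

def total_participation(rel, entities):
    """Total participation: every entity occurs in at least one instance."""
    return set(entities) <= {e for e, _ in rel}

print(satisfies_1_to_n(works_for))                 # True: each employee works for one department
print(total_participation(works_for, employees))   # True: no employee is left out
```

Removing the pair for "e3" would make `total_participation` return False, which is precisely a violation of total participation (existence dependency).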

  • Attributes of Relationship Types

Relationship types can also have attributes, similar to those of entity types. For example, to record the number of hours per week that an employee works on a particular project, we can include an attribute Hours for the WORKS_ON relationship type in Figure 7.13. Another example is to include the date on which a manager started managing a department via an attribute Start_date for the MANAGES relationship type in Figure 7.12. Notice that attributes of 1:1 or 1:N relationship types can be migrated to one of the participating entity types. For example, the Start_date attribute for the MANAGES relationship can be an attribute of either EMPLOYEE or DEPARTMENT, although conceptually it belongs to MANAGES. This is because MANAGES is a 1:1 relationship, so every department or employee entity participates in at most one relationship instance. Hence, the value of the Start_date attribute can be determined separately, either by the participating department entity or by the participating employee (manager) entity. For a 1:N relationship type, a relationship attribute can be migrated only to the entity type on the N-side of the relationship. For example, in Figure 7.9, if the WORKS_FOR relationship also has an attribute Start_date that indicates when an employee started working for a department, this attribute can be included as an attribute of EMPLOYEE. This is because each employee works for only one department, and hence participates in at most one relationship instance in WORKS_FOR. In both 1:1 and 1:N relationship types, the decision where to place a relationship attribute—as a relationship type attribute or as an attribute of a participating entity type—is determined subjectively by the schema designer. For M:N relationship types, some attributes may be determined by the combination of participating entities in a relationship instance, not by any single entity. Such attributes must be specified as relationship attributes. 
An example is the Hours attribute of the M:N relationship WORKS_ON (Figure 7.13); the number of hours per week an employee currently works on a project is determined by an employee-project combination and not separately by either entity.

Question: – 3(a) Discuss different types of user-friendly interfaces and database utilities, and list the common functions that the utilities perform.

Answer: – User-friendly interfaces

User-friendly interfaces provided by a DBMS may include the following:

Menu-Based Interfaces for Web Clients or Browsing. These interfaces present the user with lists of options (called menus) that lead the user through the formulation of a request. Menus do away with the need to memorize the specific commands and syntax of a query language; rather, the query is composed step by step by picking options from a menu that is displayed by the system. Pull-down menus are a very popular technique in Web-based user interfaces. They are also often used in browsing interfaces, which allow a user to look through the contents of a database in an exploratory and unstructured manner.

Forms-Based Interfaces. A forms-based interface displays a form to each user. Users can fill out all of the form entries to insert new data, or they can fill out only certain entries, in which case the DBMS will retrieve matching data for the remaining entries. Forms are usually designed and programmed for naive users as interfaces to canned transactions. Many DBMSs have forms specification languages, special languages that help programmers specify such forms. SQL*Forms is a form-based language that specifies queries using a form designed in conjunction with the relational database schema. Oracle Forms is a component of the Oracle product suite that provides an extensive set of features to design and build applications using forms. Some systems have utilities that define a form by letting the end user interactively construct a sample form on the screen.

Graphical User Interfaces. A GUI typically displays a schema to the user in diagrammatic form. The user can then specify a query by manipulating the diagram. In many cases, GUIs utilize both menus and forms. Most GUIs use a pointing device, such as a mouse, to select certain parts of the displayed schema diagram.

Database System Utilities: In addition to possessing the software modules just described, most DBMSs have database utilities that help the DBA manage the database system. Common utilities have the following types of functions:

  • Loading. A loading utility is used to load existing data files, such as text files or sequential files, into the database. Usually, the current (source) format of the data file and the desired (target) database file structure are specified to the utility, which then automatically reformats the data and stores it in the database. With the proliferation of DBMSs, transferring data from one DBMS to another is becoming common in many organizations. Some vendors offer products that generate the appropriate loading programs, given the existing source and target database storage descriptions (internal schemas). Such tools are also called conversion tools. For the hierarchical DBMS called IMS (IBM) and for many network DBMSs including IDMS (Computer Associates), SUPRA (Cincom), and IMAGE (HP), the vendors or third-party companies make a variety of conversion tools available (e.g., Cincom's SUPRA Server SQL) to transform data into the relational model.
  • Backup. A backup utility creates a backup copy of the database, usually by dumping the entire database onto tape or another mass storage medium. The backup copy can be used to restore the database in case of catastrophic disk failure. Incremental backups, where only the changes since the previous backup are recorded, are also often used. Incremental backup is more complex, but saves storage space.
  • Database storage reorganization. This utility can be used to reorganize a set of database files into different file organizations and create new access paths to improve performance.
  • Performance monitoring. Such a utility monitors database usage and provides statistics to the DBA.
The DBA uses the statistics in making decisions such as whether or not to reorganize files or whether to add or drop indexes to improve performance. Other utilities may be available for sorting files, handling data compression, monitoring access by users, interfacing with the network, and performing other functions.

Question: – 3(b) Why is the E-R model considered a high-level conceptual model? Discuss the role of high-level models in the database design process.

Answer: –

The three levels of data modeling, the conceptual data model, the logical data model, and the physical data model, were discussed in prior sections. Here we compare these three types of data models. The table below compares their features:

Feature                 Conceptual   Logical   Physical
Entity Names                ✓           ✓
Entity Relationships        ✓           ✓
Primary Keys                            ✓          ✓
Foreign Keys                            ✓          ✓
Table Names                                        ✓
Column Names                                       ✓
Column Data Types                                  ✓


Conceptual Data Model

A conceptual data model identifies the highest-level relationships between the different entities. Features of a conceptual data model include:

  • Includes the important entities and the relationships among them.
  • No attribute is specified.
  • No primary key is specified.

The figure below is an example of a conceptual data model.

[Figure: conceptual data model example]

From the figure above, we can see that the only information shown via the conceptual data model is the entities that describe the data and the relationships between those entities. No other information is shown through the conceptual data model.

Logical Data Model

A logical data model describes the data in as much detail as possible, without regard to how it will be physically implemented in the database. Features of a logical data model include:

  • Includes all entities and relationships among them.
  • All attributes for each entity are specified.
  • The primary key for each entity is specified.
  • Foreign keys (keys identifying the relationship between different entities) are specified.
  • Normalization occurs at this level.

The steps for designing the logical data model are as follows:

  1. Specify primary keys for all entities.
  2. Find the relationships between different entities.
  3. Find all attributes for each entity.
  4. Resolve many-to-many relationships.
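Step 4 (resolving many-to-many relationships) can be illustrated with a small sketch. The STUDENT/COURSE entities and the ENROLLMENT junction below are hypothetical examples, not part of the text; the idea is that an M:N relationship is replaced by an associative entity holding a foreign key to each side:

```python
# Two entities with an M:N relationship between them.
students = {1: "Ann", 2: "Bob"}
courses  = {10: "DBMS", 20: "Compilers"}

# Junction entity: each row carries one student_id and one course_id,
# turning the single M:N relationship into two 1:N relationships.
enrollments = [
    {"student_id": 1, "course_id": 10},
    {"student_id": 1, "course_id": 20},
    {"student_id": 2, "course_id": 10},
]

def courses_of(student_id):
    """Traverse the junction to recover one direction of the M:N relationship."""
    return sorted(e["course_id"] for e in enrollments
                  if e["student_id"] == student_id)

print(courses_of(1))   # [10, 20]
```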

The figure below is an example of a logical data model.

[Figure: logical data model example]

Comparing the logical data model shown above with the conceptual data model diagram, we see the main differences between the two:

  • In a logical data model, primary keys are present, whereas in a conceptual data model, no primary key is present.
  • In a logical data model, all attributes are specified within an entity. No attributes are specified in a conceptual data model.
  • Relationships between entities are specified using primary keys and foreign keys in a logical data model. In a conceptual data model, the relationships are simply stated, not specified, so we simply know that two entities are related, but we do not specify what attributes are used for this relationship.

Physical Data Model

A physical data model represents how the model will be built in the database. It shows all table structures, including column names, column data types, column constraints, primary keys, foreign keys, and relationships between tables. Features of a physical data model include:

  • Specification of all tables and columns.
  • Foreign keys are used to identify relationships between tables.
  • Denormalization may occur based on user requirements.
  • Physical considerations may cause the physical data model to be quite different from the logical data model.
  • The physical data model will differ for different RDBMSs. For example, the data type for a column may differ between MySQL and SQL Server.

The steps for physical data model design are as follows:

  1. Convert entities into tables.
  2. Convert relationships into foreign keys.
  3. Convert attributes into columns.
  4. Modify the physical data model based on physical constraints / requirements.

The figure below is an example of a physical data model.

[Figure: physical data model example]

Comparing the physical data model shown above with the logical data model diagram, we see the main differences between the two:

  • Entity names are now table names.
  • Attributes are now column names.
  • Data type for each column is specified. Data types can be different depending on the actual database being used.

This article is contributed by Tarun Jangra. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.



Compiler Design (May-2017) Unit-IV

Question: -8(a) Explain the main issues in the design of a code generator in detail.

Answer: – The issues in the design of a code generator are:

1. Input to the code generator

  • The input to the code generator consists of the intermediate representation of the source program.
  • There are several forms of intermediate language, such as postfix notation, quadruples, and syntax trees or DAGs.
  • Semantic errors should be detected before the input is submitted to the code generator.
  • The code generation phase therefore requires complete, error-free intermediate code as its input.

2. Target program

  • The output of the code generator is the target program. The output may take a variety of forms: absolute machine language, relocatable machine language, or assembly language.
  • Producing an absolute machine language program as output has the advantage that it can be placed at a fixed location in memory and immediately executed.
  • Producing a relocatable machine language program as output allows subroutines to be compiled separately; a set of relocatable object modules can be linked together and loaded for execution by a linking loader.
  • Producing an assembly language program as output makes the process of code generation somewhat easier: we can generate symbolic instructions and use the macro facilities of the assembler to help generate code.

3. Memory management

  • Mapping names in the source program to addresses of data objects in run-time memory is done cooperatively by the front end and the code generator.
  • We assume that a name in a three-address statement refers to a symbol table entry for the name.
  • From the symbol table information, a relative address can be determined for the name in a data area.

4. Instruction selection

  • If we do not care about the efficiency of the target program, instruction selection is straightforward, but a naive statement-by-statement translation often produces redundant code. For example, the sequence of statements
    a := b + c
    d := a + e
    would be translated into
    MOV b, R0
    ADD c, R0
    MOV R0, a
    MOV a, R0
    ADD e, R0
    MOV R0, d
  • Here the fourth statement is redundant (a is already in R0), so we can eliminate it.

5. Register allocation

  • Instructions involving register operands are usually shorter and faster than those involving operands in memory.
  • The use of registers is often subdivided into two subproblems:
  • During register allocation, we select the set of variables that will reside in registers at a point in the program.
  • During a subsequent register assignment phase, we pick the specific register that each variable will reside in.
  • Finding an optimal assignment of registers to variables is difficult, even with a single register.
  • Mathematically, the problem is NP-complete.

6. Choice of evaluation order

  • The order in which computations are performed can affect the efficiency of the target code. Some computation orders require fewer registers to hold intermediate results than others. Picking the best order is another difficult, NP-complete problem.

7. Approaches to code generation

  • The most important criterion for a code generator is that it produce correct code.
  • Correctness takes on special significance because of the number of special cases that a code generator must face.
  • Given the premium on correctness, designing a code generator so that it can be easily implemented, tested, and maintained is an important design goal.

Question: -8(b) Define Peephole optimization. List the characteristics of peephole optimization.

Answer: – Definition: Peephole optimization is a simple and effective technique for locally improving target code. The technique improves the performance of the target program by examining a short sequence of target instructions (called the peephole) and replacing these instructions by a shorter or faster sequence whenever possible. The peephole is a small, moving window on the target program.

Characteristics of Peephole Optimization

Peephole optimization can be applied to the target code using the following kinds of transformations.

1. Redundant instruction elimination

  • Redundant loads and stores, in particular, can be eliminated by this type of transformation. Consider:
    MOV R0, x
    MOV x, R0
  • We can eliminate the second instruction, since the value of x is already in R0. However, if MOV x, R0 carries a label, we cannot remove it, because we cannot be sure the first instruction is always executed immediately before it.
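A minimal sketch of this transformation, assuming instructions are modeled as (label, op, src, dst) tuples (a representation chosen here purely for illustration):

```python
def eliminate_redundant_moves(code):
    """Drop an unlabeled MOV that reloads the value just stored."""
    out = []
    for ins in code:
        label, op, src, dst = ins
        if (out and label is None and op == "MOV"
                and out[-1][1] == "MOV"
                and out[-1][2] == dst          # previous source is our destination
                and out[-1][3] == src):        # previous destination is our source
            continue                           # value is already where it is needed
        out.append(ins)
    return out

code = [(None, "MOV", "R0", "x"),
        (None, "MOV", "x", "R0")]              # redundant: x is already in R0
print(eliminate_redundant_moves(code))         # only the first MOV survives
```

Note that the `label is None` guard implements the caveat above: a labeled load is a potential jump target and must be kept.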

2. Unreachable code

  • An unlabeled instruction immediately following an unconditional jump may be removed.
  • This operation can be repeated to eliminate a sequence of instructions. For example, consider:
    #define debug 0
    if (debug) {
        print debugging information
    }
  • In the intermediate representation, the if statement may be translated as:
    if debug = 1 goto L1
    goto L2
    L1: print debugging information
    L2:
  • One obvious peephole optimization is to eliminate jumps over jumps. Thus, no matter what the value of debug, the code can be replaced by:
    if debug ≠ 1 goto L2
    print debugging information
    L2:
  • Since debug is set to 0 at the beginning of the program, constant propagation replaces this by:
    if 0 ≠ 1 goto L2
    print debugging information
    L2:
  • The argument of the first statement always evaluates to true, so the statement can be replaced by goto L2.
  • Then all the statements that print debugging information are manifestly unreachable and can be eliminated one at a time.

3. Flow-of-control optimization

  • Unnecessary jumps can be eliminated in either the intermediate code or the target code by the following types of peephole optimizations.
  • We can replace the jump sequence
    goto L1
    L1: goto L2
    by the sequence
    goto L2
    L1: goto L2
  • If there are no jumps to L1, then it may be possible to eliminate the statement L1: goto L2, provided it is preceded by an unconditional jump. Similarly, the sequence
    if a < b goto L1
    L1: goto L2
    can be replaced by
    if a < b goto L2
    L1: goto L2
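The jump-sequence rewrite can be sketched as a small pass; the (op, target) representation and the `label_targets` map below are illustrative simplifications:

```python
def collapse_jumps(code, label_targets):
    """Retarget every jump through chains of labels that only contain a goto.

    label_targets maps a label to the label its sole goto jumps to,
    e.g. {"L1": "L2"} for the statement "L1: goto L2".
    """
    out = []
    for op, target in code:
        seen = set()                       # guard against goto cycles
        while target in label_targets and target not in seen:
            seen.add(target)
            target = label_targets[target] # follow L1 -> L2 -> ... to the end
        out.append((op, target))
    return out

code = [("goto", "L1"), ("if a<b goto", "L1")]
print(collapse_jumps(code, {"L1": "L2"}))  # both jumps now go straight to L2
```

Once no jump targets L1 any more, a separate dead-code step can delete the "L1: goto L2" statement itself, as described above.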

4. Algebraic simplification

  • Peephole optimization is an effective technique for algebraic simplification.
  • Statements such as x := x + 0 or x := x * 1 compute no new value and can be eliminated by peephole optimization.
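A sketch of this simplification, assuming three-address statements are modeled as (dst, op, lhs, rhs) tuples (an illustrative representation, not from the text):

```python
def simplify(code):
    """Drop algebraic identities of the form x = x + 0 and x = x * 1."""
    out = []
    for dst, op, lhs, rhs in code:
        if op == "+" and dst == lhs and rhs == 0:
            continue                      # x = x + 0 has no effect
        if op == "*" and dst == lhs and rhs == 1:
            continue                      # x = x * 1 has no effect
        out.append((dst, op, lhs, rhs))
    return out

code = [("x", "+", "x", 0), ("y", "*", "y", 1), ("z", "+", "x", 1)]
print(simplify(code))                     # only z = x + 1 remains
```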

5. Reduction in strength

  • Certain machine instructions are cheaper than others.
  • To improve performance, we can replace expensive instructions with equivalent cheaper ones.
  • For example, x² is cheaper to implement as x * x than as a call to an exponentiation routine. Similarly, multiplication or division by a power of two is cheaper to implement as a shift, and in some contexts repeated addition or subtraction is cheaper than multiplication or division.
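The power-of-two case can be sketched as a pass over (dst, op, lhs, rhs) tuples; this representation is an illustrative simplification:

```python
def reduce_strength(code):
    """Replace * and / by a power of two with the equivalent shift."""
    out = []
    for dst, op, lhs, rhs in code:
        # rhs is a power of two exactly when it has a single set bit
        if isinstance(rhs, int) and rhs > 0 and rhs & (rhs - 1) == 0:
            shift = rhs.bit_length() - 1
            if op == "*":
                out.append((dst, "<<", lhs, shift))
                continue
            if op == "/":
                out.append((dst, ">>", lhs, shift))
                continue
        out.append((dst, op, lhs, rhs))
    return out

code = [("t1", "*", "a", 2), ("t2", "/", "a", 2), ("t3", "*", "a", 3)]
print(reduce_strength(code))
# t1 = a << 1 and t2 = a >> 1; a * 3 is not a power-of-two case and is left alone
```

(The division case assumes non-negative integer operands; for signed values a real compiler must adjust for rounding direction.)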

6. Machine idioms

  • The target machine may have hardware instructions that implement certain specific operations efficiently.
  • We can replace target instructions by such equivalent machine instructions to improve efficiency.
  • Example: some machines have auto-increment and auto-decrement addressing modes. These modes can be used in code for a statement like i = i + 1.

Question: -9(a) Explain DAG representation of basic blocks in detail.

Answer: – DAG Representation

  • DAG stands for Directed Acyclic Graph.
  • Both the syntax tree and the DAG are graphical representations, but a syntax tree does not identify common subexpressions, whereas a DAG does.
  • Another use of a DAG is in applying optimization techniques to a basic block.
  • To apply optimization techniques to a basic block, a DAG is constructed from the three-address code that is the output of intermediate code generation.

Characteristics of DAG are:

  • All internal nodes are labeled with operators.
  • External (leaf) nodes are labeled with identifiers, variable names, or constants.

Algorithm for Construction of DAG

There are three possible cases to construct DAG on three address code:

Case 1: x = y op z

Case 2: x = op y

Case 3: x = y

The DAG can be constructed as follows:

Step 1: If y has no current node, create a leaf node with label y. Similarly, create a leaf node with label z if needed.

Step 2: For case 1, create a node labeled op whose left child is the node for y and whose right child is the node for z, first checking whether such a node already exists (a common subexpression). For case 2, determine whether there is already a node labeled op with the single child node y. For case 3, the node n is simply the node for y.

Step 3: Delete x from the list of attached identifiers of the node currently holding x, and append x to the list of attached identifiers for the node n found in Step 2.


Consider the following three address code statements.

a = b * c

d = b

e = d * c

b = e

f = b + c

g = f + d

Step 1

Consider the first statement, i.e., a = b * c. Create a leaf node with label b and c as left and right child respectively and parent of it will be *. Append resultant variable a to the node *.

Step 2

For second statement, i.e., d = b, node b is already created. So, append d to this node.


Step 3

For the third statement e = d * c, the nodes for d, c, and * already exist, and d is attached to the node for b, so d * c is the same expression as b * c. No new node is created; instead, e is appended to the existing * node.


Step 4

For fourth statement b = e, append b to node e.


Step 5

For the fifth statement f = b + c, create a node for the operator + whose left child is the node for b and whose right child is the node for c, and append f to the newly created + node.


Step 6

For the last statement g = f + d, create a node for the operator + whose left child is the node for f and whose right child is the node for d, and append g to the newly created + node.


DAG Applications:

  • Determines the common subexpressions
  • Determines which names are used inside the block and which names are computed outside the block
  • Determines which statements of the block may have their computed value used outside the block
  • Code may be represented by a DAG describing the inputs and outputs of each of the arithmetic operations performed within the code; this representation allows the compiler to perform common subexpression elimination efficiently
  • Several programming languages describe systems of values that are related to each other by a directed acyclic graph. When one value changes, its successors are recalculated; each value is evaluated as a function of its predecessors in the DAG

Question: – 9(b) Explain various code optimization techniques. Discuss the strategies for loop optimization and dead code elimination.

Answer:-Code Optimization Techniques

  • Optimization is a program transformation technique which tries to improve the code by making it consume fewer resources (i.e., CPU, memory) and deliver higher speed.
  • In optimization, high-level general programming constructs are replaced by very efficient low-level programming codes.
  • A code optimizing process must follow the three rules given below:
    1. The output code must not, in any way, change the meaning of the program.
    2. Optimization should increase the speed of the program and, if possible, the program should demand fewer resources.
    3. Optimization should itself be fast and should not delay the overall compiling process.
  • Efforts for an optimized code can be made at various levels of compiling the process.
  1. At the beginning, users can change/rearrange the code or use better algorithms to write the code.
  2. After generating intermediate code, the compiler can modify the intermediate code by address calculations and improving loops.
  3. While producing the target machine code, the compiler can make use of memory hierarchy and CPU registers.
  • Optimization can be categorized broadly into two types:

Machine Independent and Machine Dependent.

Machine Independent Optimization

  • In this optimization, the compiler takes in the intermediate code and transforms a part of the code that does not involve any CPU registers and/or absolute memory locations.
  • For example:
    do
    {
        item = 10;
        value = value + item;
    } while (value < 100);
  • This code involves repeated assignment of the identifier item, which if we put this way:
    item = 10;
    do
    {
        value = value + item;
    } while (value < 100);

should not only save CPU cycles, but can be used on any processor.

Machine Dependent Optimization

  • Machine-dependent optimization is done after the target code has been generated and when the code is transformed according to the target machine architecture.
  • It involves CPU registers and may have absolute memory references rather than relative references.
  • Machine-dependent optimizers put efforts to take maximum advantage of memory hierarchy.

Machine-independent optimization includes two types:

  1. Function-preserving transformations
  2. Loop optimization

1. Function-Preserving Transformations

  • Common Subexpression Elimination

An expression that is recomputed with the same operand values should be computed only once and its result reused:

    Before:              After:
    T1 = 4 * i           T1 = 4 * i
    T2 = T2 + T1         T2 = T2 + T1
    T3 = 4 * i           T4 = T2 + T1
    T4 = T2 + T3
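This transformation can be sketched as a small pass over (dst, op, lhs, rhs) tuples matching the T1..T4 example; the representation and the invalidation rule below are simplifications chosen for illustration:

```python
def eliminate_cse(code):
    """Local common-subexpression elimination with operand aliasing."""
    available = {}   # (op, lhs, rhs) -> variable currently holding that value
    alias = {}       # eliminated variable -> earlier variable with the same value
    out = []
    for dst, op, lhs, rhs in code:
        lhs = alias.get(lhs, lhs)          # rewrite uses of eliminated temps
        rhs = alias.get(rhs, rhs)
        key = (op, lhs, rhs)
        if key in available:
            alias[dst] = available[key]    # reuse the earlier result; emit nothing
            continue
        out.append((dst, op, lhs, rhs))
        # dst is redefined: drop stale expressions and aliases involving it
        available = {k: v for k, v in available.items()
                     if dst not in (k[1], k[2]) and v != dst}
        alias = {k: v for k, v in alias.items() if dst not in (k, v)}
        if dst not in (lhs, rhs):
            available[key] = dst
    return out

code = [("T1", "*", 4, "i"), ("T2", "+", "T2", "T1"),
        ("T3", "*", 4, "i"), ("T4", "+", "T2", "T3")]
print(eliminate_cse(code))   # T3 is gone; T4 now uses T1
```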
  • Constant Folding

If an expression produces a constant value, then instead of performing the calculation at run time we compute it once at compile time and assign the result:

    Before:          After:
    T1 = 5 / 2       T1 = 2.5
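A sketch of constant folding on the same kind of (dst, op, lhs, rhs) tuples, again an illustrative representation:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def fold_constants(code):
    """Evaluate at compile time any statement whose operands are both constants."""
    out = []
    for dst, op, lhs, rhs in code:
        if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
            out.append((dst, "const", OPS[op](lhs, rhs), None))
        else:
            out.append((dst, op, lhs, rhs))
    return out

print(fold_constants([("t1", "/", 5, 2)]))   # t1 = 2.5, computed once
```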
  • Copy Propagation

After a copy statement such as G = F, later uses of G can be replaced by F. So if the value of F is sent to G and the value of G is then sent to H, the intermediate variable G can be eliminated by assigning the value of F to H directly:

    Before:              After:
    T1 = X               T1 = X
    T2 = T3 + T2         T2 = T3 + T2
    T3 = X               T3 = T1
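The F/G/H chain described above can be sketched as a pass over (dst, op, lhs, rhs) tuples, using "copy" as an illustrative opcode for simple assignment:

```python
def propagate_copies(code):
    """After x = y, rewrite later uses of x as y until x or y is redefined."""
    copies = {}                       # copy target -> original source
    out = []
    for dst, op, lhs, rhs in code:
        lhs = copies.get(lhs, lhs)    # rewrite uses through known copies
        rhs = copies.get(rhs, rhs)
        # dst is being redefined: forget copies involving it
        copies = {k: v for k, v in copies.items() if dst not in (k, v)}
        if op == "copy":
            copies[dst] = lhs
        out.append((dst, op, lhs, rhs))
    return out

code = [("g", "copy", "f", None),     # g = f
        ("h", "copy", "g", None)]     # h = g
print(propagate_copies(code))         # h = f; g may now be dead and removable
```

Copy propagation by itself does not delete g = f; it merely makes g unused, after which dead-code elimination can remove it.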
  • Dead Code Elimination

Dead code consists of one or more code statements that are:

    • Either never executed or unreachable,
    • Or if executed, their output is never used.

Thus, dead code plays no role in any program operation and therefore it can simply be eliminated.
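A sketch of dead-code elimination within one basic block, using the (dst, op, lhs, rhs) tuple representation (an illustrative simplification): working backwards, an assignment is dropped when its target is not needed later, where `live_out` names the variables still needed after the block.

```python
def eliminate_dead_code(code, live_out):
    """Backward pass: keep a statement only if its result is live."""
    live = set(live_out)
    out = []
    for dst, op, lhs, rhs in reversed(code):
        if dst not in live:
            continue                  # result never used: drop the statement
        out.append((dst, op, lhs, rhs))
        live.discard(dst)             # dst is defined here, not live before it
        live.update(v for v in (lhs, rhs) if isinstance(v, str))
    return list(reversed(out))

code = [("a", "*", "x", "y"),         # overwritten below before any use: dead
        ("a", "copy", "z", None),
        ("b", "+", "a", 1)]
print(eliminate_dead_code(code, {"b"}))   # the a = x * y statement is gone
```

This mirrors the partially-dead-code discussion below: when the value assigned to 'a' by x * y is never used before 'a' is reassigned from 'z', that first assignment is eligible for elimination.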

Partially dead code Elimination

  • There are some code statements whose computed values are used only under certain circumstances, i.e., sometimes the values are used and sometimes they are not.
  • Such codes are known as partially dead-code.

  • The above control flow graph depicts a chunk of program where variable ‘a’ is used to assign the output of expression ‘x * y’.
  • Let us assume that the value assigned to ‘a’ is never used inside the loop.
  • Immediately after the control leaves the loop, ‘a’ is assigned the value of variable ‘z’, which would be used later in the program.
  • We conclude here that the assignment code of ‘a’ is never used anywhere, therefore it is eligible to be eliminated.
  • Likewise, the picture below depicts that the conditional statement is always false, implying that the code, written in true case, will never be executed, hence it can be removed.

2. Loop Optimization

We are going to perform optimization on loops.

  • Code Motion

If a loop's test or body repeatedly performs a computation whose result does not change across iterations, we can perform the calculation once outside the loop and use the stored value inside it:

    Before:                       After:
    while (i <= limit - 2)        t1 = limit - 2
                                  while (i <= t1)
  • Strength Reduction

Expensive operators such as multiplication and division can be replaced by cheaper operations. Multiplication by 2 can be replaced by a left shift (a << 1), and division by 2 by a right shift (a >> 1):

    Before:          After:
    T1 = a * 2       T1 = a << 1
    T1 = a / 2       T1 = a >> 1
  • Frequency Reduction

If an expression inside a loop is not affected by the loop (it is loop-invariant), we calculate it once outside the loop and use the stored value inside the loop.

  • Loop Distribution

If a single loop assigns values to several different arrays, the loop can be split into separate loops, each working on one array. This splitting is called loop distribution (or loop fission).

Peephole Optimization

  • This optimization technique works locally on the source code to transform it into an optimized code. By locally, we mean a small portion of the code block at hand.
  • These methods can be applied on intermediate codes as well as on target codes.
  • A bunch of statements is analyzed and checked for the following possible optimizations:
  1. Redundant instruction elimination
  • At the source code level, the user can remove redundant computations; at the compilation level, the compiler searches for instructions that are redundant in nature. Multiple load and store instructions may carry the same meaning even if some of them are removed. For example:
    MOV x, R0
    MOV R0, R1
  • We can eliminate the intermediate use of R0 and rewrite the sequence as:
    MOV x, R1

  1. Unreachable code
  • Unreachable code is a part of the program code that is never accessed because of programming constructs.
  • Programmers may have accidently written a piece of code that can never be reached.


int add_ten(int x)
{
    return x + 10;
    printf("value of x is %d", x); /* unreachable */
}

  • In this code segment, the printf statement will never be executed as the program control returns back before it can execute, hence printf can be removed.
  3. Flow of control optimization
  • There are instances in a code where the program control jumps back and forth without performing any significant task.
  • These jumps can be removed. Consider the following chunk of code:
MOV R1, R2
GOTO L1
L1:   GOTO L2
L2:   INC R1
  • In this code, label L1 can be removed as it passes the control to L2. So instead of jumping to L1 and then to L2, the control can directly reach L2, as shown below:
MOV R1, R2
GOTO L2
L2:   INC R1
  4. Algebraic expression simplification
  • There are occasions where algebraic expressions can be simplified. For example, the statement a = a + 0 has no effect and can be removed, and the expression a = a + 1 can simply be replaced by INC a.
  5. Strength reduction
  • There are operations that consume more time and space. Their ‘strength’ can be reduced by replacing them with other operations that consume less time and space, but produce the same result.
  • For example, x * 2 can be replaced by x << 1, which involves only one left shift. Similarly, though a² and a * a produce the same result, a * a is much cheaper to compute than a general exponentiation.
  6. Accessing machine instructions
  • The target machine can deploy more sophisticated instructions, which can have the capability to perform specific operations much efficiently.
  • If the target code can accommodate those instructions directly, that will not only improve the quality of code, but also yield more efficient results.
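As a rough illustration of how a peephole pass works, the redundant-load case above (MOV x, R0 followed by MOV R0, R1) can be collapsed mechanically. This is a hypothetical sketch over instruction strings, not a real code generator, and it assumes the intermediate register is not used afterwards:

```java
import java.util.ArrayList;
import java.util.List;

class Peephole {
    static List<String> optimize(List<String> code) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < code.size(); i++) {
            if (i + 1 < code.size()) {
                String[] a = code.get(i).split("[ ,]+");
                String[] b = code.get(i + 1).split("[ ,]+");
                // Pattern: MOV s, t ; MOV t, u  =>  MOV s, u
                if (a[0].equals("MOV") && b[0].equals("MOV")
                        && a.length == 3 && b.length == 3
                        && a[2].equals(b[1])) {
                    out.add("MOV " + a[1] + ", " + b[2]);
                    i++;                   // skip the now-redundant second MOV
                    continue;
                }
            }
            out.add(code.get(i));
        }
        return out;
    }
}
```

For example, `Peephole.optimize(List.of("MOV x, R0", "MOV R0, R1"))` yields the single instruction `MOV x, R1`, exactly the rewrite described above.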

This article is contributed by Tarun Jangra. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.



Question:- 4(a) How input buffering helps lexical analyzer in compilation process? Discuss with an example.

Answer:- The Role of Lexical Analyzer:


As the first phase of a compiler, the main task of the lexical analyzer is to read the input characters of the source program, group them into lexemes, and produce as output a sequence of tokens, one for each lexeme in the source program. When the lexical analyzer discovers a lexeme constituting an identifier, it needs to enter that lexeme into the symbol table. The lexical analyzer not only identifies the lexemes but also pre-processes the source text, e.g., removing comments and white space.


Lexical analyzers are divided into a cascade of two processes:


  1. Scanning – consists of the simple processes that do not require tokenization of the input, such as deletion of comments and compaction of consecutive white-space characters into one.
  2. Lexical Analysis – the more complex portion, where the scanner produces the sequence of tokens as output.


Tokens, Patterns and Lexemes:


  1. A Token is a pair consisting of a token name and an optional attribute value. The token name is an abstract symbol representing the kind of lexical unit, e.g., a particular keyword or an identifier.
  2. A pattern is a description of the form that the lexemes of a token may take. In case of a keyword as a token the pattern is just a sequence of characters that form the keyword.
  3. A Lexeme is a sequence of characters in the source program that matches the pattern for a token and is identified by the lexical analyzer as an instance of that token.



Input Buffering:


Buffer Pairs:


Because of the amount of time taken to process characters and the large number of characters that must be processed during the compilation of a large source program, specialized buffering techniques have been developed to reduce the amount of overhead required to process a single input character.

Two pointers to the input are maintained:

  1. Pointer Lexeme Begin, marks the beginning of the current lexeme, whose extent we are attempting to determine
  2. Pointer Forward, scans ahead until a pattern match is found.

Once the next lexeme is determined, Forward is set to the character at its right end. Then, after the lexeme is recorded as an attribute value of a token returned to the parser, Lexeme Begin is set to the character immediately after the lexeme just found.


If we use the scheme of Buffer pairs we must check, each time we advance forward, that we have not moved off one of the buffers; if we do, then we must also reload the other buffer. Thus, for each character read, we make two tests: one for the end of the buffer, and one to determine what character is read (the latter may be a multiway branch). We can combine the buffer-end test with the test for the current character if we extend each buffer to hold a sentinel character at the end. The sentinel is a special character that cannot be part of the source program, and a natural choice is the character EOF.

Note that EOF retains its use as a marker for the end of the entire input. Any EOF that appears other than at the end of a buffer means that the input is at an end.
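A minimal sketch of the buffer-pair scheme with sentinels might look like the following. The half-buffer size, the use of '\u0000' as the EOF sentinel, and reading from a String (standing in for the source file) are all assumptions made for illustration:

```java
class BufferPair {
    static final char EOF = '\u0000'; // sentinel: assumed absent from source
    final int n;                      // size of each half-buffer
    final char[] buf;                 // [0..n-1] half 1, [n] sentinel,
                                      // [n+1..2n] half 2, [2n+1] sentinel
    int forward = -1;                 // the Forward pointer
    int lexemeBegin = 0;              // the Lexeme Begin pointer
    private final String src;         // stands in for the input file
    private int srcPos = 0;

    BufferPair(String src, int n) {
        this.src = src;
        this.n = n;
        buf = new char[2 * n + 2];
        load(0);                      // fill the first half
    }

    private void load(int start) {
        int k = 0;
        while (k < n && srcPos < src.length())
            buf[start + k++] = src.charAt(srcPos++);
        buf[start + k] = EOF;         // sentinel marks end of valid data
    }

    // One comparison per character in the common case; the extra "which
    // buffer end?" checks run only when the sentinel is actually hit.
    char next() {
        char c = buf[++forward];
        if (c == EOF) {
            if (forward == n) {                // end of first half: reload second
                load(n + 1);
                c = buf[++forward];
            } else if (forward == 2 * n + 1) { // end of second half: reload first
                load(0);
                forward = 0;
                c = buf[0];
            }                                  // otherwise: true end of input
        }
        return c;
    }
}
```

Reading the string "abcdef" with half-buffers of size 2 forces two reloads, yet each character is still fetched with a single test in the usual case.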

Question :- 4(b) What is a recursive descent parser? Construct a recursive descent parser for the following:      E → E + T | T



Answer: – Parsing

Recall that parsing is the problem of taking a string of terminal symbols and finding a derivation for that string of symbols in a context-free grammar. Parsing is a critical task in implementing an interpreter or compiler for a programming language, since the syntax of a program contains essential information about the meaning of the program.

A parser is the module of an interpreter or compiler which performs parsing. It takes a sequence of tokens from the lexical analyzer (you know how to write one of those!), finds a derivation for the sequence of tokens, and builds a parse tree (also known as a syntax tree) representing the derivation. We have seen that parse trees are very important in figuring out the meaning of a program (or part of a program).

Recursive Descent

Recursive descent is a simple parsing algorithm that is very easy to implement. It is a top-down parsing algorithm because it builds the parse tree from the top (the start symbol) down.

The main limitation of recursive descent parsing (and of top-down parsing algorithms in general) is that it only works on grammars with certain properties. For example, if a grammar contains any left recursion, recursive descent parsing doesn’t work.

Eliminating Left Recursion

Here’s our simple expression grammar we discussed earlier:

S → E

E → T | E + T | E – T

T → F | T * F | T / F

F → a | b | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

[S, E, T, and F are nonterminal symbols, and a, b, and the digits 0–9 are terminal symbols.]

Unfortunately, this grammar is not suitable for parsing by recursive descent, because it uses left recursion. For example, consider the production

E → E + T

This production is left recursive because the nonterminal on the left hand side of the production, E, is the first symbol on the right hand side of the production.

To adapt this grammar to use with a recursive descent parser, we need to eliminate the left recursion. There is a simple technique for eliminating immediate instances of left recursion. [This technique won’t handle indirect instances of left recursion.]

Given an occurrence of left-recursion:

A → A α

A → β

Note that some non-recursive production of the form A → β must exist; otherwise, we could never eliminate occurrences of the nonterminal A from our working string when constructing a derivation.

We can rewrite the rules to eliminate the left recursion as follows:

A → β A’

A’ → α A’

A’ → ε

So, for example,

E → E + T

E → T

becomes

E → T E’

E’ → + T E’

E’ → ε

Here’s the entire expression grammar, with left-recursion eliminated.

E → T E’

E’ → + T E’

E’ → – T E’

E’ → ε

T → F T’

T’ → * F T’

T’ → / F T’

T’ → ε

F → a | b | 0 | 1 | 2 | … | 9
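The rewriting rules above are mechanical enough to code directly. Here is a sketch that eliminates immediate left recursion for one nonterminal; the space-separated string encoding of productions and the "eps" marker for ε are assumptions made for illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class LeftRec {
    // alts are the right-hand sides of the productions on nonterminal a,
    // e.g. eliminate("E", List.of("E + T", "E - T", "T")).
    static Map<String, List<String>> eliminate(String a, List<String> alts) {
        List<String> rec = new ArrayList<>();    // the alpha parts of A -> A alpha
        List<String> nonRec = new ArrayList<>(); // the beta alternatives
        for (String alt : alts) {
            if (alt.startsWith(a + " ")) rec.add(alt.substring(a.length() + 1));
            else nonRec.add(alt);
        }
        Map<String, List<String>> out = new LinkedHashMap<>();
        if (rec.isEmpty()) {                     // no left recursion to remove
            out.put(a, alts);
            return out;
        }
        String aP = a + "'";                     // the new nonterminal A'
        List<String> aAlts = new ArrayList<>();
        List<String> aPAlts = new ArrayList<>();
        for (String beta : nonRec) aAlts.add(beta + " " + aP);  // A -> beta A'
        for (String alpha : rec) aPAlts.add(alpha + " " + aP);  // A' -> alpha A'
        aPAlts.add("eps");                                      // A' -> epsilon
        out.put(a, aAlts);
        out.put(aP, aPAlts);
        return out;
    }
}
```

Running eliminate("E", List.of("E + T", "E - T", "T")) reproduces the E and E’ productions derived above.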


Another property required of a grammar to be suitable for top-down parsing is that the grammar is left-factored. A left-factored grammar is one where for each nonterminal, there are no two productions on that nonterminal which have a common nonempty prefix of symbols on the right-hand side of the production.

For example, here is a grammar that is not left-factored:

A → a b c

A → a b d

Both productions share the common prefix a b on the right hand side of the production.

We can left-factor the grammar by making a new nonterminal to represent the alternatives for the symbols following the common prefix:

A → a b A’

A’ → c

A’ → d

Our non-left-recursive expression grammar is already left-factored, so we don’t need to change it.

Recursive Descent Parsing

Once you have a non-left-recursive, left-factored grammar, recursive descent parsing is extremely easy to implement.

Each nonterminal symbol has a parsing function. The purpose of the parsing function for a nonterminal is to consume a string of terminal symbols that are “generated” by an occurrence of that nonterminal.

Terminal symbols are consumed either by directly reading them from the lexer, or by calling the parse methods of another nonterminal (or nonterminals). In general, the behavior of the parse method for a particular nonterminal is governed by what string of symbols (nonterminal and terminal) is legal for the right-hand side of productions on the nonterminal.

Typically, any parser will construct a parse tree as a side-effect of parsing a string of input symbols. The nodes of a parse tree represent the nonterminal and terminal symbols generated in the derivation of the input string. So, we can have the parse method for each nonterminal return a reference to a parse tree node for that particular nonterminal symbol.


Here’s what a parse method for the E nonterminal in our revised expression grammar might look like:

public Symbol parseE() throws IOException {
    NonterminalSymbol e = new NonterminalSymbol("E");
    // E -> T E' : parse a T, then an E', adding each as a child
    // (addChild is assumed to append a child symbol to the parse node)
    e.addChild(parseT());
    e.addChild(parseEPrime());
    return e;
}


The parseE method internally calls parseT and parseEPrime methods, because the only production on E is

E → T E’

The parseT method is similar.

The parseEPrime method is more interesting. There are three productions on E’:

E’ → ε

E’ → + T E’

E’ → – T E’

When parseEPrime is called, we should ask the lexical analyzer if we’ve reached the end of the input string. If so, then the epsilon production must be applied.

If the lexer is not at end-of-input, we should ask the lexical analyzer for a single terminal symbol. If we see a symbol other than + or –, then we’ll again assume that the epsilon production should be applied. Otherwise, we’ll add the terminal symbol to the parse node we’re creating for the E’ symbol, and continue by parsing the T and E’ nonterminal symbols by calling their parse methods:

private Symbol parseEPrime() throws IOException {
    NonterminalSymbol ePrime = new NonterminalSymbol("E'");
    TerminalSymbol op = lexer.peek(); // look ahead
    if (op == null) {
        // end of input: epsilon production is applied
        System.out.println("end of input");
    } else if (!op.getValue().equals("+") && !op.getValue().equals("-")) {
        // we saw a terminal symbol other than + or -:
        // apply epsilon production (add no child symbols)
    } else {
        // consume the operator: we saw + or -, so the production is
        //    E' -> + T E'
        // or
        //    E' -> - T E'
        ePrime.addChild(lexer.get());
        ePrime.addChild(parseT());
        ePrime.addChild(parseEPrime());
    }
    return ePrime;
}


Note a few interesting things going on in parseEPrime:

  • The lexical analyzer is the lexer object. We’re using two of its methods: peek, which asks for the next token without consuming it, and get, which asks for and consumes the next token. Both methods return a TerminalSymbol object. peek returns the null value when the end of input is reached.
  • Applying the epsilon production means we don’t add any child symbols to the parse node being created.
  • The parseEPrime method can call itself recursively, because the

E’ → + T E’

E’ → – T E’

productions contain the symbol E’ on the right hand side. That’s why it’s called recursive descent!

To use a recursive descent parser to parse an entire input string, simply call the parse method for the grammar’s start symbol. It will return a reference to the root node of the parse tree.
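Putting the pieces together, here is a compact, self-contained sketch of a recursive descent parser for the non-left-recursive expression grammar. It uses single-character tokens and returns the consumed terminals as a string instead of building real tree nodes; both are simplifications for illustration, and the class and method bodies are assumptions rather than the article's actual code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

class RD {
    private final Deque<Character> toks = new ArrayDeque<>();

    RD(String input) {
        for (char c : input.toCharArray())
            if (!Character.isWhitespace(c)) toks.add(c);
    }

    private Character peek() { return toks.peek(); } // look ahead, don't consume
    private Character get()  { return toks.poll(); } // consume next token

    // One parse method per nonterminal, mirroring the productions.
    String parseE() { return parseT() + parseEPrime(); }          // E -> T E'

    String parseEPrime() {                                        // E' -> + T E' | - T E' | eps
        Character op = peek();
        if (op == null || (op != '+' && op != '-')) return "";    // epsilon
        get();                                                    // consume + or -
        return op + parseT() + parseEPrime();
    }

    String parseT() { return parseF() + parseTPrime(); }          // T -> F T'

    String parseTPrime() {                                        // T' -> * F T' | / F T' | eps
        Character op = peek();
        if (op == null || (op != '*' && op != '/')) return "";    // epsilon
        get();
        return op + parseF() + parseTPrime();
    }

    String parseF() {                                             // F -> a | b | 0 | ... | 9
        Character c = get();
        if (c == null || !(c == 'a' || c == 'b' || Character.isDigit(c)))
            throw new IllegalStateException("syntax error at: " + c);
        return String.valueOf(c);
    }
}
```

A successful parse re-emits exactly the input it consumed, so new RD("a+2*b").parseE() returns "a+2*b", while malformed input such as "a+*b" raises a syntax error in parseF.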

Question: -5(a)  Generate the SLR parsing table for the following grammar




and parse the sentences “bdc” and “dd”.

Answer: – Introduction to the SLR parsing table

This describes the implementation of a particular method of constructing a parse table for an LR (left-to-right scan, rightmost derivation in reverse) bottom-up parser, called an SLR or Simple-LR parser.

The main motivation for this project was to export the parse table in a reusable format for other parsing programs.


Syntax analysis: recognize sentences in a language and discover the structure of a document/program.

Construct (implicitly or explicitly) a tree, called a parse tree, to represent that structure.

LR Parsers

Recognize CFGs for most programming languages. No need to rewrite grammar.
Most general O(n) shift-reduce method.
Optimal syntax error detection.
LR grammars superset of LL grammars.

Bottom-Up Parsing

Construct parse tree from the leaves to the root.
Corresponds to a rightmost derivation in reverse.

Shift-Reduce Parsing

Parser performs actions based on parse table information:

  1. Shift. Shift the next input symbol onto the top of the stack.
  2. Reduce. The right end of the string to be reduced must be at the top of the stack. Locate the left end of the string within the stack and decide with what nonterminal to replace the string.
  3. Accept. Announce successful completion of parsing.
  4. Error. Discover a syntax error and call an error recovery routine.
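These four actions drive the standard table-driven loop. The sketch below hard-codes a tiny SLR table for the toy grammar S → ( S ) | x (an assumed grammar for illustration only, since the exam's grammar is not reproduced here) to show how the driver consults the ACTION and GOTO entries:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class SlrDemo {
    // ACTION[state][terminal]: "sN" = shift to state N, "rN" = reduce by
    // production N, "acc" = accept; a missing entry is a syntax error.
    static final Map<Integer, Map<Character, String>> ACTION = new HashMap<>();
    static final Map<Integer, Integer> GOTO_S = new HashMap<>(); // GOTO on S
    // Productions: 1: S -> ( S )   2: S -> x   (0 is the augmented S' -> S)
    static final int[] RHS_LEN = {0, 3, 1};

    static {
        ACTION.put(0, Map.of('(', "s2", 'x', "s3"));
        ACTION.put(1, Map.of('$', "acc"));
        ACTION.put(2, Map.of('(', "s2", 'x', "s3"));
        ACTION.put(3, Map.of(')', "r2", '$', "r2"));
        ACTION.put(4, Map.of(')', "s5"));
        ACTION.put(5, Map.of(')', "r1", '$', "r1"));
        GOTO_S.put(0, 1);
        GOTO_S.put(2, 4);
    }

    static boolean parse(String input) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(0);
        String in = input + "$";        // append the end-of-input marker
        int i = 0;
        while (true) {
            char a = in.charAt(i);
            String act = ACTION.getOrDefault(stack.peek(), Map.of()).get(a);
            if (act == null) return false;          // Error
            if (act.equals("acc")) return true;     // Accept
            if (act.charAt(0) == 's') {             // Shift
                stack.push(Integer.parseInt(act.substring(1)));
                i++;
            } else {                                // Reduce
                int p = Integer.parseInt(act.substring(1));
                for (int k = 0; k < RHS_LEN[p]; k++) stack.pop();
                stack.push(GOTO_S.get(stack.peek())); // both productions derive S
            }
        }
    }
}
```

For instance, parse("((x))") accepts, while parse("(x") hits an empty ACTION cell and reports an error.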

Consider following example grammar and steps to create Parse Table :

Grammar :

States of this grammar :

state creation rules :

1- State 0 is built from the extra (augmented) production S’ -> S $, where S is the start symbol of the grammar, with a dot ( . ) placed before the S symbol.

2- If the dot is before a nonterminal, add every grammar production that has this nonterminal on its left-hand side, with the dot placed before the first symbol of its right-hand side.

3- If a state with exactly these productions and dot positions already exists, use that state instead of creating a new one.

4- For each state not yet processed by this step, find the set of terminals and nonterminals that appear immediately after a dot.

5- If the set from step 4 is non-empty, go to step 6; otherwise go to step 7.

6- For each terminal/nonterminal in the step-4 set, create a new state from all productions in the reference state whose dot is before that terminal/nonterminal, advancing the dot to the next position in the right-hand side; then go to step 2.

7- End of state building.

Fill the parse table using the SLR algorithm rules:

Goto: in the parse table, at row (state number) SN and nonterminal column V, set GX when state SN has a transition on nonterminal V to state SX.

Reduce: at row SN and terminal column T, set RX when state SN contains production number X with the dot at the end of the production and T is in the FOLLOW set of that production’s left-hand side (this FOLLOW-set restriction is what distinguishes SLR from plain LR(0)).

Shift: at row SN and terminal column T, set SY when state SN contains a production with the dot before terminal T and state SN has a transition on T to state number Y.

Accept: at row SN and the end-of-input column (the extra terminal added to the real terminals), set Accept when the item S’ -> S . (dot after the start nonterminal) is a member of state SN.

Error: all empty parse-table cells are parse errors.

The grammar has a shift/reduce conflict when both the shift and reduce conditions hold for the same cell; such a cell is marked RX/SY.

Using the code

To avoid any fixed limit on the number of grammar productions, nonterminals, terminals, and so on, all classes are implemented on top of a linked-list data structure.

Following this logic, every class is implemented at three levels:

1- an item-fields struct

2- a node class

3- a list class

Classes :


Classes Definitions:


This class is the top level of all the detail classes and contains the main properties:

  • the first (head) node of the nonterminal list
  • the count of all terminals
  • the count of all nonterminals
  • a stack for terminal symbols


This class manages the nonterminals; the parent of each Nonterminal node is ParsHead.

Sample method description:

Grammar productions are processed, and the LHS nonterminal and production number are passed to this method. The nonterminal’s pure name is built by splitting on ‘#’ and taking the rest of the name. Then, if the nonterminal list is empty, the first node (the extra nonterminal) is added with production number 0 (the augmented production) and the method exits.

If the first item is not null, the list is searched for a nonterminal node whose name equals the one being sought; if none is found, a new NonterminalNode is created and added to the list.

Finally, the production number is added to the LawLink of the nonterminal.

Question: -5(b)  How can errors be recovered in lexical phase of a compiler? Explain.


Program errors are detected and reported by the parser. The parser handles the errors it encounters, and the rest of the input is parsed. Errors may be encountered at various stages of the compilation process; at the various stages, the following kinds of errors occur:

  • Lexical : name of some identifier typed incorrectly
  • Syntactical : missing semicolon or unbalanced parenthesis
  • Semantical: incompatible value assignment
  • Logical : code not reachable, infinite loop

Implemented in Compiler Design

In order to deal with the errors in the code, the following are the four common error-recovery strategies:

Panic mode

When an error is encountered anywhere in a statement, the rest of the statement is ignored: the parser skips the input from the point of the error up to a delimiter, such as a semicolon. This mode prevents the parser from entering infinite loops and is considered the easiest way to recover from errors.

Statement mode

When an error is encountered by the parser, corrective measures are taken which allow the parser to parse the rest of the statement, for example inserting a missing semicolon or replacing a comma with a semicolon. More care is required here, as one wrong correction may lead to an infinite loop.

Error productions

The compiler designers sometimes know that certain errors may occur in the code. In such instances, augmented grammar is created by the designers as productions which generate constructs during the time of occurrence of errors.

Global correction

The program in hand is considered as a whole and the intended program is figured out and the closest match for the same is matched, which is error-free. When an erroneous input (statement) X is fed, it creates a parse tree for some closest error-free statement Y. This enables the parser in making minimal changes to the source code.

Abstract Syntax Trees

A parse tree is not easily processed by the later phases of the compiler because it contains too much detail. For instance, consider the following parse tree as an example:

It is observed that the parent nodes have single child leaf nodes. When feeding it to the next phase, this information can be eliminated. The hiding of extra information results in a tree as shown below:

Abstract tree can be represented as:

ASTs are important data structures in a compiler, carrying a minimum of unnecessary information. Because they are more compact than parse trees, they are more easily used by the compiler.

