A data structure is a way of storing data in a computer so that it can be used efficiently. Often a carefully chosen data structure will allow the most efficient algorithm to be used. The choice of the data structure often begins with the choice of an abstract data type. A well-designed data structure allows a variety of critical operations to be performed using as few resources, both execution time and memory space, as possible. Data structures are implemented using the data types, references, and operations on them provided by a programming language, such as C.

Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to certain tasks. For example, B-trees are particularly well suited for the implementation of databases, while routing tables rely on networks of machines to function.

The fundamental building blocks of most data structures are arrays, records, discriminated unions, and references. For example, the nullable reference, a reference which can be null, is a combination of references and discriminated unions, and the simplest linked data structure, the linked list, is built from records and nullable references.
Introduction to Data Structures

Data Structures & Algorithms – Defined
A data structure is an arrangement of data in a computer's memory or even disk storage. Examples of common data structures are arrays, linked lists, queues, stacks, binary trees, and hash tables. Algorithms, on the other hand, are used to manipulate the data contained in these data structures, as in searching and sorting.
Many algorithms apply directly to a specific data structure. When working with certain data structures you need to know how to insert new data, search for a specified item, and delete a specific item.
Commonly used algorithms are useful for tasks such as searching for a specified data item, sorting the data, and iterating over every item in a data structure.
Characteristics of Data Structures
Introduction to Pointers
To understand pointers, you need a basic knowledge of how your computer stores information in memory. The following is a somewhat simplified account of PC memory storage. Pointers are so commonly used as references that sometimes people use the word "pointer" to refer to references in general; more properly, however, it only applies to data structures whose interface explicitly allows them to be manipulated as a memory address.
So now that you understand how pointers work, let’s define them a little better.
- A pointer, when declared, is just a reference. DECLARING A POINTER DOES NOT CREATE ANY SPACE FOR THE POINTER TO POINT TO. We will tackle this dynamic memory allocation issue later.
- A pointer is a reference to an area of memory in the heap. The heap is an area of memory that is dynamically allocated while the program runs.
Pointer Declaration & Syntax
Pointers are declared by placing a * in front of the variable identifier. For example:

int *ip;
float *fp = NULL;

The first line declares ip, a pointer to an integer. The second line declares a pointer to a float and initializes it to NULL. NULL is a special value marking a pointer that points to no accessible memory location; it is useful when checking for error conditions, and many functions return NULL if they fail. To make ip actually point to an integer, we assign it the address of one:
int x = 5;
int *ip;
ip = &x;

We first encountered the & operator in the I/O section. The & operator gives the address-of x; thus the pointer ip is made to point to x by assigning it the address of x. This is important; you must understand this concept. It brings up a question: if pointers contain addresses, then how do I get the actual value of what the pointer is pointing to? This is solved through the * operator, which dereferences the pointer to obtain the value. So,

printf("%d %d\n", x, *ip);

would print 5 5 to the screen. There is a critical difference between a dereference and a pointer declaration:

int x = 0, y = 5, *ip = &y;
x = *ip;

The statement int *ip = &y; is different from x = *ip;. The first statement does not dereference; there the * signifies the creation of a pointer to an int. The second statement uses a dereference. Remember the swap function? We can now simulate call by reference using pointers. Here is a modified version of the swap function using pointers:
#include <stdlib.h>  /* for EXIT_SUCCESS */

void swap(int *x, int *y) {
int tmp;
tmp = *x;
*x = *y;
*y = tmp;
}
int main() {
int a = 2, b = 3;
swap(&a, &b);
return EXIT_SUCCESS;
}
This swap code works. When you call swap, you must give the address-of a and b, because swap expects pointers. Why does this work? Because you are giving the addresses of the variables: that memory does not "go away" or get "popped off" after the function swap ends, so the changes within swap alter the values located at those memory addresses.
Working with Pointers
Imagine that we have an int called i. Its address can be obtained with the expression &i. If the address is to be stored in a pointer variable, it is declared and initialized like this:

int *pi = &i;

int * is the notation for a pointer to an int. & is the operator which returns the address of its argument. When it is used, as in &i, we say it is referencing i.
The opposite operator, which gives the value at the end of the pointer, is *. An example of its use, known as dereferencing pi, would be:

i = *pi;

Take care not to confuse the many uses of the * sign: multiplication, pointer declaration, and pointer dereferencing. This is a confusing subject, so let us illustrate it with an example. The following function fiddle takes two arguments: x is an int, while y is a pointer to int. It changes both values.

void fiddle(int x, int *y)
{
printf(" Starting fiddle: x = %d, y = %d\n", x, *y);
x++;
(*y)++;
printf("Finishing fiddle: x = %d, y = %d\n", x, *y);
}

Since y is a pointer, we must dereference it before incrementing its value. A very simple program to call this function might be as follows.

int main()
{
int i = 0;
int j = 0;
printf(" Starting main : i = %d, j = %d\n", i, j);
printf("Calling fiddle now\n");
fiddle(i, &j);
printf("Returned from fiddle\n");
printf("Finishing main : i = %d, j = %d\n", i, j);
return 0;
}

Note here how a pointer to int is created using the & operator within the call fiddle(i, &j);. The result of running the program will look like this:

Starting main : i = 0, j = 0
Calling fiddle now
Starting fiddle: x = 0, y = 0
Finishing fiddle: x = 1, y = 1
Returned from fiddle
Finishing main : i = 0, j = 1

After the return from fiddle, the value of i is unchanged, while j, which was passed as a pointer, has changed. To summarise: if you wish to use arguments to modify the values of variables from a function, these arguments must be passed as pointers and dereferenced within the function. Where the value of an argument is not modified, it can be passed without any worries about pointers.
Array

Introduction to Arrays

Program

#include <stdio.h>

void printarr(int a[]);

int main()
{
int a[5];          /* A */
for(int i = 0; i < 5; i++)
{
a[i] = i;          /* B */
}
printarr(a);
return 0;
}

void printarr(int a[])
{
for(int i = 0; i < 5; i++)
{
printf("value in array %d\n", a[i]);
}
}
Explanation

The line marked A declares an array of five integers, and the loop containing the line marked B stores the values 0 through 4 in its elements. The function printarr then prints each element in turn.
Memory Representation
An array is represented in memory by using sequential mapping. The basic characteristic of sequential mapping is that every element is a fixed distance apart. Therefore, if the ith element is mapped into a location having an address a, then the (i + 1)th element is mapped into the memory location having the address (a + 1), as shown in the representation of an array.
The address of the first element of an array is called the base address, so the address of the ith element is

Base address + offset of the ith element from the base address

where the offset is computed as:

Offset of the ith element = number of elements before the ith * size of each element.

If LB is the lower bound, then the offset computation becomes:

offset = (i - LB) * size.
Representation of Two-Dimensional Array

A two-dimensional array can be considered as a one-dimensional array whose elements are also one-dimensional arrays. So, we can view a two-dimensional array as one single column of rows and map it sequentially, as shown in the image below. Such a representation is called a row-major representation.

Row-major representation of a two-dimensional array
The address of the element of the ith row and the jth column therefore is:

addr(a[i, j]) = (number of rows placed before the ith row * size of a row) + (number of elements placed before the jth element in the ith row * size of an element)

where:

Number of rows placed before the ith row = (i - LB1), where LB1 is the lower bound of the first dimension.

Size of a row = number of elements in a row * size of an element.

Number of elements in a row = (UB2 - LB2 + 1), where UB2 and LB2 are the upper and lower bounds of the second dimension, respectively.

Therefore:

addr(a[i, j]) = ((i - LB1) * (UB2 - LB2 + 1) * size) + ((j - LB2) * size)

It is also possible to view a two-dimensional array as one single row of columns and map it sequentially, as shown in the image below. Such a representation is called a column-major representation.
Application of Arrays
Whenever we require a collection of data objects of the same type and want to process them as a single unit, an array can be used, provided the number of data items is constant or fixed. Arrays have a wide range of applications, ranging from business data processing to scientific calculations to industrial projects.

Implementation of a Static Contiguous List

A list is a structure in which insertions, deletions, and retrieval may occur at any position in the list. When the list is static, it can be implemented using an array; a list implemented or realized using an array is a contiguous list. By contiguous, we mean that the elements are placed consecutively, one after another, starting from some address called the base address. The advantage of a list implemented using an array is that it is randomly accessible. The disadvantage of such a list is that insertions and deletions require moving the entries, and so are costlier. A static list can be implemented using an array by mapping the ith element of the list into the ith entry of the array.
A complete C program for implementing a list, with operations for reading the values of the elements of the list and displaying them, is given here:

#include<stdio.h>
#include<conio.h>
void read(int *,int);
void dis(int *,int);
void main()
{
int a[5];
clrscr();
printf("Enter the elements of array \n");
read(a,5); /*read the array*/
printf("The array elements are \n");
dis(a,5);
}
void read(int c[],int i)
{
int j;
for(j=0;j<i;j++)
scanf("%d",&c[j]);
fflush(stdin);
}
void dis(int d[],int i)
{
int j;
for(j=0;j<i;j++)
printf("%d ",d[j]);
printf("\n");
}
Array Manipulation
Shown next are C programs for carrying out manipulations such as finding the sum of the elements of an array, adding two arrays, and reversing an array.

Program

ADDITION OF THE ELEMENTS OF THE LIST

#include<stdio.h>
#include<conio.h>
void main()
{
void read(int *,int);
void dis(int *,int);
int a[5],i,sum=0;
clrscr();
printf("Enter the elements of list \n");
read(a,5); /*read the list*/
printf("The list elements are \n");
dis(a,5);
for(i=0;i<5;i++)
{
sum+=a[i];
}
printf("The sum of the elements of the list is %d\n",sum);
getch();
}
void read(int c[],int i)
{
int j;
for(j=0;j<i;j++)
scanf("%d",&c[j]);
fflush(stdin);
}
void dis(int d[],int i)
{
int j;
for(j=0;j<i;j++)
printf("%d ",d[j]);
printf("\n");
}
Example

Input
Enter the elements of the first array
15
30
45
60
75

Output
The elements of the first array are
15 30 45 60 75
The sum of the elements of the array is 225

Addition of the two lists

Suppose the first list is
1
2
3
4
5

and the second list is
5
6
8
9
10

The first element of the first list is added to the first element of the second list, and the result of the addition becomes the first element of the third list. In this example, 5 is added to 1, and the first element of the third list is 6. This step is repeated for all the elements of the lists, and the resultant list after the addition is

6
8
11
13
15

#include<stdio.h>
#include<conio.h>
void main()
{
void read(int *,int);
void dis(int *,int);
void add(int *,int *,int * ,int);
int a[5],b[5],c[5],i;
clrscr();
printf("Enter the elements of first list \n");
read(a,5); /*read the first list*/
printf("The elements of first list are \n");
dis(a,5); /*Display the first list*/
printf("Enter the elements of second list \n");
read(b,5); /*read the second list*/
printf("The elements of second list are \n");
dis(b,5); /*Display the second list*/
add(a,b,c,5);
printf("The resultant list is \n");
dis(c,5);
getch();
}
void add(int a[],int b[],int c[],int i)
{
for(i=0;i<5;i++)
{
c[i]=a[i]+b[i];
}
}
void read(int c[],int i)
{
int j;
for(j=0;j<i;j++)
scanf("%d",&c[j]);
fflush(stdin);
}
void dis(int d[],int i)
{
int j;
for(j=0;j<i;j++)
printf("%d ",d[j]);
printf(“\n”);
}

Elucidation
- Repeat step (2) for i=0,1,2,… (n−1), where n is the maximum number of elements in a list.
- c[i] = a[i]+b[i], where a is the first list, b is the second list, and c is the resultant list; a[i] denotes the ith element of list a.
Example

Input
Enter the elements of the first list
1
2
3
4
5

Output
The elements of the first list are
1 2 3 4 5

Input
Enter the elements of the second list
6
7
8
9
10

Output
The elements of the second list are
6 7 8 9 10
The resultant list is
7 9 11 13 15

Inverse of the list

The following program produces a reversed version of the list.

#include<stdio.h>
#include<conio.h>
void main()
{
void read(int *,int);
void dis(int *,int);
void inverse(int *,int);
int a[5],i;
clrscr();
read(a,5);
dis(a,5);
inverse(a,5);
dis(a,5);
getch();
}
void read(int c[],int i)
{
int j;
printf("Enter the list \n");
for(j=0;j<i;j++)
scanf("%d",&c[j]);
fflush(stdin);
}
void dis(int d[],int i)
{
int j;
printf("The list is \n");
for(j=0;j<i;j++)
printf("%d ",d[j]);
printf(“\n”);
}
void inverse(int inver_a[],int j)
{
int i,temp;
j--;
for(i=0;i<j;i++,j--)
{
temp=inver_a[i];
inver_a[i]=inver_a[j];
inver_a[j]=temp;
}
}

Example

Input
Enter the list
10
20
30
40
50

Output
The list is
10 20 30 40 50
The inverse of the list is
50 40 30 20 10

This is another version of an inverse program, in which another list is used to hold the reversed list.

#include<stdio.h>
#include<conio.h>
void main()
{
void read(int *,int);
void dis(int *,int);
void inverse(int *,int *,int);
int a[5],b[5];
clrscr();
read(a,5);
dis(a,5);
inverse(a,b,5);
dis(b,5);
getch();
}
void read(int c[],int i)
{
int j;
printf("Enter the list \n");
for(j=0;j<i;j++)
scanf("%d",&c[j]);
fflush(stdin);
}
void dis(int d[],int i)
{
int j;
printf("The list is \n");
for(j=0;j<i;j++)
printf("%d ",d[j]);
printf(“\n”);
}
void inverse(int a[],int inverse_b[],int j)
{
int i,k;
k=j-1;
for(i=0;i<j;i++)
{
inverse_b[i]=a[k];
k--;
}
}

Example

Input
Enter the list
10
20
30
40
50

Output
The list is
10 20 30 40 50
The inverse of the list is
50 40 30 20 10
Merging Array
Assume that the two lists to be merged are sorted in descending order. Compare the first element of the first list with the first element of the second list. If the element of the first list is greater, place it in the resultant list and advance the index of the first list and the index of the resultant list so that they point to the next term. If the element of the first list is smaller, place the element of the second list in the resultant list and advance the index of the second list and the index of the resultant list so that they point to the next term.

Repeat this process until all the elements of either the first list or the second list have been compared. If elements remain in the first list or in the second list, place them in the resultant list, advancing the corresponding index of that list and the index of the resultant list.

Suppose the first list is 10 20 25 50 63, and the second list is 12 16 62 68 80. The sorted lists are 63 50 25 20 10 and 80 68 62 16 12.

The first element of the first list is 63, which is smaller than 80, so the first element of the resultant list is 80. Now 63 is compared with 68; again it is smaller, so the second element of the resultant list is 68. Next, 63 is compared with 62. This time it is greater, so the third element of the resultant list is 63.

Repeat this process for all the elements of the first list and the second list. The resultant list is 80 68 63 62 50 25 20 16 12 10.
Program
#include<stdio.h>
#include<conio.h>
void read(int *,int);
void dis(int *,int);
void sort(int *,int);
void merge(int *,int *,int *,int);
void main()
{
int a[5],b[5],c[10];
clrscr();
printf("Enter the elements of first list \n");
read(a,5); /*read the list*/
printf("The elements of first list are \n");
dis(a,5); /*Display the first list*/
printf("Enter the elements of second list \n");
read(b,5); /*read the list*/
printf("The elements of second list are \n");
dis(b,5); /*Display the second list*/
sort(a,5);
printf("The sorted list a is:\n");
dis(a,5);
sort(b,5);
printf("The sorted list b is:\n");
dis(b,5);
merge(a,b,c,5);
printf("The elements of merged list are \n");
dis(c,10); /*Display the merged list*/
getch();
}
void read(int c[],int i)
{
int j;
for(j=0;j<i;j++)
scanf("%d",&c[j]);
fflush(stdin);
}
void dis(int d[],int i)
{
int j;
for(j=0;j<i;j++)
printf("%d ",d[j]);
printf("\n");
}
void sort(int arr[],int k)
{
int temp;
int i,j;
for(i=0;i<k;i++)
{
for(j=0;j<k-i-1;j++)
{
if(arr[j]<arr[j+1])
{
temp=arr[j];
arr[j]=arr[j+1];
arr[j+1]=temp;
}
}
}
}
void merge(int a[],int b[],int c[],int k)
{
int ptra=0,ptrb=0,ptrc=0;
while(ptra<k && ptrb<k)
{
if(a[ptra] > b[ptrb]) /* lists are descending: take the larger element */
{
c[ptrc]=a[ptra];
ptra++;
}
else
{
c[ptrc]=b[ptrb];
ptrb++;
}
ptrc++;
}
while(ptra<k)
{
c[ptrc]=a[ptra];
ptra++;
ptrc++;
}
while(ptrb<k)
{
c[ptrc]=b[ptrb];
ptrb++;
ptrc++;
}
}
Example
Elucidation
- Initially ptra = 0, ptrb = 0, ptrc = 0.
- If the element of the first list pointed to by ptra is greater than the element of the second list pointed to by ptrb, place the element of the first list in the resultant list at the index ptrc and increment ptra and ptrc by one; otherwise, place the element of the second list in the resultant list at the index ptrc and increment ptrb and ptrc by one. Repeat this step while ptra is less than the number of terms in the first list and ptrb is less than the number of terms in the second list.
- If the first list still has elements remaining, place them in the resultant list at the position pointed to by ptrc, incrementing ptra and ptrc. Repeat this step until ptra reaches the number of terms in the first list.
- If the second list still has elements remaining, place them in the resultant list at the position pointed to by ptrc, incrementing ptrb and ptrc. Repeat this step until ptrb reaches the number of terms in the second list.
Sorting Arrays
We encounter several applications that require an ordered list, so the elements of a given list must be arranged in either ascending/increasing or descending/decreasing order, as per the requirement. This process is called sorting.
Sorting an Array Using Bubble Sort

Bubble sorting is a simple sorting technique in which we arrange the elements of the list by forming pairs of adjacent elements, that is, the pair of the ith and (i+1)th elements. If the order is ascending, we interchange the elements of the pair if the first element is greater than the second. That means for every pair (list[i], list[i+1]), for i := 1 to (n-1), if list[i] > list[i+1], we interchange list[i] and list[i+1]. Carrying this out once moves the element with the highest value to the last, or nth, position. We therefore repeat the process with the elements from the first to the (n-1)th positions, which brings the highest of the remaining (n-1) values to the (n-1)th position. We repeat the process with the remaining (n-2) values, and so on. Finally, the elements are arranged in ascending order. This requires (n-1) passes: in the first pass we have (n-1) pairs, in the second pass (n-2) pairs, and in the last, or (n-1)th, pass only one pair. Therefore, the number of probes or comparisons that must be carried out is

(n-1) + (n-2) + ... + 1 = n(n-1)/2

and the order of the algorithm is O(n^2).
Program
#include <stdio.h>
#define MAX 10
void swap(int *x,int *y)
{
int temp;
temp = *x;
*x = *y;
*y = temp;
}
void bsort(int list[], int n)
{
int i,j;
for(i=0;i<(n-1);i++)
for(j=0;j<(n-(i+1));j++)
if(list[j] > list[j+1])
swap(&list[j],&list[j+1]);
}
void readlist(int list[],int n)
{
int i;
printf("Enter the elements\n");
for(i=0;i<n;i++)
scanf("%d",&list[i]);
}
void printlist(int list[],int n)
{
int i;
printf("The elements of the list are: \n");
for(i=0;i<n;i++)
printf("%d\t",list[i]);
}
void main()
{
int list[MAX], n;
printf("Enter the number of elements in the list max = 10\n");
scanf("%d",&n);
readlist(list,n);
printf("The list before sorting is:\n");
printlist(list,n);
bsort(list,n);
printf("The list after sorting is:\n");
printlist(list,n);
}
Output
Stack and Queue

Introduction to Stack and Queue
There are many applications requiring the use of the data structures stacks and queues. The most striking use of a stack is the runtime stack that a programming language uses to implement function call and return. Similarly, one of the important uses of a queue is the process queue maintained by the scheduler. Both these data structures are modified versions of the list data structure, so they can be implemented using arrays or a linked representation.
Stack

A stack is a list in which all insertions and deletions are made at one end, called the top. The last element to be inserted into the stack will be the first to be removed. Thus stacks are sometimes referred to as Last In First Out (LIFO) lists.
Queue
A Queue is an ordered collection of items from which items may be deleted at one end (called the front of the queue) and into which items may be inserted at the other end (the rear of the queue).
Working with Stacks
A stack is simply a list of elements with insertions and deletions permitted at one end, called the stack top. That means it is possible to remove elements from a stack in the reverse of the order in which they were inserted. Thus, a stack data structure exhibits the LIFO (last in, first out) property. Push and pop are the operations provided for insertion of an element into the stack and removal of an element from the stack, respectively. Shown in the image below are the effects of push and pop operations on a stack.

Stack operations.

Since a stack is basically a list, it can be implemented by using an array or by using a linked representation.

Array Implementation of a Stack
When an array is used to implement a stack, the push and pop operations are realized by using the operations available on an array. The limitation of an array implementation is that the stack cannot grow and shrink dynamically as the requirement changes.

Program

A complete C program to implement a stack using an array appears here:

#include <stdio.h>
#define MAX 10 /* The maximum size of the stack */
#include <stdlib.h>
void push(int stack[], int *top, int value)
{
if(*top < MAX - 1)
{
*top = *top + 1;
stack[*top] = value;
}
else
{
printf("The stack is full can not push a value\n");
exit(0);
}
}
void pop(int stack[], int *top, int * value)
{
if(*top >= 0 )
{
*value = stack[*top];
*top = *top - 1;
}
else
{
printf("The stack is empty can not pop a value\n");
exit(0);
}
}
void main()
{
int stack[MAX];
int top = -1;
int n,value;
do
{
do
{
printf("Enter the element to be pushed\n");
scanf("%d",&value);
push(stack,&top,value);
printf("Enter 1 to continue\n");
scanf("%d",&n);
} while(n == 1);
printf("Enter 1 to pop an element\n");
scanf("%d",&n);
while( n == 1)
{
pop(stack,&top,&value);
printf("The value popped is %d\n",value);
printf("Enter 1 to pop an element\n");
scanf("%d",&n);
}
printf("Enter 1 to continue\n");
scanf("%d",&n);
} while(n == 1);
}

Example
Implementation of a Stack Using Linked Representation

Initially the list is empty, so the top pointer is NULL. The push function takes a pointer to an existing list as the first parameter and a data value to be pushed as the second parameter; it creates a new node using the data value and adds it to the top of the existing list. The pop function takes a pointer to an existing list as the first parameter and a pointer to a data object in which the popped value is to be returned as the second parameter. It retrieves the value of the node pointed to by the top pointer, makes the top pointer point to the next node, and destroys the node that the top pointer previously pointed to. If this strategy is used for creating a stack with the previously used four data values 10, 20, 30, and 40, the stack is created as shown below:
Linked stack

Program

A complete C program for implementation of a stack using the linked list is given here:

# include <stdio.h>
# include <stdlib.h>
struct node
{
int data;
struct node *link;
};
struct node *push(struct node *p, int value)
{
struct node *temp;
temp=(struct node *)malloc(sizeof(struct node));
/* creates new node
using data value
passed as parameter */
if(temp==NULL)
{
printf("No Memory available Error\n");
exit(0);
}
temp->data = value;
temp->link = p;
p = temp;
return(p);
}
struct node *pop(struct node *p, int *value)
{
struct node *temp;
if(p==NULL)
{
printf("The stack is empty can not pop Error\n");
exit(0);
}
*value = p->data;
temp = p;
p = p->link;
free(temp);
return(p);
}
void main()
{
struct node *top = NULL;
int n,value;
do
{
do
{
printf("Enter the element to be pushed\n");
scanf("%d",&value);
top = push(top,value);
printf("Enter 1 to continue\n");
scanf("%d",&n);
} while(n == 1);
printf("Enter 1 to pop an element\n");
scanf("%d",&n);
while( n == 1)
{
top = pop(top,&value);
printf("The value popped is %d\n",value);
printf("Enter 1 to pop an element\n");
scanf("%d",&n);
}
printf("Enter 1 to continue\n");
scanf("%d",&n);
} while(n == 1);
}

Example
Working with Queues
One of the applications of the stack is in expression evaluation. A complex assignment statement such as a = b + c*d/e–f may be interpreted in many different ways, so the precedence and associativity rules are used to give it a unique meaning. Even so, it is difficult for a computer to evaluate an expression in its ordinary form, called infix notation, in which a binary operator comes between its operands and a unary operator comes before its operand. To be evaluated, the expression is first converted to postfix form, where the operator comes after its operands. For example, the postfix form of the expression a*(b–c)/d is abc–*d/. A good thing about postfix expressions is that they require neither precedence rules nor parentheses for unique definition, so evaluation of a postfix expression is possible using a stack-based algorithm.
Program
Convert an infix expression to prefix form.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#define N 80
typedef enum {FALSE, TRUE} bool;
#include "stack.h"
#include "queue.h"
#define NOPS 7
char operators [] = "()^/*+-";
int priorities[] = {4,4,3,2,2,1,1};
char associates[] = "  RLLLL"; // one entry per operator; parentheses have none.
char t[N]; char *tptr = t; // this is where prefix will be saved.
int getIndex( char op ) {
/*
* returns index of op in operators.
*/
int i;
for( i=0; i<NOPS; ++i )
if( operators[i] == op )
return i;
return -1;
}
int getPriority( char op ) {
/*
* returns priority of op.
*/
return priorities[ getIndex(op) ];
}
char getAssociativity( char op ) {
/*
* returns associativity of op.
*/
return associates[ getIndex(op) ];
}
void processOp( char op, queue *q, stack *s ) {
/*
* performs processing of op.
*/
switch(op) {
case ')':
printf( "\t S pushing )...\n" );
sPush( s, op );
break;
case '(':
while( !qEmpty(q) ) {
*tptr++ = qPop(q);
printf( "\tQ popping %c...\n", *(tptr-1) );
}
while( !sEmpty(s) ) {
char popop = sPop(s);
printf( "\tS popping %c...\n", popop );
if( popop == ')' )
break;
*tptr++ = popop;
}
break;
default: {
int priop; // priority of op.
char topop; // operator on stack top.
int pritop; // priority of topop.
char asstop; // associativity of topop.
while( !sEmpty(s) ) {
priop = getPriority(op);
topop = sTop(s);
pritop = getPriority(topop);
asstop = getAssociativity(topop);
if( pritop < priop || (pritop == priop && asstop == 'L')
|| topop == ')' ) // IMP.
break;
while( !qEmpty(q) ) {
*tptr++ = qPop(q);
printf( "\tQ popping %c...\n", *(tptr-1) );
}
*tptr++ = sPop(s);
printf( "\tS popping %c...\n", *(tptr-1) );
}
printf( "\tS pushing %c...\n", op );
sPush( s, op );
break;
}
}
}
bool isop( char op ) {
/*
* is op an operator?
*/
return (getIndex(op) != -1);
}
char *in2pre( char *str ) { /*
* returns valid infix expr in str to prefix.
*/
char *sptr;
queue q = {NULL};
stack s = NULL;
char *res = (char *)malloc( N*sizeof(char) );
char *resptr = res;
tptr = t;
for( sptr=str+strlen(str)-1; sptr!=str-1; --sptr ) {
printf( "processing %c tptr-t=%d...\n", *sptr, (int)(tptr-t) );
if( isalpha(*sptr) ) // if operand.
qPush(&q, *sptr );
else if( isop(*sptr) ) // if valid operator.
processOp( *sptr, &q, &s );
else if( isspace(*sptr) ) // if whitespace.
;
else {
fprintf( stderr, "ERROR:invalid char %c.\n", *sptr );
return "";
}
}
while( !qEmpty(&q) ) {
*tptr++ = qPop(&q);
printf( "\tQ popping %c...\n", *(tptr-1) );
}
while( !sEmpty(&s) ) {
*tptr++ = sPop(&s);
printf( "\tS popping %c...\n", *(tptr-1) );
}
*tptr = 0;
printf( "t=%s.\n", t );
for( --tptr; tptr!=t-1; --tptr ) {
*resptr++ = *tptr;
}
*resptr = 0;
return res;
}
int main() {
char s[N];
puts( "enter infix expression (max 80 chars)." );
gets(s);
while(*s) {
puts( in2pre(s) );
gets(s);
}
return 0;
}

Output
Elucidation
- In an infix expression, a binary operator separates its operands (a unary operator precedes its operand). In a postfix expression, the operands of an operator precede the operator. In a prefix expression, the operator precedes its operands. Like postfix, a prefix expression is parenthesis-free, that is, any infix expression can be unambiguously written in its prefix equivalent without the need for parentheses.
- To convert an infix expression to reverse-prefix, it is scanned from right to left. A queue of operands is maintained noting that the order of operands in infix and prefix remains the same. Thus, while scanning the infix expression, whenever an operand is encountered, it is pushed in a queue. If the scanned element is a right parenthesis (‘)’), it is pushed in a stack of operators. If the scanned element is a left parenthesis (‘(‘), the queue of operands is emptied to the prefix output, followed by the popping of all the operators up to, but excluding, a right parenthesis in the operator stack.
- If the scanned element is an arbitrary operator o, then the stack of operators is checked for operators with a greater priority than o. Such operators are popped and written to the prefix output after emptying the operand queue. The operator o is finally pushed onto the stack.
- When the scanning of the infix expression is complete, first the operand queue, and then the operator stack, are emptied to the prefix output. Any whitespace in the infix input is ignored. Thus the prefix output can be reversed to get the required prefix expression of the infix input.
Example

If the infix expression is a*b + c/d, then different snapshots of the algorithm, while scanning the expression from right to left, are shown in the table below.
Scanning the infix expression a*b+c/d from right to left
| STEP | REMAINING EXPRESSION | SCANNED ELEMENT | QUEUE OF OPERANDS | STACK OF OPERATORS | PREFIX OUTPUT |
| 0 | a*b+c/d | nil | empty | empty | nil |
| 1 | a*b+c/ | d | d | empty | nil |
| 2 | a*b+c | / | d | / | nil |
| 3 | a*b+ | c | d c | / | nil |
| 4 | a*b | + | empty | + | dc/ |
| 5 | a* | b | b | + | dc/ |
| 6 | a | * | b | * + | dc/ |
| 7 | nil | a | b a | * + | dc/ |
| 8 | nil | nil | empty | empty | dc/ba*+ |
The final prefix output that we get is dc/ba*+ whose reverse is +*ab/cd, which is the prefix equivalent of the input infix expression a*b+c/d. Note that all the operands are simply pushed to the queue in steps 1, 3, 5, and 7. In step 2, the operator / is pushed onto the empty stack of operators. In step 4, the operator + is checked against the elements in the stack. Since / (division) has higher priority than + (addition), the queue is emptied to the prefix output (thus we get 'dc' as the output) and then the operator / is written (thus we get 'dc/' as the output). The operator + is then pushed onto the stack. In step 6, the operator * is checked against the stack elements. Since * (multiplication) has a higher priority than + (addition), * is pushed onto the stack. Step 8 signifies that the whole infix expression has been scanned. Thus, the queue of operands is emptied to the prefix output (to get 'dc/ba'), followed by the emptying of the stack of operators (to get 'dc/ba*+').
Points to remember
- A prefix expression is parenthesis-free.
- To convert an infix expression to its prefix equivalent, it is scanned from right to left. The output we get is the reverse of the required prefix equivalent.
- Conversion of infix to prefix requires a queue of operands and a stack of operators.
- The order of operands in a prefix expression is the same as that in its infix equivalent.
- If the scanned operator o1 and the operator o2 at the stack top have the same priority, then the associativity of o2 is checked. If o2 is right-associative, it is popped from the stack.
Concept of Linked List
| A linked list is one of the fundamental data structures, and can be used to implement other data structures. It consists of a sequence of nodes, each containing arbitrary data fields and one or two references ("links") pointing to the next and/or previous nodes. The principal benefit of a linked list over a conventional array is that the order of the linked items may differ from the order in which the data items are stored in memory or on disk, allowing the list of items to be traversed in a different order. A linked list is a self-referential datatype because it contains a pointer or link to another datum of the same type. Linked lists permit insertion and removal of nodes at any point in the list in constant time, but do not allow random access. Several different types of linked list exist: singly-linked lists, doubly-linked lists, and circularly-linked lists. |
| An array is represented in memory using sequential mapping, which has the property that elements are fixed distance apart. |
But this has the following disadvantage:
- It makes insertion or deletion at any arbitrary position in an array a costly operation, because this involves the movement of some of the existing elements.
- When we want to represent several lists by using arrays of varying size, either we have to represent each list using a separate array of maximum size or we have to represent each of the lists using one single array.
- The first one will lead to wastage of storage, and the second will involve a lot of data movement.
So we have to use an alternative representation to overcome these disadvantages. One alternative is a linked representation. In a linked representation, it is not necessary that the elements be a fixed distance apart. Instead, we can place the elements anywhere in memory, but to make an element part of the same list, it must be linked with the previous element of the list. This is done by storing the address of the next element in the previous element itself. This requires that every element be capable of holding the data as well as the address of the next element. Thus every element must be a structure with a minimum of two fields: one for holding the data value, which we call the data field, and the other for holding the address of the next element, which we call the link field. Therefore, a linked list is a list of elements in which the elements can be placed anywhere in memory, and these elements are linked with each other using an explicit link field, that is, by storing the address of the next element in the link field of the previous element.
Types of linked lists
Linearly-linked list
Singly-linked list
The simplest kind of linked list is a singly-linked list (or slist for short), which has one link per node. This link points to the next node in the list, or to a null value or empty list if it is the final node. A singly-linked list node thus contains two values: the value of the current node and a link to the next node.
Doubly-linked list
A more sophisticated kind of linked list is a doubly-linked list or two-way linked list. Each node has two links: one points to the previous node, or to a null value or empty list if it is the first node; and one points to the next node, or to a null value or empty list if it is the final node. A doubly-linked list node thus contains three fields: the value, the link forward to the next node, and the link backward to the previous node. In some very low-level languages, XOR-linking offers a way to implement doubly-linked lists using a single word for both links, although the use of this technique is usually discouraged.
Circularly-linked list
In a circularly-linked list, the first and final nodes are linked together. This can be done for both singly and doubly linked lists. To traverse a circular linked list, you begin at any node and follow the list in either direction until you return to the original node. Viewed another way, circularly-linked lists can be seen as having no beginning or end. This type of list is most useful for managing buffers for data ingest, and in cases where you have one object in a list and wish to see all the other objects in it. The pointer to the whole list may be called the access pointer.
Singly-circularly-linked list
In a singly-circularly-linked list, each node has one link, as in an ordinary singly-linked list, except that the next link of the last node points back to the first node. As in a singly-linked list, new nodes can only be efficiently inserted after a node we already have a reference to. For this reason, it is usual to retain a reference only to the last element in a singly-circularly-linked list, as this allows quick insertion at the beginning, and also allows access to the first node through the last node's next pointer.
Doubly-circularly-linked list
In a doubly-circularly-linked list, each node has two links, as in a doubly-linked list, except that the previous link of the first node points to the last node and the next link of the last node points to the first node. As in doubly-linked lists, insertions and removals can be done at any point with access to any nearby node. Although structurally a doubly-circularly-linked list has no beginning or end, an external access pointer may formally establish the pointed-to node as the head or tail node, and maintain order just as well as a doubly-linked list with sentinel nodes.
Sentinel nodes
Linked lists sometimes have a special dummy or sentinel node at the beginning and/or at the end of the list, which is not used to store data. Its purpose is to simplify or speed up some operations by ensuring that every data node always has a previous and/or next node, and that every list (even one that contains no data elements) always has a "first" and "last" node. Lisp has such a design: the special value nil is used to mark the end of a 'proper' singly-linked list, or chain of cons cells as they are called. A list does not have to end in nil, but one that did not would be termed 'improper'.
Inserting a Node
| A linked list is a recursive data structure. A recursive data structure is a data structure that has the same form regardless of the size of the data. You can easily write recursive programs for such data structures. |
Example
Program
# include <stdio.h>
# include <stdlib.h>
struct node
{
int data;
struct node *link;
};
struct node *insert(struct node *p, int n)
{
if(p==NULL)
{
p=(struct node *)malloc(sizeof(struct node));
if(p==NULL)
{
printf("Error\n");
exit(0);
}
p->data = n;
p->link = NULL;
}
else
p->link = insert(p->link, n); /* the while loop is replaced by a recursive call */
return (p);
}
void printlist ( struct node *p )
{
printf("The data values in the list are\n");
while (p != NULL)
{
printf("%d\t", p->data);
p = p->link;
}
}
int main()
{
int n;
int x;
struct node *start = NULL;
printf("Enter the number of nodes to be created\n");
scanf("%d", &n);
while ( n-- > 0 )
{
printf("Enter the data value to be placed in a node\n");
scanf("%d", &x);
start = insert ( start, x );
}
printf("The created list is\n");
printlist ( start );
return 0;
}
Explanation
- This recursive version also uses a strategy of inserting a node in an existing list to create the list.
- An insert function is used to create the list. The insert function takes a pointer to an existing list as the first parameter, and a data value with which the new node is to be created as the second parameter. It creates the new node by using the data value, then appends it to the end of the list. It then returns a pointer to the first node of the list.
- Initially, the list is empty, so the pointer to the starting node is NULL. Therefore, when insert is called the first time, the new node created by the insert function becomes the start node.
- Subsequently, the insert function traverses the list by recursively calling itself.
- The recursion terminates when it creates a new node with the supplied data value and appends it to the end of the list.
Points to Remember
- A linked list has a recursive data structure.
- Writing recursive programs for such structures is programmatically convenient.
Sorting and Reversing a Linked List
| Introduction |
| To sort a linked list, first we traverse the list searching for the node with a minimum data value. Then we remove that node and append it to another list which is initially empty. We repeat this process with the remaining list until the list becomes empty, and at the end, we return a pointer to the beginning of the list to which all the nodes are moved, as shown in image below: |
| Sorting of a linked list |
| To reverse a list, we maintain a pointer each to the previous and the next node, then we make the link field of the current node point to the previous, make the previous equal to the current, and the current equal to the next, as shown in A linked list showing the previous, current, and next nodes at some point during reversal process. |
| Therefore, the code needed to reverse the list is |
| prev = NULL;
while (curr != NULL) {
next = curr->link;
curr->link = prev;
prev = curr;
curr = next;
} |
| Program |
| # include <stdio.h>
# include <stdlib.h>
struct node
{
int data;
struct node *link;
};
struct node *insert(struct node *p, int n)
{
struct node *temp;
if(p==NULL)
{
p=(struct node *)malloc(sizeof(struct node));
if(p==NULL)
{
printf("Error\n");
exit(0);
}
p->data = n;
p->link = NULL;
}
else
{
temp = p;
while (temp->link != NULL)
temp = temp->link;
temp->link = (struct node *)malloc(sizeof(struct node));
if(temp->link == NULL)
{
printf("Error\n");
exit(0);
}
temp = temp->link;
temp->data = n;
temp->link = NULL;
}
return (p);
}
void printlist ( struct node *p )
{
printf("The data values in the list are\n");
while (p != NULL)
{
printf("%d\t", p->data);
p = p->link;
}
}
/* a function to reverse a list */
struct node *reverse(struct node *p)
{
struct node *prev, *curr;
prev = NULL;
curr = p;
while (curr != NULL)
{
p = p->link;
curr->link = prev;
prev = curr;
curr = p;
}
return (prev);
}
/* a function to sort a list */
struct node *sortlist(struct node *p)
{
struct node *temp1, *temp2, *min, *prev, *q;
q = NULL;
while (p != NULL)
{
prev = NULL;
min = temp1 = p;
temp2 = p->link;
while ( temp2 != NULL )
{
if (min->data > temp2->data)
{
min = temp2;
prev = temp1;
}
temp1 = temp2;
temp2 = temp2->link;
}
if (prev == NULL)
p = min->link;
else
prev->link = min->link;
min->link = NULL;
if ( q == NULL)
q = min; /* the node with the lowest data value becomes the first node of the list pointed to by q */
else
{
temp1 = q; /* traverse the list pointed to by q to get a pointer to its last node */
while ( temp1->link != NULL)
temp1 = temp1->link;
temp1->link = min; /* append the node with the lowest data value to the end of the list pointed to by q */
}
}
return (q);
}
int main()
{
int n;
int x;
struct node *start = NULL;
printf("Enter the number of nodes to be created\n");
scanf("%d", &n);
while ( n-- > 0 )
{
printf("Enter the data value to be placed in a node\n");
scanf("%d", &x);
start = insert ( start, x );
}
printf("The created list is\n");
printlist ( start );
start = sortlist(start);
printf("The sorted list is\n");
printlist ( start );
start = reverse(start);
printf("The reversed list is\n");
printlist ( start );
return 0;
} |
| Explanation |
| The working of the sorting function on an example list is shown below |
| Sorting of a linked list |
| The working of a reverse function is shown below |
| Output: |
Deleting a Node from Link List
| To delete a node, first we determine the number of the node to be deleted (this is based on the assumption that the nodes of the list are numbered serially from 1 to n). The list is then traversed to get a pointer to the node whose number is given, as well as a pointer to the node that appears before the node to be deleted. Then the link field of the node before the node to be deleted is made to point to the node that appears after it, and the node to be deleted is freed. Images 1 and 2 show the list before and after deletion, respectively. |
Program
# include <stdio.h>
# include <stdlib.h>
struct node *delet ( struct node *, int );
int length ( struct node * );
struct node
{
int data;
struct node *link;
};
struct node *insert(struct node *p, int n)
{
struct node *temp;
if(p==NULL)
{
p=(struct node *)malloc(sizeof(struct node));
if(p==NULL)
{
printf("Error\n");
exit(0);
}
p-> data = n;
p-> link = NULL;
}
else
{
temp = p;
while (temp-> link != NULL)
temp = temp-> link;
temp-> link = (struct node *)malloc(sizeof(struct node));
if(temp -> link == NULL)
{
printf("Error\n");
exit(0);
}
temp = temp-> link;
temp-> data = n;
temp-> link = NULL;
}
return (p);
}
void printlist ( struct node *p )
{
printf("The data values in the list are\n");
while (p!= NULL)
{
printf("%d\t", p->data);
p = p-> link;
}
}
int main()
{
int n;
int x;
struct node *start = NULL;
printf("Enter the number of nodes to be created\n");
scanf("%d", &n);
while ( n-- > 0 )
{
printf("Enter the data value to be placed in a node\n");
scanf("%d", &x);
start = insert ( start, x );
}
printf("\nThe list before deletion is\n");
printlist ( start );
printf("\nEnter the number of the node to be deleted\n");
scanf("%d", &n);
start = delet ( start, n );
printf("\nThe list after deletion is\n");
printlist ( start );
printf("\n");
return 0;
}
/* a function to delete the specified node*/
struct node *delet ( struct node *p, int node_no )
{
struct node *prev, *curr ;
int i;
if (p == NULL )
{
printf("There is no node to be deleted\n");
}
else
{
if ( node_no > length (p))
{
printf("Error\n");
}
else
{
prev = NULL;
curr = p;
i = 1 ;
while ( i < node_no )
{
prev = curr;
curr = curr-> link;
i = i+1;
}
if ( prev == NULL )
{
p = curr -> link;
free ( curr );
}
else
{
prev -> link = curr -> link ;
free ( curr );
}
}
}
return(p);
}
/* a function to compute the length of a linked list */
int length ( struct node *p )
{
int count = 0 ;
while ( p != NULL )
{
count++;
p = p->link;
}
return ( count ) ;
}
Explanation
Output:
Doubly Link List
| A more sophisticated kind of linked list is a doubly-linked list or two-way linked list. Each node has two links: one points to the previous node, or to a null value or empty list if it is the first node; and one points to the next node, or to a null value or empty list if it is the final node. |
The following are problems with singly linked lists:
- A singly linked list allows traversal of the list in only one direction.
- Deleting a node from the list requires keeping track of the previous node, that is, the node whose link points to the node to be deleted.
- If the link in any node gets corrupted, the remaining nodes of the list become unusable.
These problems can be overcome by adding one more link to each node, which points to the previous node. When such a link is added to every node of a list, the corresponding linked list is called a doubly linked list. Therefore, a doubly linked list is a linked list in which every node contains two links, called the left link and the right link. The left link of a node points to the previous node, whereas the right link points to the next node. Like a singly linked list, a doubly linked list can be a chain, or it may be circular, with or without a header node. If it is a chain, the left link of the first node and the right link of the last node will be NULL, as shown in the image below:
If it is a circular list without a header node, the right link of the last node points to the first node. The left link of the first node points to the last node, as shown in image below
If it is a circular list with a header node, the left link of the first node and the right link of the last node point to the header node. The right link of the header node points to the first node and the left link of the header node points to the last node of the list, as shown in image below:
Therefore, the following representation is required to be used for the nodes of a doubly linked list.
struct dnode
{
int data;
struct dnode *left,*right;
};
Program A program for building and printing the elements of a doubly linked list follows:
# include <stdio.h>
# include <stdlib.h>
struct dnode
{
int data;
struct dnode *left, *right;
};
struct dnode *insert(struct dnode *p, struct dnode **q, int n)
{
struct dnode *temp;
/* if the existing list is empty then insert a new node as the starting node */
if(p==NULL)
{
p=(struct dnode *)malloc(sizeof(struct dnode));
/* creates new node data value passed as parameter */
if(p==NULL)
{
printf("Error\n");
exit(0);
}
p->data = n;
p-> left = p->right =NULL;
*q =p;
}
else
{
temp = (struct dnode *)malloc(sizeof(struct dnode));
/* creates new node using data value passed as parameter and puts its address in the temp */
if(temp == NULL)
{
printf("Error\n");
exit(0);
}
temp->data = n;
temp->left = (*q);
temp->right = NULL;
(*q)->right = temp;
(*q) = temp;
}
return (p);
}
void printfor( struct dnode *p )
{
printf("The data values in the list in the forward order are:\n");
while (p!= NULL)
{
printf("%d\t", p->data);
p = p->right;
}
}
void printrev( struct dnode *p )
{
printf("The data values in the list in the reverse order are:\n");
while (p!= NULL)
{
printf("%d\t", p->data);
p = p->left;
}
}
int main()
{
int n;
int x;
struct dnode *start = NULL ;
struct dnode *end = NULL;
printf("\n");
printf("\nEnter the number of nodes to be created\t:");
scanf("%d", &n);
while ( n-- > 0 )
{
printf("\nEnter the data value to be placed in a node\t");
scanf("%d", &x);
start = insert ( start, &end, x );
}
printf("\nThe created list is\n");
printfor ( start );
printf("\n");
printrev(end);
printf("\n");
return 0;
}
Explanation
- This program uses a strategy of inserting a node in an existing list to create it. For this, an insert function is used. The insert function takes a pointer to an existing list as the first parameter.
- The pointer to the last node of a list is the second parameter. A data value with which the new node is to be created is the third parameter. This creates a new node using the data value, appends it to the end of the list, and returns a pointer to the first node of the list. Initially, the list is empty, so the pointer to the start node isNULL. When insert is called the first time, the new node created by the insert becomes the start node.
- Subsequently, insert creates a new node that stores the pointer to the created node in a temporary pointer. Then the left link of the node pointed to by the temporary pointer becomes the last node of the existing list, and the right link points to NULL. After that, it updates the value of the end pointer to make it point to this newly appended node.
- The main function reads the value of the number of nodes in the list, and calls insert that many times by going in a while loop, in order to get a doubly linked listwith the specified number of nodes created.
Output:
Tree
Concept of a Tree
| Definition of Tree |
| A tree is a widely-used data structure that emulates a tree structure with a set of linked nodes. |
| A tree is a set of one or more nodes T such that: |
| there is a specially designated node called the root; the remaining nodes are partitioned into n disjoint sets T1, T2, …, Tn, each of which is a tree. |
| Trees are used to impose a hierarchical structure on a collection of data items. For example, we need such a hierarchical structure while preparing organizational charts and genealogies, and to represent the syntactic structure of a source program in compilers. So the study of trees as one of the data structures is important. |
| Degree of a Node of a Tree |
| The degree of a node of a tree is the number of subtrees having this node as a root. In other words, the degree is the number of descendants of a node. If the degree is zero, it is called a terminal or leaf node of a tree. |
| Degree of a Tree |
| The degree of a tree is defined as the maximum of the degrees of the nodes of the tree, that is, degree of tree = max(degree(node i)) for i = 1 to n |
| Level of a Node |
| We define the level of a node by taking the level of the root node as 1, and incrementing it by 1 as we move from the root towards the subtrees. So the level of all the immediate descendants of the root node will be 2, the level of their descendants will be 3, and so on. We then define the depth of the tree to be the maximum value of the level over the nodes of the tree. |
Binary Tree
| A binary tree is a tree data structure in which each node has at most two children. Typically the child nodes are called left and right. One common use of binary trees is binary search trees; another is binary heaps. |
| A binary tree is a special case of tree as defined in the preceding section, in which no node of a tree can have a degree of more than 2. Therefore, a binary tree is a set of zero or more nodes T such that: |
| there is a specially designated node called the root of the tree; the remaining nodes are partitioned into two disjoint sets, T1 and T2, each of which is a binary tree. T1 is called the left subtree and T2 the right subtree, or vice versa. |
| Types of Binary Tree |
- A rooted binary tree is a rooted tree in which every node has at most two children.
- A full binary tree, or proper binary tree, is a tree in which every node has zero or two children.
- A perfect binary tree (sometimes complete binary tree) is a full binary tree in which all leaves are at the same depth.
- A complete binary tree is a tree with n levels, where for each level d <= n − 1, the number of existing nodes at level d is equal to 2^d. This means all possible nodes exist at these levels. An additional requirement for a complete binary tree is that for the nth level, while every node does not have to exist, the nodes that do exist must fill in from left to right. (This usage overlaps with perfect binary tree.)
- A balanced binary tree is where the depth of all the leaves differs by at most 1.
- A rooted complete binary tree can be identified with a free magma.
- An almost complete binary tree is a tree in which each node that has a right child also has a left child. Having a left child does not require a node to have a right child. Stated alternately, an almost complete binary tree is a tree where for a right child, there is always a left child, but for a left child there may not be a right child.
- A degenerate tree is a tree where for each parent node, there is only one associated child node. This means that in a performance measurement, the tree will behave like a linked list data structure.
- The number of nodes n in a perfect binary tree can be found using this formula: n = 2^(h+1) − 1, where h is the height of the tree.
- The number of leaf nodes n in a perfect binary tree can be found using this formula: n = 2^h, where h is the height of the tree.
Representation of a Binary Tree
If a binary tree is a complete binary tree, it can be represented using an array capable of holding n elements, where n is the number of nodes in the tree. We store the data value of the ith node of the tree at index i in the array. That means node i maps to the ith index of the array; the parent of node i maps to index i/2, the left child of node i maps to index 2i, and the right child maps to index 2i + 1. For example, a complete binary tree with depth k = 3 and n = 5 nodes can be represented using an array of 5, as shown in the image below:
An array representation of a binary tree is not suitable for frequent insertions and deletions, even though no storage is wasted if the binary tree is a complete binary tree. It makes insertion and deletion in a tree costly. Therefore, instead of using an array representation, we can use a linked representation, in which every node is represented as a structure with three fields: one for holding data, one for linking it with the left subtree, and the third for linking it with right subtree as shown here:
| leftchild | data | rightchild |
We can create such a structure using the following C declaration:
struct tnode
{
int data;
struct tnode *lchild,*rchild;
};
A tree representation that uses this node structure is shown below:
Binary Tree Traversal
| Order of Traversal of Binary Tree |
| The following are the possible orders in which a binary tree can be traversed: |
| LDR
LRD
DLR
RDL
RLD
DRL |
Where L stands for traversing the left subtree, R stands for traversing the right subtree, and D stands for processing the data of the node. Therefore, LDR is the order of traversal in which we start at the root node, visit the left subtree, process the data of the root node, and then visit the right subtree. Since the left and right subtrees are also binary trees, the same procedure is applied recursively while visiting them. The order LDR is called inorder, the order LRD is called postorder, and the order DLR is called preorder. The remaining three orders are not used. If the processing we do with the data during the traversal is simply printing the data value, then the output generated for a tree using inorder, preorder, and postorder is shown in the image below.
If an expression is represented as a binary tree, the inorder traversal of the tree gives us an infix expression, whereas the postorder traversal gives us a postfix expression as shown below:
A binary tree of an expression along with its inorder and postorder.
Searching Binary Tree
| A binary search tree is a binary tree that may be empty; every node contains an identifier. The identifier of any node in the left subtree is less than the identifier of the root, and the identifier of any node in the right subtree is greater than the identifier of the root. Both the left subtree and the right subtree are themselves binary search trees. |
The binary search tree is basically a binary tree, and therefore it can be traversed in inorder, preorder, and postorder. If we traverse a binary search tree in inorder and print the identifiers contained in the nodes, we get a sorted list of identifiers in ascending order. A binary search tree is an important search structure. For example, consider the problem of searching a list. If a list is ordered, searching becomes faster if we use a contiguous list and perform a binary search. But if we also need to make changes in the list, such as inserting new entries and deleting old entries, a contiguous list would be much slower, because insertion and deletion in a contiguous list require moving many of the entries every time. So we may think of using a linked list, because it permits insertions and deletions to be carried out by adjusting only a few pointers. But in a linked list there is no way to move through the list other than one node at a time, permitting only sequential access. Binary trees provide an excellent solution to this problem. By making the entries of an ordered list into the nodes of a binary search tree, we can search for a key in O(log n) steps, provided the tree remains reasonably balanced.
The Program
The following program shows how to build a binary tree in a C program. It uses dynamic memory allocation, pointers, and recursion. A binary tree is a very useful data structure, since it allows efficient insertion, searching, and deletion in a sorted list. As such a tree is essentially a recursively defined structure, recursive programming is the natural and efficient way to handle it.
- tree → empty | node left-branch right-branch
- left-branch → tree
- right-branch → tree
#include <stdlib.h>
#include <stdio.h>

struct tree_el {
    int val;
    struct tree_el *right, *left;
};
typedef struct tree_el node;

void insert(node **tree, node *item) {
    if (!(*tree)) {
        *tree = item;
        return;
    }
    if (item->val < (*tree)->val)
        insert(&(*tree)->left, item);
    else if (item->val > (*tree)->val)
        insert(&(*tree)->right, item);
}

void printout(node *tree) {
    if (tree == NULL) return;
    printout(tree->left);
    printf("%d\n", tree->val);
    printout(tree->right);
}

int main(void) {
    node *curr, *root = NULL;
    int i;
    for (i = 1; i <= 10; i++) {
        curr = (node *)malloc(sizeof(node));
        curr->left = curr->right = NULL;
        curr->val = rand();
        insert(&root, curr);
    }
    printout(root);
    return 0;
}
Counting Nodes of Binary Tree
| To count the number of nodes in a given binary tree, the tree is traversed recursively until a leaf node is encountered. When a leaf node is encountered, a count of 1 is returned to its previous activation (the activation for its parent), which takes the counts returned from both of its children's activations, adds 1 to their sum, and returns this value to the activation of its parent. This way, when the activation for the root of the tree returns, it returns the total number of nodes in the tree. |
Program A complete C program to count the number of nodes is as follows:
#include <stdio.h>
#include <stdlib.h>
struct tnode
{
int data;
struct tnode *lchild, *rchild;
};
int count(struct tnode *p)
{
if( p == NULL)
return(0);
else
if( p->lchild == NULL && p->rchild == NULL)
return(1);
else
return(1 + (count(p->lchild) + count(p->rchild)));
}
struct tnode *insert(struct tnode *p,int val)
{
struct tnode *temp1,*temp2;
if(p == NULL)
{
p = (struct tnode *)malloc(sizeof(struct tnode)); /* insert the new node as root node */
if(p == NULL)
{
printf("Cannot allocate\n");
exit(1);
}
p->data = val;
p->lchild=p->rchild=NULL;
}
else
{
temp1 = p;
/* traverse the tree to get a pointer to that node whose child will be the newly created node*/
while(temp1 != NULL)
{
temp2 = temp1;
if( temp1 ->data > val)
temp1 = temp1->lchild;
else
temp1 = temp1->rchild;
}
if( temp2->data > val)
{
temp2->lchild = (struct tnode *)malloc(sizeof(struct tnode)); /* insert the newly created node as left child */
temp2 = temp2->lchild;
if(temp2 == NULL)
{
printf("Cannot allocate\n");
exit(1);
}
temp2->data = val;
temp2->lchild = temp2->rchild = NULL;
}
else
{
temp2->rchild = (struct tnode *)malloc(sizeof(struct tnode)); /* insert the newly created node as right child */
temp2 = temp2->rchild;
if(temp2 == NULL)
{
printf("Cannot allocate\n");
exit(1);
}
temp2->data = val;
temp2->lchild = temp2->rchild = NULL;
}
}
return(p);
}
/* a function to traverse the binary tree in inorder */
void inorder(struct tnode *p)
{
if(p != NULL)
{
inorder(p->lchild);
printf("%d\t", p->data);
inorder(p->rchild);
}
}
int main(void)
{
struct tnode *root = NULL;
int n, x;
printf("Enter the number of nodes\n");
scanf("%d", &n);
while(n-- > 0)
{
printf("Enter the data value\n");
scanf("%d", &x);
root = insert(root, x);
}
inorder(root);
printf("\nThe number of nodes in tree are: %d\n", count(root));
return 0;
}
Explanation Input:
- The number of nodes that the tree to be created should have
- The data values of each node in the tree to be created
Output:
- The data value of the nodes of the tree in inorder
- The count of the number of nodes in the tree.
Example
Deleting Node from BST
| Of course, if we are trying to delete a leaf, there is no problem. We just delete it and the rest of the tree is exactly as it was, so it is still a BST. |
There is another simple situation: suppose the node we're deleting has only one subtree. In the following example, '3' has only one subtree.
To delete a node with one subtree, we just 'link past' the node, i.e., connect the parent of the node directly to the node's only subtree. This always works, whether the one subtree is on the left or on the right. Deleting '3' gives us:
which we normally draw:
Finally, let us consider the only remaining case: how to delete a node having two subtrees. For example, how do we delete '6'? We would like to do this with the minimum amount of work and disruption to the structure of the tree. The standard solution is based on this idea: we leave the node containing '6' exactly where it is, but we get rid of the value 6 and find another value to store in that node. This value is taken from a node below the '6' node, and it is that node that is actually removed from the tree. Deletion of a node with two children To delete a node from a binary search tree, the method to be used depends on whether the node to be deleted has one child, two children, or no child.
To delete a node pointed to by x, we start by letting y be a pointer to the parent of the node pointed to by x. We store the pointer to the left child of the node pointed to by x in a temporary pointer temp. We then make the left child of the node pointed to by y the left child of the node pointed to by x. We then traverse the tree with the root as the node pointed to by temp to get its right leaf, and make the right child of this right leaf the right child of the node pointed to by x, as shown in the image below:
Another method is to store the pointer to the right child of the node pointed to by x in a temporary pointer temp. We then make the left child of the node pointed to by y the right child of the node pointed to by x. We then traverse the tree with the root as the node pointed to by temp to get its left leaf, and make the left child of this left leaf the left child of the node pointed to by x, as shown in the image below:
Deletion of a Node with One Child Consider the following binary tree:
If we want to delete a node pointed to by x, we can do that by letting y be a pointer to the parent of the node pointed to by x. We make the left child of the node pointed to by y the right child of the node pointed to by x, and dispose of the node pointed to by x, as shown in the image below.
Graph Introduction
| Graphs are natural models that are used to represent arbitrary relationships among data objects. We often need to represent such arbitrary relationships among the data objects while dealing with problems in computer science, engineering, and many other disciplines. Therefore, the study of graphs as one of the basic data structures is important. |
A graph is a kind of data structure, specifically an abstract data type (ADT), that consists of a set of nodes (also called vertices) and a set of edges that establish relationships (connections) between the nodes. The graph ADT follows directly from the graph concept in mathematics. Informally, G=(V,E) consists of vertices, the elements of V, which are connected by edges, the elements of E. Formally, a graph, G, is defined as an ordered pair, G=(V,E), where V is a finite set and E is a set consisting of two-element subsets of V. Comparison with other data structures Graph data structures are non-hierarchical and therefore suitable for data sets where the individual elements are interconnected in complex ways. For example, a computer network can be modeled with a graph. Hierarchical data sets can be represented by a binary or nonbinary tree. It is worth mentioning, however, that trees can be seen as a special form of graph.
Representations of Graph Choices of representation Two main data structures for the representation of graphs are used in practice. The first is called an adjacency list, and is implemented by representing each node as a data structure that contains a list of all adjacent nodes. The second is an adjacency matrix, in which the rows and columns of a two-dimensional array represent source and destination vertices, and entries in the matrix indicate whether an edge exists between the vertices. Adjacency lists are preferred for sparse graphs; otherwise, an adjacency matrix is a good choice. Finally, for very large graphs with some regularity in the placement of edges, a symbolic graph is a possible choice of representation.

Array Representation One way of representing a graph with n vertices is to use an n × n matrix (that is, a matrix with n rows and n columns, so that there is a row as well as a column corresponding to every vertex of the graph). If there is an edge from vi to vj, then the entry in the matrix with row index vi and column index vj is set to 1 (adj[vi, vj] = 1, if (vi, vj) is an edge of graph G). If e is the total number of edges in the graph, then there will be 2e entries set to 1, as long as G is an undirected graph; whereas if G were a directed graph, only e entries would be set to 1 in the adjacency matrix. The adjacency matrix representation of an undirected as well as a directed graph is shown in the image below:
Example The adjacency matrix representation of the following digraph (directed graph), along with the indegree and outdegree of each node, is shown here:
Linked List Representation Another way of representing a graph G is to maintain a list for every vertex containing all vertices adjacent to that vertex, as shown in the image below:
Traversing a Graph Perhaps the most fundamental graph problem is to traverse every edge and vertex in a graph in a systematic way. Indeed, most of the basic algorithms you will need for bookkeeping operations on graphs will be applications of graph traversal. These include:
- Printing or validating the contents of each edge and/or vertex.
- Copying a graph, or converting between alternate representations.
- Counting the number of edges and/or vertices.
- Identifying the connected components of the graph.
- Finding paths between two vertices, or cycles if they exist.
Since any maze can be represented by a graph, where each junction is a vertex and each hallway an edge, any traversal algorithm must be powerful enough to get us out of an arbitrary maze. For efficiency, we must make sure we don’t get lost in the maze and visit the same place repeatedly. By being careful, we can arrange to visit each edge exactly twice. For correctness, we must do the traversal in a systematic way to ensure that we don’t miss anything. To guarantee that we get out of the maze, we must make sure our search takes us through every edge and vertex in the graph. The key idea behind graph traversal is to mark each vertex when we first visit it and keep track of what we have not yet completely explored. Although bread crumbs or unraveled threads are used to mark visited places in fairy-tale mazes, we will rely on Boolean flags or enumerated types. Each vertex will always be in one of the following three states:
- undiscovered – the vertex in its initial, virgin state.
- discovered – the vertex after we have encountered it, but before we have checked out all its incident edges.
- completely-explored – the vertex after we have visited all its incident edges.
Obviously, a vertex cannot be completely-explored before we discover it, so over the course of the traversal the state of each vertex progresses from undiscovered to discovered to completely-explored. We must also maintain a structure containing all the vertices that we have discovered but not yet completely explored. Initially, only a single start vertex is considered to have been discovered.

To completely explore a vertex, we must evaluate each edge going out of it. If an edge goes to an undiscovered vertex, we mark it discovered and add it to the list of work to do. If an edge goes to a completely-explored vertex, we ignore it, since further contemplation will tell us nothing new about the graph. We can also ignore any edge going to a discovered but not completely-explored vertex, since the destination must already reside on the list of vertices to completely explore. Regardless of which order we use to fetch the next vertex to explore, each undirected edge will be considered exactly twice, once when each of its endpoints is explored. Directed edges will be considered only once, when exploring the source vertex.

Every edge and vertex in the connected component must eventually be visited. Why? Suppose the traversal didn't visit everything, meaning that there exists a vertex u that remains unvisited whose neighbor v was visited. This neighbor v will eventually be explored, and we will certainly visit u when we do so. Thus we must find everything that is there to be found. The order in which we explore the vertices depends upon the container data structure used to store the discovered but not completely-explored vertices. There are two important possibilities:
- Queue – by storing the vertices in a first in, first out (FIFO) queue, we explore the oldest unexplored vertices first. Thus our explorations radiate out slowly from the starting vertex, defining a so-called breadth-first search.
- Stack – by storing the vertices in a last in, first out (LIFO) stack, we explore the vertices by lurching along a path, visiting a new neighbor if one is available, and backing up only when we are surrounded by previously discovered vertices. Thus our explorations quickly wander away from our starting point, defining a so-called depth-first search.
| DAG (Directed Acyclic Graph) | |||
| A directed acyclic graph, also called a dag or DAG, is a directed graph with no directed cycles; that is, for any vertex v, there is no nonempty directed path that starts and ends on v. DAGs appear in models where it doesn't make sense for a vertex to have a path to itself; for example, if an edge u→v indicates that v is a part of u, such a path would indicate that u is a part of itself, which is impossible. Informally speaking, a DAG "flows" in a single direction. | |||
| Concept | |||
| A directed acyclic graph (DAG) is a directed graph with no cycles. A DAG represents more general relationships than trees, but less general than arbitrary directed graphs. An example of a DAG is given in the image below. | |||
| DAGs are useful in representing the syntactic structure of arithmetic expressions with common sub-expressions. | |||
| For example, consider the following expression: | |||
| (a+b)*c + ((a+b) + e) | |||
| In this expression, the term (a + b) is a common sub-expression, and it is therefore represented in the DAG by a vertex with more than one incoming edge, as shown in the image below: | |||
| Each directed acyclic graph gives rise to a partial order ≤ on its vertices, where u ≤ v exactly when there exists a directed path from u to v in the DAG. However, many different DAGs may give rise to this same reachability relation. Among all such DAGs, the one with the fewest edges is the transitive reduction of each of them, and the one with the most is their transitive closure. In particular, the transitive closure is the reachability order ≤ itself. | |||
| Properties | |||
| Every directed acyclic graph has a topological sort, an ordering of the vertices such that each vertex comes before all vertices it has edges to. In general, this ordering is not unique. Any two graphs representing the same partial order have the same set of topological sort orders. | |||
| DAGs can be considered to be a generalization of trees in which certain subtrees can be shared by different parts of the tree. In a tree with many identical subtrees, this can lead to a drastic decrease in space requirements to store the structure. Conversely, a DAG can be expanded to a forest of rooted trees using this simple algorithm: | |||
|
|||
| If we explore the graph without modifying it or comparing nodes for equality, this forest will appear identical to the original DAG. | |||
| Some algorithms become simpler when used on DAGs instead of general graphs. For example, search algorithms like depth-first search without iterative deepening normally must mark vertices they have already visited and not visit them again. If they fail to do this, they may never terminate because they follow a cycle of edges forever. Such cycles do not exist in DAGs. | |||
Hashing, Searching & Sorting Introduction There are many applications requiring a search for a particular element. Searching refers to finding out whether a particular element is present in the list. The method we use depends on how the elements of the list are organized: if the list is unordered, we use linear (sequential) search, whereas if the list is ordered, we use binary search. A sequential search proceeds by comparing the key with elements in the list one by one, and continues until either we find a match or the end of the list is encountered. If we find a match, the search terminates successfully by returning the index of the matching element. If the end of the list is reached without a match, the search terminates unsuccessfully.

Searching Algorithm A search algorithm, broadly speaking, is an algorithm that takes a problem as input and returns a solution to the problem, usually after evaluating a number of possible solutions. Most of the algorithms studied by computer scientists that solve problems are kinds of search algorithms. The set of all possible solutions to a problem is called the search space. Brute-force or "naïve"/uninformed search algorithms use the simplest, most intuitive method of searching through the search space, whereas informed search algorithms use heuristic functions to apply knowledge about the structure of the search space to try to reduce the amount of time spent searching. Types of searching algorithms:
| 1 | Uninformed search |
| 2 | List search |
| 3 | Tree search |
| 4 | Graph search |
| 5 | SQL search |
| 6 | Tradeoff-based search |
| 7 | Informed search |
| 8 | Adversarial search |
Uninformed search An uninformed search algorithm is one that does not take into account the specific nature of the problem. As such, it can be implemented in general, and the same implementation can then be used in a wide range of problems thanks to abstraction. The drawback is that most search spaces are extremely large, and an uninformed search (especially of a tree) will take a reasonable amount of time only for small examples. As such, to speed up the process, sometimes only an informed search will do.

List search List search algorithms are perhaps the most basic kind of search algorithm. The goal is to find one element of a set by some key (perhaps containing other information related to the key). As this is a common problem in computer science, the computational complexity of these algorithms has been well studied. The simplest such algorithm is linear search, which simply examines each element of the list in order. It has an expensive O(n) running time, where n is the number of items in the list, but can be used directly on any unprocessed list. A more sophisticated list search algorithm is binary search; it runs in O(log n) time. This is significantly better than linear search for large lists of data, but it requires that the list be sorted before searching (see sorting algorithm) and also support random access. Interpolation search is better than binary search for large sorted lists with fairly even distributions, but has a worst-case running time of O(n). Grover's algorithm is a quantum algorithm that offers quadratic speedup over the classical linear search for unsorted lists; however, it requires a currently non-existent quantum computer on which to run. Hash tables are also used for list search, requiring only constant time for search in the average case, but more space overhead and a poor O(n) worst-case search time.
Another search based on specialized data structures uses self-balancing binary search trees and requires O(log n) time to search; these can be seen as extending the main ideas of binary search to allow fast insertion and removal. See associative array for more discussion of list search data structures. Most list search algorithms, such as linear search, binary search, and self-balancing binary search trees, can be extended with little additional cost to find all values less than or greater than a given key, an operation called range search. The glaring exception is hash tables, which cannot perform such a search efficiently.

Tree search Tree search algorithms are the heart of searching techniques. These search trees of nodes, whether that tree is explicit or implicit (generated on the fly). The basic principle is that a node is taken from a data structure, and its successors are examined and added to the data structure. By manipulating the data structure, the tree is explored in different orders, for instance level by level (breadth-first search) or by reaching a leaf node first and backtracking (depth-first search). Other examples of tree searches include iterative-deepening search, depth-limited search, bidirectional search, and uniform-cost search.

Graph search Many of the problems in graph theory can be solved using graph traversal algorithms, such as Dijkstra's algorithm, Kruskal's algorithm, the nearest neighbour algorithm, and Prim's algorithm. These can be seen as extensions of the tree-search algorithms.

SQL search Many of the problems in tree search can be solved using SQL-type searches. SQL typically works best on structured data. It offers one advantage over hierarchical search in that it allows access to the data in many different ways. In a hierarchical search your path is forced by the branches of the tree (for example, names in alphabetical order), while with SQL you have the flexibility of accessing the data along multiple dimensions (name, address, income, etc.).
Tradeoff-based search While SQL search offers great flexibility to search the data, it still operates the way a computer does: by constraints. In SQL, constraints are used to eliminate data, while tradeoff-based search uses a more "human" metaphor. For example, if you are searching for a car in a dataset, your SQL statement looks like: select car from dataset where price < $30,000 and Consumption > 30MPG and Color = 'RED'. A tradeoff-type query would instead look like "I like the red car, but I will settle for the blue one if it is $2,000 cheaper".

Informed search In an informed search, a heuristic that is specific to the problem is used as a guide. A good heuristic will make an informed search dramatically outperform any uninformed search. There are few prominent informed list-search algorithms. A possible member of that category is a hash table with a hashing function that is a heuristic based on the problem at hand. Most informed search algorithms explore trees. These include best-first search and A*. Like the uninformed algorithms, they can be extended to work for graphs as well.

Adversarial search In games such as chess, there is a game tree of all possible moves by both players and the resulting board configurations, and we can search this tree to find an effective playing strategy. This type of problem has the unique characteristic that we must account for any possible move our opponent might make. To account for this, game-playing computer programs, as well as other forms of artificial intelligence like machine planning, often use search algorithms like the minimax algorithm, search tree pruning, and alpha-beta pruning.

Sorting Algorithms A sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order.
Efficient sorting is important to optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions:
- The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order);
- The output is a permutation, or reordering, of the input.
Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956. Although many consider it a solved problem, useful new sorting algorithms are still being invented to this day (for example, library sort was first published in 2004). Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide-and-conquer algorithms, data structures, randomized algorithms, best, worst and average case analysis, time-space tradeoffs, and lower bounds. Popular Sorting Algorithms
- Bubble sort
- Selection sort
- Insertion sort
- Shell sort
- Merge sort
- Heapsort
- Quicksort
- Bucket sort
- Radix sort
1. Bubble Sort Bubble sort is a straightforward and simplistic method of sorting data that is used in computer science education. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. While simple, this algorithm is highly inefficient and is rarely used except in education. A slightly better variant, cocktail sort, works by inverting the ordering criteria and the pass direction on alternating passes. Its average case and worst case are both O(n²).

2. Selection Sort Selection sort is a simple sorting algorithm that improves on the performance of bubble sort. It works by first finding the smallest element using a linear scan and swapping it into the first position in the list, then finding the second smallest element by scanning the remaining elements, and so on. Selection sort is unique compared to almost any other algorithm in that its running time is not affected by the prior ordering of the list: it performs the same number of operations because of its simple structure. Selection sort also requires only n swaps, and hence just Θ(n) memory writes, which is optimal for any sorting algorithm. Thus it can be very attractive if writes are the most expensive operation, but otherwise selection sort will usually be outperformed by insertion sort or the more complicated algorithms.

3. Insertion Sort Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly-sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list.
In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting of all following elements over by one. The insertion sort works just like its name suggests: it inserts each item into its proper place in the final list. The simplest implementation of this requires two list structures, the source list and the list into which sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already sorted items and repeatedly swapping it with the preceding item until it is in place. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists. This method is much more efficient than the bubble sort, though it has more constraints.

4. Shell Sort Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out-of-order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort. Although this method is inefficient for large data sets, it is one of the fastest algorithms for sorting small numbers of elements (sets with fewer than 1000 or so elements). Another advantage of this algorithm is that it requires relatively small amounts of memory.

5. Merge Sort Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4…) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on, until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n).

6. Heapsort Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but it accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows heapsort to run in O(n log n) time.

7. Quicksort Quicksort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, we choose an element, called a pivot, move all smaller elements before the pivot, and move all greater elements after it. This can be done efficiently in linear time and in-place. We then recursively sort the lesser and greater sublists. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but they are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, this makes quicksort one of the most popular sorting algorithms, available in many standard libraries. The most complex issue in quicksort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower, O(n²), performance, but if at each step we choose the median as the pivot then it works in O(n log n).

8. Bucket Sort Bucket sort is a sorting algorithm that works by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm.
A variation of this method, called the single-buffered count sort, is faster than quicksort and takes about the same time to run on any set of data.

9. Radix Sort Radix sort is an algorithm that sorts a list of fixed-size numbers of length k in O(n · k) time by treating them as bit strings. We first sort the list by the least significant bit while preserving their relative order using a stable sort. Then we sort them by the next bit, and so on from right to left, and the list will end up sorted. Most often, the counting sort algorithm is used to accomplish the bitwise sorting, since the number of values a bit can have is small.
Hashing Function A data object called a symbol table is required to be defined and implemented in many applications, such as compiler/assembler writing. A symbol table is nothing but a set of pairs (name, value), where value represents a collection of attributes associated with the name, and the collection of attributes depends on the program element identified by the name. For example, if a name x is used to identify an array in a program, then the attributes associated with x are the number of dimensions, the lower bound and upper bound of each dimension, and the element type. Therefore, a symbol table can be thought of as a linear list of pairs (name, value), and we can use a list data object for realizing a symbol table.

A symbol table is referred to or accessed frequently for adding a name, or for storing or retrieving the attributes of a name. Therefore, accessing efficiency is a prime concern when designing a symbol table. The most common method of implementing a symbol table is to use a hash table. Hashing is a method of directly computing the index of the table by using a suitable mathematical function called a hash function.

Note The hash function operates on the name to be stored in the symbol table, or whose attributes are to be retrieved from the symbol table. If h is a hash function and x is a name, then h(x) gives the index of the table where x, along with its attributes, can be stored. If x is already stored in the table, then h(x) gives the index of the table where it is stored, in order to retrieve the attributes of x from the table.

There are various methods of defining a hash function. One is the division method. In this method, we take the sum of the values of the characters, divide it by the size of the table, and take the remainder. This gives us an integer value lying in the range 0 to (n-1), if the size of the table is n. Another method is the mid-square method.
In this method, the identifier is first squared and then the appropriate number of bits from the middle of the square is used as the hash value. Since the middle bits of the square usually depend on all the characters in the identifier, it is expected that different identifiers will result in different hash values. The number of middle bits that we select depends on the table size. Therefore, if r is the number of middle bits that we are using to form the hash value, then the table size will be 2^r. So when we use this method, the table size is required to be a power of 2. A third method is folding, in which the identifier is partitioned into several parts, all but the last part being of the same length. These parts are then added together to obtain the hash value. To store the name or to add attributes of the name, we compute the hash value of the name, and place the name or the attributes, as the case may be, at that place in the table whose index is the hash value of the name. To retrieve the attribute values of a name kept in the symbol table, we apply the hash function to the name to obtain the index of the table where its attributes are stored. So we find that no comparisons are required; the time required for the retrieval is independent of the table size. The retrieval is possible in a constant amount of time, which is the time taken for computing the hash function. Therefore a hash table seems to be the best choice for realization of the symbol table, but there is one problem associated with hashing, and that is collision. A hash collision occurs when two identifiers are mapped to the same hash value. This happens because a hash function defines a mapping from a set of valid identifiers to the set of those integers that are used as indices of the table. Therefore we see that the domain of the mapping defined by the hash function is much larger than the range of the mapping, and hence the mapping is of a many-to-one nature.
Therefore, when we implement a hash table, a suitable collision-handling mechanism is to be provided, which will be activated when there is a collision. Collision handling involves finding an alternative location for one of the two colliding symbols. For example, if x and y are different identifiers and h(x) = h(y) = i, then x and y are colliding symbols. If x is encountered before y, then the ith entry of the table will be used for accommodating the symbol x, but later on, when y comes, there is a hash collision. Therefore we have to find a suitable alternative location either for x or for y. This means we can either accommodate y in that location, or we can move x to that location and place y in the ith location of the table. Various methods are available to obtain an alternative location to handle the collision. They differ from each other in the way in which the search for an alternative location is made. The following are commonly used collision-handling techniques: Linear Probing or Linear Open Addressing In this method, if for an identifier x, h(x) = i, and if the ith location is already occupied, we search for a location close to the ith location by doing a linear search, starting from the (i+1)th location, to accommodate x. This means we start from the (i+1)th location and do a linear search until we get an empty location; once we get an empty location, we accommodate x there. Rehashing In rehashing, we find an alternative empty location by modifying the hash function and applying the modified hash function to the colliding symbol. For example, if x is the symbol and h(x) = i, and if the ith location is already occupied, then we modify the hash function h to h1, and find out h1(x). If h1(x) = j and the jth location is empty, then we accommodate x in the jth location. Otherwise, we once again modify h1 to some h2 and repeat the process until the collision is handled.
Once the collision is handled, we revert to the original hash function before considering the next symbol. Overflow Chaining Overflow chaining is a method of implementing a hash table in which the collisions are handled automatically. In this method, we use two tables: a symbol table to accommodate identifiers and their attributes, and a hash table, which is an array of pointers pointing to symbol table entries. Each symbol table entry is made of three fields: the first for holding the identifier, the second for holding the attributes, and the third for holding the link or pointer that can be made to point to any symbol table entry. The insertions into the symbol table are done as follows: If x is the symbol to be inserted, it will be added to the next available entry of the symbol table. The hash value of x is then computed. If h(x) = i, and the ith hash table pointer does not point to any symbol table entry, then it is made to point to the symbol table entry in which x is stored. If the ith hash table pointer is already pointing to some symbol table entry, then the link field of the symbol table entry containing x is made to point to that symbol table entry to which the ith hash table pointer is pointing, and the ith hash table pointer is made to point to the symbol table entry containing x. This is equivalent to building a linked list on the ith index of the hash table. The retrieval of attributes is done as follows: If x is a symbol, then we obtain h(x), use this value as the index of the hash table, and traverse the list built on this index to get the entry which contains x. A typical hash table implemented using this technique is shown here.
The symbols to be stored are x1, y1, z1, x2, y2, z2. The hash function that we use is h(symbol) = (value of first letter of the symbol) mod n, where n is the size of the table. If
h(x1) = i
h(y1) = j
h(z1) = k
then
h(x2) = i
h(y2) = j
h(z2) = k
since each pair shares the same first letter. Therefore, the contents of the symbol table will be the one shown in the image below:
Linear Search Linear search, also known as sequential search, is a search algorithm suitable for searching a set of data for a particular value. It operates by checking every element of a list one at a time, in sequence, until a match is found. Linear search runs in O(N). If the data are distributed randomly, on average (N+1)/2 comparisons will be needed. The best case is that the value is equal to the first element tested, in which case only 1 comparison is needed. The worst case is that the value is not in the list (or is the last item in the list), in which case N comparisons are needed. The simplicity of the linear search means that if just a few elements are to be searched, it is less trouble than more complex methods that require preparation, such as sorting the list to be searched or building more complex data structures, especially when entries may be subject to frequent revision. Another possibility is when certain values are much more likely to be searched for than others, and it can be arranged that such values will be amongst the first considered in the list. There are many applications requiring a search for a particular element. Searching refers to finding out whether a particular element is present in the list. The method that we use for this depends on how the elements of the list are organized. If the list is an unordered list, then we use linear or sequential search, whereas if the list is an ordered list, then we use binary search. The search proceeds by sequentially comparing the key with elements in the list, and continues until either we find a match or the end of the list is encountered. If we find a match, the search terminates successfully by returning the index of the element in the list which has matched. If the end of the list is encountered without a match, the search terminates unsuccessfully. Example Program:
#include <stdio.h>
#define MAX 10
/*Linear Search Function*/
void lsearch(int list[],int n,int element)
{
int i, flag = 0;
for(i=0;i<n;i++)
if( list[i] == element)
{
printf("The element whose value is %d is present at position %d in the list\n", element, i+1);
flag =1;
break;
}
if( flag == 0)
printf("The element whose value is %d is not present in the list\n", element);
}
void readlist(int list[],int n)
{
int i;
printf("Enter the elements\n");
for(i=0;i<n;i++)
scanf("%d",&list[i]);
}
/*Function to print content of list */
void printlist(int list[],int n)
{
int i;
printf("The elements of the list are: \n");
for(i=0;i<n;i++)
printf("%d\t",list[i]);
}
int main()
{
int list[MAX], n, element;
printf("Enter the number of elements in the list (max = 10)\n");
scanf("%d",&n);
readlist(list,n);
printf("\nThe list entered is:\n");
printlist(list,n);
printf("\nEnter the element to be searched\n");
scanf("%d",&element);
lsearch(list,n,element);
return 0;
}
Output: Try it yourself.
Binary Search A binary search algorithm (or binary chop) is a technique for finding a particular value in a sorted list. It makes progressively better guesses, and closes in on the sought value by selecting the median element in the list, comparing its value to the target value, and determining whether the selected value is greater than, less than, or equal to the target value. A guess that turns out to be too high becomes the new top of the list, and a guess that is too low becomes the new bottom of the list. Pursuing this strategy iteratively, it narrows the search by a factor of two each time, and finds the target value. A binary search is an example of a dichotomic divide and conquer search algorithm. The prerequisite for using binary search is that the list must be sorted. We compare the element to be searched with the element placed approximately in the middle of the list. If a match is found, the search terminates successfully. Otherwise, we continue the search for the key in a similar manner, either in the upper half or the lower half. If the elements of the list are arranged in ascending order, and the key is less than the element in the middle of the list, the search is continued in the lower half. If the elements of the list are arranged in descending order, and the key is greater than the element in the middle of the list, the search is continued in the upper half of the list. The procedure for the binary search is given in the following program. The algorithm The most common application of binary search is to find a specific value in a sorted list. To cast this in the frame of the guessing game (see the example below), realize that we are now guessing the index, or numbered place, of the value in the list. This is useful because, given the index, other data structures will contain associated information.
Suppose a data structure containing the classic collection of name, address, telephone number and so forth has been accumulated, and an array is prepared containing the names, numbered from one to N. A query might be: what is the telephone number for a given name X? To answer this, the array would be searched and the index (if any) corresponding to that name determined, whereupon it would be used to report the associated telephone number and so forth. Appropriate provision must be made for the name not being in the list (typically by returning an index value of zero); indeed, the question of interest might be only whether X is in the list or not. If the list of names is in sorted order, a binary search will find a given name with far fewer probes than the simple procedure of probing each name in the list, one after the other in a linear search, and the procedure is much simpler than organizing a hash table, though that would be faster still, typically averaging just over one probe. This applies for a uniform distribution of search items, but if it is known that some few items are much more likely to be sought for than the majority, then a linear search with the list ordered so that the most popular items are first may do better. The binary search begins by comparing the sought value X to the value in the middle of the list; because the values are sorted, it is clear whether the sought value would belong before or after that middle value, and the search then continues through the correct half in the same way. Only the sign of the difference is inspected: there is no attempt at an interpolation search based on the size of the differences. Example Program
#include <stdio.h>
#define MAX 10
void bsearch(int list[],int n,int element)
{
int l,u,m, flag = 0;
l = 0;
u = n-1;
while(l <= u)
{
m = (l+u)/2;
if( list[m] == element)
{
printf("The element whose value is %d is present at position %d in the list\n", element, m+1);
flag =1;
break;
}
else
if(list[m] < element)
l = m+1;
else
u = m-1;
}
if( flag == 0)
printf("The element whose value is %d is not present in the list\n", element);
}
void readlist(int list[],int n)
{
int i;
printf("Enter the elements\n");
for(i=0;i<n;i++)
scanf("%d",&list[i]);
}
void printlist(int list[],int n)
{
int i;
printf("The elements of the list are: \n");
for(i=0;i<n;i++)
printf("%d\t",list[i]);
}
int main()
{
int list[MAX], n, element;
printf("Enter the number of elements in the list (max = 10)\n");
scanf("%d",&n);
readlist(list,n);   /* the elements must be entered in sorted (ascending) order */
printf("\nThe list is:\n");
printlist(list,n);
printf("\nEnter the element to be searched\n");
scanf("%d",&element);
bsearch(list,n,element);
return 0;
}
Output: Try it yourself.
Bubble Sort Bubble sort is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing two items at a time and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. Bubble sorting is a simple sorting technique in which we arrange the elements of the list by forming pairs of adjacent elements, that is, the pair of the ith and (i+1)th elements. If the order is ascending, we interchange the elements of the pair if the first element of the pair is greater than the second element. That means for every pair (list[i], list[i+1]) for i := 1 to (n-1), if list[i] > list[i+1], we need to interchange list[i] and list[i+1]. Carrying this out once will move the element with the highest value to the last, or nth, position. Therefore, we repeat this process the next time with the elements from the first to the (n-1)th position. This will bring the highest value from among the remaining (n-1) values to the (n-1)th position. We repeat the process with the remaining (n-2) values, and so on. Finally, we arrange the elements in ascending order. This requires (n-1) passes. In the first pass we have (n-1) pairs, in the second pass we have (n-2) pairs, and in the last (or (n-1)th) pass, we have only one pair. Therefore, the number of comparisons required to be carried out is (n-1) + (n-2) + ... + 1 = n(n-1)/2. Example: 5 1 4 2 8 – unsorted array
1 4 2 5 8 – after one pass
1 2 4 5 8 – sorted array The algorithm gets its name from the way smaller elements “bubble” to the top (i.e. the beginning) of the list via the swaps. (Another opinion: it gets its name from the way greater elements “bubble” to the end.) Because it only uses comparisons to operate on elements, it is a comparison sort. This is the easiest comparison sort to implement.
Quick Sort In the quick sort method, an array a[1], ..., a[n] is sorted by selecting some value in the array as a key element. We then swap the first element of the list with the key element so that the key will be in the first position. We then determine the key's proper place in the list. The proper place for the key is one in which all elements to the left of the key are smaller than the key, and all elements to the right are larger. To obtain the key's proper position, we traverse the list in both directions using the indices i and j, respectively. We initialize i to the index that is one more than the index of the key element. That is, if the list to be sorted has indices running from m to n, the key element is at index m, hence we initialize i to (m+1). The index i is incremented until we get an element at the ith position that is greater than the key value. Similarly, we initialize j to n and go on decrementing j until we get an element with a value less than the key's value. We then check to see whether the values of i and j have crossed each other. If not, we interchange the element at the key (mth) position with the element at the jth position. This brings the key element to the jth position, and we find that the elements to its left are less than it, and the elements to its right are greater than it. Therefore we can split the list into two sublists. The first sublist is composed of elements from the mth position to the (j-1)th position, and the second sublist consists of elements from the (j+1)th position to the nth position. We then repeat the same procedure on each of the sublists separately. Choice of the key We can choose any entry in the list as the key. The choice of the first entry is often a poor choice for the key, since if the list has already been sorted, there will be no element less than the first element selected as the key. So, one of the sublists will be empty.
So we choose a key near the center of the list in the hope that our choice will partition the list in such a manner that about half of the elements will end up on one side of the key, and half will end up on the other. Therefore the function getkeyposition is
int getkeyposition(int i, int j)
{
return((i+j)/2);
}
The choice of the key near the center is also arbitrary, so it is not necessary to always divide the list exactly in half. It may also happen that one sublist is much larger than the other. So some other method of selecting a key should be used. A good way to choose a key is to use a random number generator to choose the position of the next key in each activation of quick sort. Therefore, the function getkeyposition is:
int getkeyposition(int i, int j)
{
/* a random position in the range i to j; rand() is declared in <stdlib.h> */
return(i + rand() % (j - i + 1));
}
Example: Program
#include <stdio.h>
#define MAX 10
void swap(int *x,int *y)
{
int temp;
temp = *x;
*x = *y;
*y = temp;
}
int getkeyposition(int i,int j )
{
return((i+j) /2);
}
void qsort(int list[],int m,int n)
{
int key,i,j,k;
if( m < n)
{
k = getkeyposition(m,n);
swap(&list[m],&list[k]);
key = list[m];
i = m+1;
j = n;
while(i <= j)
{
while((i <= n) && (list[i] <= key))
i++;
while((j >= m) && (list[j] > key))
j--;
if( i < j)
swap(&list[i],&list[j]);
}
swap(&list[m],&list[j]);
qsort(list,m,j-1);
qsort(list,j+1,n);
}
}
void readlist(int list[],int n)
{
int i;
printf("Enter the elements\n");
for(i=0;i<n;i++)
scanf("%d",&list[i]);
}
void printlist(int list[],int n)
{
int i;
printf("The elements of the list are: \n");
for(i=0;i<n;i++)
printf("%d\t",list[i]);
}
int main()
{
int list[MAX], n;
printf("Enter the number of elements in the list (max = 10)\n");
scanf("%d",&n);
readlist(list,n);
printf("The list before sorting is:\n");
printlist(list,n);
qsort(list,0,n-1);
printf("\nThe list after sorting is:\n");
printlist(list,n);
return 0;
}
Output: Try it yourself.
Merge Sort Merge sort, or mergesort, is an O(n log n) sorting algorithm. It is easy to implement merge sort so that it is stable, meaning that it preserves the relative order of equal elements. This is another sorting technique having the same average-case and worst-case time complexities, but requiring an additional list of size n. The technique that we use is the merging of two sorted lists of sizes m and n to form a single sorted list of size (m + n). Given a list of size n to be sorted, instead of viewing it as one single list of size n, we start by viewing it as n lists, each of size 1, and merge the first list with the second list to form a single sorted list of size 2. Similarly, we merge the third and the fourth lists to form a second single sorted list of size 2, and so on. This completes one pass. We then consider the first sorted list of size 2 and the second sorted list of size 2, and merge them to form a single sorted list of size 4. Similarly, we merge the third and the fourth sorted lists, each of size 2, to form the second single sorted list of size 4, and so on. This completes the second pass. In the third pass, we merge these adjacent sorted lists, each of size 4, to form sorted lists of size 8. We continue this process until we finally obtain a single sorted list of size n, as shown next.
Heap Sort Heapsort is a comparison-based sorting algorithm, and is part of the selection sort family. Although somewhat slower in practice on most machines than a good implementation of quicksort, it has the advantage of a worst-case O(n log n) runtime. Heapsort is an in-place algorithm, but is not a stable sort. Heapsort is a sorting technique that sorts a contiguous list of length n with O(n log2 n) comparisons and movements of entries, even in the worst case. Hence its worst-case bound is better than that of quick sort, and for a contiguous list it is better than merge sort, since it needs only a small and constant amount of space apart from the list being sorted. Heapsort proceeds in two phases. First, all the entries in the list are arranged to satisfy the heap property, and then the top of the heap is removed and another entry is promoted to take its place, repeatedly. Therefore, we need a function that builds an initial heap to arrange all the entries in the list to satisfy the heap property. The function that builds an initial heap uses a function that adjusts the ith entry in the list, whose entries at positions 2i and 2i + 1 already satisfy the heap property, in such a manner that the entry at the ith position in the list will also satisfy the heap property. Example:
#include <stdio.h>
#define MAX 10
void swap(int *x,int *y)
{
int temp;
temp = *x;
*x = *y;
*y = temp;
}
void adjust( int list[],int i, int n)
{
int j,k,flag;
k = list[i];
flag = 1;
j = 2 * i;
while(j <= n && flag)
{
if(j < n && list[j] < list[j+1])
j++;
if( k >= list[j])
flag =0;
else
{
list[j/2] = list[j];
j = j *2;
}
}
list[j/2] = k;
}
void build_initial_heap( int list[], int n)
{
int i;
for(i=(n/2);i>=0;i--)
adjust(list,i,n-1);
}
void heapsort(int list[],int n)
{
int i;
build_initial_heap(list,n);
for(i=(n-2); i>=0;i--)
{
swap(&list[0],&list[i+1]);
adjust(list,0,i);
}
}
void readlist(int list[],int n)
{
int i;
printf("Enter the elements\n");
for(i=0;i<n;i++)
scanf("%d",&list[i]);
}
void printlist(int list[],int n)
{
int i;
printf("The elements of the list are: \n");
for(i=0;i<n;i++)
printf("%d\t",list[i]);
}
int main()
{
int list[MAX], n;
printf("Enter the number of elements in the list (max = 10)\n");
scanf("%d",&n);
readlist(list,n);
printf("The list before sorting is:\n");
printlist(list,n);
heapsort(list,n);
printf("The list after sorting is:\n");
printlist(list,n);
return 0;
}
Output: Try it yourself.
