For IT and computer engineering graduates, as well as electrical and electronics graduates, who are looking for jobs in government, autonomous, and public limited organizations or in established software firms: those of us already working in various posts at different government institutions retain a responsibility to guide our juniors. We sincerely believe that our juniors are many times more capable than we are; even so, we hope these guidelines will boost their confidence further.

Data Structure for IT Job

What is a data structure?
Data may be organized in different ways. The logical or mathematical representation of data is called a data structure.

 
Operations on linear Data Structures
Traversal : Visit every part of the data structure
Search : Traverse the data structure for a given element
Insertion : Adding new elements to the data structure
Deletion : Removing an element from the data structure
Sorting : Rearranging the elements in some type of order (e.g., increasing or decreasing)
Merging : Combining two similar data structures into one
Difference between ArrayList and LinkedList
ArrayList and LinkedList both implement the List interface and maintain insertion order. Both are non-synchronized classes. The main differences between the ArrayList and LinkedList classes are given below.

1) ArrayList internally uses a dynamic array to store the elements; LinkedList internally uses a doubly linked list to store the elements.

2) Manipulation with ArrayList is slow because it internally uses an array: if any element is removed, all the subsequent elements are shifted in memory. Manipulation with LinkedList is faster because its doubly linked list requires no such shifting.

3) The ArrayList class can act as a list only, because it implements List only. The LinkedList class can act as both a list and a queue, because it implements the List and Deque interfaces.

4) ArrayList is better for storing and accessing data; LinkedList is better for manipulating data.


Basic Operations on  STACK:
Stack operations may involve initializing the stack, using it, and then de-initializing it. Apart from these basics, a stack supports the following two primary operations −
push() − Pushing (storing) an element on the stack.
pop() − Removing (accessing) an element from the stack.
To use a stack efficiently, we need to check the status of stack as well. For the same purpose, the following functionality is added to stacks −
peek() − get the top data element of the stack, without removing it.
isFull() − check if stack is full.
isEmpty() − check if stack is empty.
At all times, we maintain a pointer to the last PUSHed data on the stack. As this pointer always represents the top of the stack, it is named top. The top pointer lets us read the top value of the stack without removing it.
First we should learn about procedures to support stack functions −
peek()
Example
int peek() {
   return stack[top];
}
isfull()
Example
bool isfull() {
   if (top == MAXSIZE - 1)   /* top starts at -1, so the stack is full at index MAXSIZE - 1 */
      return true;
   else
      return false;
}
isempty()
Example
bool isempty() {
   if (top == -1)
      return true;
   else
      return false;
}

Push Operation
The process of putting a new data element onto stack is known as a Push Operation. Push operation involves a series of steps −
 
If the linked list is used to implement the stack, then in step 3, we need to allocate space dynamically.
Algorithm for PUSH Operation
Example
void push(int data) {
   if (!isFull()) {
      top = top + 1;
      stack[top] = data;
   } else {
      printf("Could not insert data, Stack is full.\n");
   }
}
Pop Operation
Accessing the content while removing it from the stack, is known as a Pop Operation. In an array implementation of pop() operation, the data element is not actually removed, instead top is decremented to a lower position in the stack to point to the next value. But in linked-list implementation, pop() actually removes data element and deallocates memory space.
 
Algorithm for Pop Operation
Example
int pop() {
   int data;
   if (!isempty()) {
      data = stack[top];
      top = top - 1;
      return data;
   } else {
      printf("Could not retrieve data, Stack is empty.\n");
      return -1;   /* underflow indicator */
   }
}

All of these operations run in O(1) time.
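Putting the pieces together, here is a minimal runnable C program exercising the procedures above (MAXSIZE, the global top, and the driver are our assumptions):

#include <stdio.h>
#include <stdbool.h>

#define MAXSIZE 8

int stack[MAXSIZE];
int top = -1;                       /* -1 means the stack is empty */

bool isempty() { return top == -1; }
bool isFull()  { return top == MAXSIZE - 1; }
int  peek()    { return stack[top]; }

void push(int data) {
   if (!isFull())
      stack[++top] = data;          /* advance top, then store */
   else
      printf("Could not insert data, Stack is full.\n");
}

int pop() {
   if (!isempty())
      return stack[top--];          /* read top, then retreat */
   printf("Could not retrieve data, Stack is empty.\n");
   return -1;
}

int main(void) {
   push(3); push(5); push(9);
   printf("top: %d\n", peek());     /* prints 9 */
   printf("pop: %d\n", pop());      /* prints 9 */
   printf("pop: %d\n", pop());      /* prints 5 */
   return 0;
}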
Queue Data Structures
A queue is also an abstract data type or a linear data structure, in which the first element is inserted at one end, called the REAR (also called the tail), and the deletion of an existing element takes place at the other end, called the FRONT (also called the head). This makes the queue a FIFO data structure, which means that the element inserted first will also be removed first.
The process of adding an element to the queue is called Enqueue, and the process of removing an element from the queue is called Dequeue.

 
Basic Operations
Queue operations may involve initializing or defining the queue, utilizing it, and then completely erasing it from the memory. Here we shall try to understand the basic operations associated with queues −
enqueue() − add (store) an item to the queue.
dequeue() − remove (access) an item from the queue.
Few more functions are required to make the above-mentioned queue operation efficient. These are −
peek() − Gets the element at the front of the queue without removing it.
isfull() − Checks if the queue is full.
isempty() − Checks if the queue is empty.
In a queue, we always dequeue (or access) the data pointed to by the front pointer, and while enqueuing (or storing) data in the queue we take the help of the rear pointer.

peek()
Example
int peek() {
   return queue[front];
}
isfull()
Example
bool isfull() {
   if (rear == MAXSIZE - 1)
      return true;
   else
      return false;
}
isempty()
Example
bool isempty() {
   if (front < 0 || front > rear)
      return true;
   else
      return false;
}
Enqueue Operation
Queues maintain two data pointers, front and rear. Therefore, queue operations are comparatively more difficult to implement than those of stacks.

 
Example
int enqueue(int data) {
   if (isfull())
      return 0;
   rear = rear + 1;
   queue[rear] = data;
   return 1;
}

Dequeue Operation
Accessing data from the queue is a process of two tasks − access the data where front is pointing and remove the data after access. The following steps are taken to perform dequeue operation −

 
Example
int dequeue() {
   if (isempty())
      return 0;
   int data = queue[front];
   front = front + 1;
   return data;
}

Queue variations
The standard queue data structure has the following variations:
1. Double-ended queue
2. Circular queue
Double-ended queue
In a standard queue, an element is inserted at the back and deleted at the front. However, in a double-ended queue, elements can be inserted and deleted at both the front and the back of the queue.
Functions supported
The following functions are supported by double-ended queues:
Insert at back
void insert_at_back(int queue[], int element, int &rear, int array_size){
    if(rear == array_size)
        printf("Overflow\n");
    else{
        queue[rear] = element;
        rear = rear + 1;
    }
}
Delete from back
void delete_from_back(int queue[], int &rear, int front){
    if(front == rear)
        printf("Underflow\n");
    else{
        rear = rear - 1;
        queue[rear] = 0;
    }
}
Insert at front
void insert_at_front(int queue[], int &rear, int &front, int element, int array_size){
    if(rear == array_size)
        printf("Overflow\n");
    else{
        for(int i=rear; i>front; i--)
            queue[i] = queue[i-1];
        queue[front] = element;
        rear = rear+1;
    }
}
Delete from front
void delete_from_front(int queue[], int &front, int &rear){
    if(front == rear)
        printf("Underflow\n");
    else{
        queue[front] = 0;
        front = front + 1;
    }
}
Get front element
int get_front(int queue[], int front){
    return queue[front];
}
Get rear element
int get_rear(int queue[], int rear){
    return queue[rear-1];
}
What is Binary Insertion Sort?
We can use binary search to reduce the number of comparisons in normal insertion sort. Binary insertion sort uses binary search to find the proper location to insert the selected item at each iteration. In normal insertion sort, finding the location takes O(i) comparisons (at the i-th iteration) in the worst case; binary search reduces this to O(log i). The algorithm as a whole still has a worst-case running time of O(n^2), because the elements still have to be shifted.
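A C sketch of the idea (the helper names are ours): binary search finds the insertion point in O(log i) comparisons, though shifting the elements still costs O(i) moves.

#include <stdio.h>

/* Return the index at which key should be inserted into the sorted range a[lo..hi-1]. */
int insertion_point(int a[], int lo, int hi, int key) {
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (a[mid] <= key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

void binary_insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int pos = insertion_point(a, 0, i, key);  /* O(log i) comparisons */
        for (int j = i; j > pos; j--)             /* shifting is still O(i) */
            a[j] = a[j - 1];
        a[pos] = key;
    }
}

int main(void) {
    int a[] = {37, 23, 17, 12, 72, 31};
    binary_insertion_sort(a, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);                      /* 12 17 23 31 37 72 */
    return 0;
}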
Selection Sort
The idea of selection sort is rather simple: we repeatedly find the next largest (or smallest) element in the array and move it to its final position in the sorted array. Assume that we wish to sort the array in increasing order, i.e. the smallest element at the beginning of the array and the largest element at the end. We begin by selecting the largest element and moving it to the highest index position. We can do this by swapping the element at the highest index and the largest element. We then reduce the effective size of the array by one element and repeat the process on the smaller (sub)array. The process stops when the effective size of the array becomes 1 (an array of 1 element is already sorted). 
For example,
consider the following array, shown with array elements in sequence separated by commas: 
 
The leftmost element is at index zero, and the rightmost element is at the highest array index, in our case, 4 (the effective size of our array is 5). The largest element in this effective array (index 0-4) is at index 2. We have shown the largest element and the one at the highest index in bold. We then swap the element at index 2 with that at index 4. The result is: 
 

We reduce the effective size of the array to 4, making the highest index in the effective array now 3. The largest element in this effective array (index 0-3) is at index 1, so we swap elements at index 1 and 3 (in bold): 
 

The next two steps give us: 
 

The last effective array has only one element and needs no sorting. The entire array is now sorted. The algorithm for an array, x, with lim elements is easy to write down: 
for (eff_size = lim; eff_size > 1; eff_size--)
          find maxpos, the location of the largest element in the effective
               array: index 0 to eff_size - 1
          swap elements of x at index maxpos and index eff_size – 1
Selection sort example:
 
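A direct C translation of the pseudocode above (a sketch; the variable names are ours):

#include <stdio.h>

void selection_sort(int x[], int lim) {
    for (int eff_size = lim; eff_size > 1; eff_size--) {
        int maxpos = 0;                     /* location of the largest element in 0..eff_size-1 */
        for (int i = 1; i < eff_size; i++)
            if (x[i] > x[maxpos])
                maxpos = i;
        int tmp = x[maxpos];                /* swap it with the element at the highest index */
        x[maxpos] = x[eff_size - 1];
        x[eff_size - 1] = tmp;
    }
}

int main(void) {
    int x[] = {11, 9, 17, 5, 12};
    selection_sort(x, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", x[i]);                /* 5 9 11 12 17 */
    return 0;
}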

Merge two sorted arrays, one in ascending order and the other in descending order, into one sorted array. [Bangladesh Power Development Board, Asst. Engr.-2018]
#include <stdio.h>
#define m 3
#define n 3
int main()
{
    int ar1[m], ar2[n], ar3[m+n];   // ar1 is ascending, ar2 is descending
    printf("first array \n ");
    for(int i=0;i<m;i++)
        scanf("%d",&ar1[i]);
    printf("second array \n ");
    for(int i=0;i<n;i++)
        scanf("%d",&ar2[i]);
    int i=0,j=n-1,k=0;              // traverse ar2 backwards, i.e., in ascending order
        while(i!=m && j!= -1)
        {
            if(ar1[i]<ar2[j])

                ar3[k++]=ar1[i++];
            else
                ar3[k++]=ar2[j--];
        }
 while(i!=m)
        {
           ar3[k]=ar1[i];
           k++;
           i++;
        }
 while(j!= -1)
        {
            ar3[k]=ar2[j];
            k++;
            j--;
        }
    printf("sorted array is \n ");
for(int c=0;c<k;c++)
    printf("%d ", ar3[c]);
    return 0;
}
Radix Sort
The lower bound for Comparison based sorting algorithm (Merge Sort, Heap Sort, Quick-Sort .. etc) is Ω(nLogn), i.e., they cannot do better than nLogn.
Counting sort is a linear-time sorting algorithm that sorts in O(n+k) time when elements are in the range from 1 to k.
What if the elements are in range from 1 to n2? 
We can’t use counting sort because counting sort will take O(n2) which is worse than comparison based sorting algorithms. Can we sort such an array in linear time?
Radix Sort is the answer. The idea of Radix Sort is to do digit by digit sort starting from least significant digit to most significant digit. Radix sort uses counting sort as a subroutine to sort.

The Radix Sort Algorithm
1) Do following for each digit i where i varies from least significant digit to the most significant digit.
………….a) Sort input array using counting sort (or any stable sort) according to the i’th digit.
Example:
Original, unsorted list:
170, 45, 75, 90, 802, 24, 2, 66
Sorting by least significant digit (1s place) gives: [*Notice that we keep 802 before 2, because 802 occurred before 2 in the original list, and similarly for pairs 170 & 90 and 45 & 75.]

170, 90, 802, 2, 24, 45, 75, 66

Sorting by next digit (10s place) gives: [*Notice that 802 again comes before 2 as 802 comes before 2 in the previous list.]
802, 2, 24, 45, 66, 170, 75, 90
Sorting by most significant digit (100s place) gives:
2, 24, 45, 66, 75, 90, 170, 802
What is the running time of Radix Sort?
Let there be d digits in the input integers. Radix sort takes O(d*(n+b)) time, where b is the base for representing numbers; for example, for the decimal system, b is 10. What is the value of d? If k is the maximum possible value, then d would be O(logb(k)). So the overall time complexity is O((n+b) * logb(k)), which looks worse than the time complexity of comparison-based sorting algorithms for a large k. Let us first limit k. Let k <= n^c where c is a constant. In that case, the complexity becomes O(n logb(n)). But it still doesn't beat comparison-based sorting algorithms.
What if we make the value of b larger? What should the value of b be to make the time complexity linear? If we set b as n, we get the time complexity as O(n). In other words, we can sort an array of integers with range from 1 to n^c if the numbers are represented in base n (or every digit takes log2(n) bits).

Is Radix Sort preferable to Comparison based sorting algorithms like Quick-Sort?
If we have log2(n) bits for every digit, the running time of radix sort appears to be better than quick sort for a wide range of input numbers. However, the constant factors hidden in asymptotic notation are higher for radix sort, and quick sort uses hardware caches more effectively. Also, radix sort uses counting sort as a subroutine, and counting sort takes extra space to sort the numbers.
Code of Radix sort
#include<iostream>
using namespace std;

// A utility function to get the maximum value in arr[]
int getMax(int arr[], int n)
{
    int mx = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > mx)
            mx = arr[i];
    return mx;
}

// Do counting sort of arr[] according to the digit represented by exp
void countSort(int arr[], int n, int exp)
{
    int output[n]; // output array
    int i, count[10] = {0};
    // store the count of occurrences of each digit
    for (i = 0; i < n; i++)
        count[ (arr[i]/exp)%10 ]++;
    // change count[i] so that it contains the position of this digit in output[]
    for (i = 1; i < 10; i++)
        count[i] += count[i - 1];
    // build the output array; traversing from the end keeps the sort stable
    for (i = n - 1; i >= 0; i--)
    {
        output[count[ (arr[i]/exp)%10 ] - 1] = arr[i];
        count[ (arr[i]/exp)%10 ]--;
    }
    // copy the output array back to arr[]
    for (i = 0; i < n; i++)
        arr[i] = output[i];
}

// The main function that sorts arr[] of size n using radix sort
void radixsort(int arr[], int n)
{
    int m = getMax(arr, n);
    // do counting sort for every digit; exp is 10^i for the i-th digit
    for (int exp = 1; m/exp > 0; exp *= 10)
        countSort(arr, n, exp);
}

void print(int arr[], int n)
{
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
}

int main()
{
    int arr[] = {170, 45, 75, 90, 802, 24, 2, 66};
    int n = sizeof(arr)/sizeof(arr[0]);
    radixsort(arr, n);
    print(arr, n);
    return 0;
}

The number of diagonals of an n-sided polygon is:    [Bangladesh Power Development Board, Asst. Engr.-2018]
n(n − 3) / 2
An octagon has 8(8−3)/2 = 8×5/2 = 20 diagonals.
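A one-line check of the formula in C (the function name is ours):

#include <stdio.h>

int diagonals(int n) { return n * (n - 3) / 2; }

int main(void) {
    printf("%d\n", diagonals(8));   /* prints 20 */
    return 0;
}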

There are an infinite number of notes of 1 tk, 5 tk, 20 tk and 50 tk. Given an amount, you have to find the number of possible ways of summing to that amount using 1, 5, 20 and 50 tk notes; e.g., given the amount 12, you have to return 3. [EGCB-2018]
Solution:- 
The amount can be made in three ways:
1) Twelve 1 tk notes make 12 tk.
2) Two 5 tk notes make 10 tk, and the remaining two 1 tk notes complete 12 tk.
3) One 5 tk note and seven 1 tk notes make 12 tk.
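A C sketch of the standard coin-change dynamic program for counting such combinations (countWays is our name for it):

#include <stdio.h>

/* Count the ways to form amount using the given note denominations. */
int countWays(int amount, const int notes[], int n) {
    int ways[amount + 1];
    for (int v = 0; v <= amount; v++)
        ways[v] = 0;
    ways[0] = 1;                         /* one way to make 0: use no notes */
    for (int i = 0; i < n; i++)          /* one denomination at a time avoids counting reorderings */
        for (int v = notes[i]; v <= amount; v++)
            ways[v] += ways[v - notes[i]];
    return ways[amount];
}

int main(void) {
    int notes[] = {1, 5, 20, 50};
    printf("%d\n", countWays(12, notes, 4));   /* prints 3 */
    return 0;
}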

Merge sort algorithm and code/pseudocode
Merge Sort is a Divide and Conquer algorithm. It divides the input array into two halves, calls itself for the two halves, and then merges the two sorted halves. The merge() function is used for merging the two halves. merge(arr, l, m, r) is the key process: it assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two sorted sub-arrays into one. See the following C implementation for details.
MergeSort(arr[], l,  r)
If r > l
1. Find the middle point to divide the array into two halves:  
             middle m = (l+r)/2
 2. Call mergeSort for first half:   
             Call mergeSort(arr, l, m)
3. Call mergeSort for second half:
             Call mergeSort(arr, m+1, r)
4. Merge the two halves sorted in step 2 and 3:
             Call merge(arr, l, m, r)
The following diagram from Wikipedia shows the complete merge sort process for an example array {38, 27, 43, 3, 9, 82, 10}. If we take a closer look at the diagram, we can see that the array is recursively divided into two halves until the size becomes 1. Once the size becomes 1, the merge process comes into action and starts merging arrays back until the complete array is merged.
 

#include <stdio.h>
#define max 10
int a[11] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44, 0 };
int b[10];
void merging(int low, int mid, int high) {
   int l1, l2, i;
   for(l1 = low, l2 = mid + 1, i = low; l1 <= mid && l2 <= high; i++) {
      if(a[l1] <= a[l2])
         b[i] = a[l1++];
      else
         b[i] = a[l2++];
   }
   while(l1 <= mid)
      b[i++] = a[l1++];
   while(l2 <= high)
      b[i++] = a[l2++];
   for(i = low; i <= high; i++)
      a[i] = b[i];
}
void sort(int low, int high) {
   int mid;
   if(low < high) {
      mid = (low + high) / 2;
      sort(low, mid);
      sort(mid+1, high);
      merging(low, mid, high);
   } else {
return;
   }
}
int main() {
   int i;
   printf("List before sorting\n");
   for(i = 0; i <= max; i++)
      printf("%d ", a[i]);
   sort(0, max);
   printf("\nList after sorting\n");
   for(i = 0; i <= max; i++)
      printf("%d ", a[i]);
}
Another implementation:
#include <stdio.h>
#include <stdlib.h>
void PrintArray(int *array, int n) {
  for (int i = 0; i < n; ++i)
    printf("%d ", array[i]);
  printf("\n");
}
void Merger(int arr[], int lo, int  mi, int hi){
    int *temp = new int[hi-lo+1];//temporary merger array
    int i = lo, j = mi + 1;//i is for left-hand,j is for right-hand
    int k = 0;//k is for the temporary array
    while(i <= mi && j <=hi){
  if(arr[i] <= arr[j])
            temp[k++] = arr[i++];
        else
            temp[k++] = arr[j++];
    }
while(i <= mi)
        temp[k++] = arr[i++];
    while(j <= hi)
        temp[k++] = arr[j++];
    for(k = 0, i = lo; i <= hi; ++i, ++k)
        arr[i] = temp[k];
delete []temp;
}
void MergeSortHelper(int arr[], int lo, int hi){
int mid;
    if(lo < hi){
        mid = (lo + hi) >>1;
        MergeSortHelper(arr, lo, mid);
        MergeSortHelper(arr, mid+1, hi);
        Merger(arr, lo, mid, hi);
    }
}
void MergeSort(int arr[], int arr_size){
    MergeSortHelper(arr, 0, arr_size-1);
}
int main() {
  int array[] = {94, 42, 50, 95, 333, 65, 54, 456, 1, 1234};
  int n = sizeof(array)/sizeof(array[0]);
  printf("Before Merge Sort :\n");
  PrintArray(array, n);
  MergeSort(array, n);
  printf("After Merge Sort :\n");
  PrintArray(array, n);
  return (0);
}
Time Complexity:
A list of size N is divided into O(log N) levels of sublists, and merging all the sublists at one level back together takes O(N) time, so the worst-case running time of this algorithm is O(N log N).
Quick Sort
Quick sort is based on the divide-and-conquer approach: one element is chosen as a pivot and the array is partitioned around it such that the left side of the pivot contains all the elements less than the pivot element, and the right side contains all the elements greater than the pivot.
int partition(int A[], int start, int end) {
    int i = start + 1;
    int piv = A[start];            //make the first element the pivot element
    for(int j = start + 1; j <= end; j++) {
        if(A[j] < piv) {
            swap(A[i], A[j]);      //move elements smaller than the pivot to the left side
            i += 1;
        }
    }
    swap(A[start], A[i-1]);        //place the pivot in its final position
    return i - 1;                  //return the position of the pivot
}
Now, let us see the recursive function Quick_sort :
void quick_sort(int A[], int start, int end) {
   if(start < end) {
        //stores the position of pivot element
        int piv_pos = partition(A, start, end);
        quick_sort(A, start, piv_pos - 1);   //sorts the left side of pivot
        quick_sort(A, piv_pos + 1, end);     //sorts the right side of pivot
   }
}

Write a sorting algorithm that can always sort n elements in O(n log n) time, Analyze the best-case and worst-case time complexity of the algorithm [BUET Academic Data Structure -2014]
Merge sort and heap sort are such sorting algorithms; each can sort n elements in O(n log n) time. Merge sort is described below:
void mergeSort(int arr[], int l, int r) 
{ 
    if (l < r) 
    { 
        int m = l+(r-l)/2; 
        mergeSort(arr, l, m); 
        mergeSort(arr, m+1, r); 
        merge(arr, l, m, r); 
    } 
}
void merge(int arr[], int l, int m, int r) 
{ 
    int i, j, k; 
    int n1 = m - l + 1; 
    int n2 =  r - m; 
    int L[n1], R[n2]; 
for (i = 0; i < n1; i++) 
        L[i] = arr[l + i]; 
    for (j = 0; j < n2; j++) 
        R[j] = arr[m + 1+ j]; 
    i = 0; // Initial index of first subarray 
    j = 0; // Initial index of second subarray 
    k = l; // Initial index of merged subarray 
while (i < n1 && j < n2) 
    { 
        if (L[i] <= R[j]) 
        { 
            arr[k] = L[i]; 
            i++; 
        } 
        else
        { 
            arr[k] = R[j]; 
            j++; 
        } 
        k++; 
    } 
while (i < n1) 
    { 
        arr[k] = L[i]; 
        i++; 
        k++; 
    } 
while (j < n2) 
    { 
        arr[k] = R[j]; 
        j++; 
        k++; 
    } 
}
The best-case, worst-case, and average-case time complexities of merge sort are all the same: O(n log n).

Prove that any comparison sort algorithm requires Ω(n log n) comparisons in the worst case [BUET Academic Data Structure -2014]

This is a classical result. Here is the idea. You can describe a run of your algorithm as a decision tree. At each node there is a comparison operation performed by your algorithm, and the children correspond to the different possible answers. Every execution of your algorithm ends at a leaf. Since at the end the list is sorted, we can in particular tell what the original order of the list was, so we can annotate each leaf with a permutation of n elements. Since there are n! different permutations, there must be at least n! leaves, so the tree must have height at least Ω(log(n!)) = Ω(n log n). This means that there is some computation path that results in Ω(n log n) comparisons.

Write the counting sort algorithm. Prove that the algorithm runs in linear time; Explain why counting sort is simple. [BUET Academic Data Structure -2014]
Counting sort is a very time-efficient (and somewhat space-inefficient) algorithm for sorting that avoids comparisons and exploits the O(1)-time insertions and lookups in an array.
The idea is simple: if you're sorting integers and you know they all fall in the range 1..100, you can generate a sorted array this way:
COUNTING-SORT(A, B, k)
for i = 1 to k
      do C[i] = 0
for j = 1 to length[A]
      do C[A[j]] = C[A[j]] + 1
//C[i] now contains the number of elements equal to i
for i = 2 to k
      do C[i] = C[i] + C[i-1]
//C[i] now contains the number of elements less than or equal to i
for j = length[A] downto 1
      do B[C[A[j]]] = A[j]
         C[A[j]] = C[A[j]] - 1
The loops take O(k), O(n), O(k), and O(n) time respectively, so counting sort runs in O(n + k) time, which is linear when k = O(n). It is simple because it only counts occurrences and copies elements into position; it never compares elements with each other.
Write down the time complexities of different sorting algorithms. [Dutch Bangla Bank-2017]

Algorithm        Best          Average       Worst
Quicksort        Ω(n log(n))   Θ(n log(n))   O(n^2)
Mergesort        Ω(n log(n))   Θ(n log(n))   O(n log(n))
Timsort          Ω(n)          Θ(n log(n))   O(n log(n))
Heapsort         Ω(n log(n))   Θ(n log(n))   O(n log(n))
Bubble Sort      Ω(n)          Θ(n^2)        O(n^2)
Insertion Sort   Ω(n)          Θ(n^2)        O(n^2)
Selection Sort   Ω(n^2)        Θ(n^2)        O(n^2)
Bucket Sort      Ω(n+k)        Θ(n+k)        O(n^2)
Radix Sort       Ω(nk)         Θ(nk)         O(nk)
Counting Sort    Ω(n+k)        Θ(n+k)        O(n+k)
Cubesort         Ω(n)          Θ(n log(n))   O(n log(n))


The running time of comparison-based sorting algorithms is bounded below by Ω(n log n).
A comparison sort can be modeled as a large binary tree called a decision tree, where each node represents a single comparison. Because the sorted list is some permutation of the input list, for an input list of length n there are n! possible permutations of that list. This is a decision tree because each of the n! permutations is represented by a leaf, and the path the algorithm takes to get to each leaf is the series of comparisons and outcomes that yield that particular ordering.
At each level of the tree a comparison is made, and we keep traveling down the tree until the algorithm reaches a leaf. There is a leaf for each permutation, so there are n! leaves, and the worst-case number of comparisons equals the height h of the tree. Any binary tree of height h has at most 2^h leaves. From this,
    n! <= 2^h
Taking the logarithm results in
    h >= log2(n!)
From Stirling's approximation,
    log2(n!) = Ω(n log n)
Therefore,
    h = Ω(n log n)

Binary Search
int binary_search(int a[], int low, int high, int target) {
    if (high < low)
        return -1;                 // search space exhausted
    int middle = (low + high)/2;
    if (target < a[middle])
        return binary_search(a, low, middle-1, target);
    else if (target > a[middle])
        return binary_search(a, middle+1, high, target);
    else
        return middle;             // target == a[middle]
}
Binary search using recursive [GTCL-2016]
#include<stdio.h>
int binarySearch(int x[],int element,int start,int end);
int main(){
int x[20],n,i,index,start=0,end,element;
printf("Enter number of  elements: ");
scanf("%d",&n);
end = n;
printf("Enter array elements: ");
for(i=0;i<n;i++){
scanf("%d",&x[i]);
}
printf("Enter the element to search: ");
scanf("%d",&element);
index = binarySearch(x,element,start,end-1);
if(index == -1)
printf("Element Not Found.\n");
else
printf("Element found at index : %d\n",index);
return 0;
}
int binarySearch(int x[], int element, int start, int end)
{
    int mid;
    if(start > end)
        return -1;
    mid = (start + end)/2;
    if(x[mid] == element)
        return mid;
    else if(x[mid] < element)
        return binarySearch(x, element, mid+1, end);   // search the right half
    else
        return binarySearch(x, element, start, mid-1); // search the left half
}
 (i) Divide and Conquer method is used in  [DESCO  BUET Asst. Eng 2016]
        (a) Merge Sort
        (b) Bubble Sort
        (c) Quick Sort
         (d) Both a & c
  (ii) Dynamic programming approach is used to solve
        (a) Dijkstra algorithm 
        (b) Kruskal’s Algorithm
        (c) Prim’s Algorithm 
        (d) None of these
(iii) Binary search worst time complexity is 
      (a) O(n)
      (b) O(1)
      (c) O(log n)
      (d) O(n^2)
Depth First Search (DFS) Algorithm
n ← number of nodes
Initialize visited[ ] to false (0)
for(i=0;i<n;i++)
    visited[i] = 0;
void DFS(vertex i) [DFS starting from i]
{
    visited[i]=1;
    for each w adjacent to i
        if(!visited[w])
            DFS(w);
}

 
The graph shown above is taken as input in both the programs mentioned below:
Depth First Search (DFS) Program in C [Adjacency Matrix]

#include<stdio.h>
void DFS(int);
int G[10][10],visited[10],n;    //n is the number of vertices; the graph is stored in array G[10][10]
int main()
{
    int i,j;
    printf("Enter number of vertices:");
    scanf("%d",&n);
    //read the adjacency matrix
    printf("\nEnter adjacency matrix of the graph:");
    for(i=0;i<n;i++)
        for(j=0;j<n;j++)
            scanf("%d",&G[i][j]);
    for(i=0;i<n;i++)
        visited[i]=0; //visited is initialized to zero
    DFS(0);
    return 0;
}
void DFS(int i)
{
    int j;
    printf("\n%d",i);
    visited[i]=1;
 for(j=0;j<n;j++)
if(!visited[j]&&G[i][j]==1)
            DFS(j);
}
 


BFS
#include<stdio.h>
int a[20][20],q[20],visited[20],n,i,j,f=0,r=-1;
void bfs(int v) {
    for (i=1;i<=n;i++)
        if(a[v][i] && !visited[i]) {
            q[++r]=i;
            visited[i]=1;   /* mark on enqueue so a vertex is queued only once */
        }
    if(f<=r)
        bfs(q[f++]);
}
int main() {
    int v;
    printf("\n Enter the number of vertices:");
    scanf("%d",&n);
    for (i=1;i<=n;i++) {
        q[i]=0;
        visited[i]=0;
    }
    printf("\n Enter graph data in matrix form:\n");
    for (i=1;i<=n;i++)
        for (j=1;j<=n;j++)
            scanf("%d",&a[i][j]);
    printf("\n Enter the starting vertex:");
    scanf("%d",&v);
    visited[v]=1;
    bfs(v);
    printf("\n The nodes which are reachable are:\n");
    for (i=1;i<=n;i++)
        if(visited[i])
            printf("%d\t",i);
    return 0;
}


Heap:
A heap is a special case of a balanced binary tree data structure in which the root-node key is compared with its children and arranged accordingly. If α has a child node β, then −
key(α) ≥ key(β)
As the value of the parent is greater than that of the child, this property generates a max heap. Based on this criterion, a heap can be of two types −
For Input → 35 33 42 10 14 19 27 44 26 31



Min-Heap − Where the value of the root node is less than or equal to either of its children.
 
Max-Heap − Where the value of the root node is greater than or equal to either of its children.

 
Both trees are constructed using the same input and order of arrival.
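A minimal C sketch of building a max-heap from that input by repeated insertion with sift-up (the array layout and helper names are our assumptions):

#include <stdio.h>

int heap[100];
int size = 0;

/* Insert key at the end, then sift it up while it is larger than its parent. */
void heap_insert(int key) {
    int i = size++;
    heap[i] = key;
    while (i > 0 && heap[(i - 1) / 2] < heap[i]) {
        int parent = (i - 1) / 2;
        int tmp = heap[i]; heap[i] = heap[parent]; heap[parent] = tmp;
        i = parent;
    }
}

int main(void) {
    int input[] = {35, 33, 42, 10, 14, 19, 27, 44, 26, 31};
    for (int i = 0; i < 10; i++)
        heap_insert(input[i]);
    for (int i = 0; i < size; i++)      /* level-order contents; heap[0] is the maximum (44) */
        printf("%d ", heap[i]);
    printf("\n");
    return 0;
}

For a min-heap, the comparison in the sift-up loop is simply reversed.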
Binary Search Tree (BST)
Binary Search Tree is a binary tree in which every node contains only smaller values in its left subtree and only larger values in its right subtree.

left_subtree (keys)  ≤  node (key)  ≤  right_subtree (keys)
 

Example
Construct a Binary Search Tree by inserting the following sequence of numbers...
Build a binary search tree with 11 nodes, and give the algorithm for BST. [EGCB-2018]

10,12,5,4,20,8,7,15 and 13
 

In a binary tree, every node can have a maximum of two children, but there is no ordering of nodes based on their values. In a binary tree, the elements are arranged as they arrive at the tree, from top to bottom and left to right.
Node
Define a node having some data, references to its left and right child nodes.
struct node {
int data;
struct node *leftChild;
struct node *rightChild;
};
Search Operation
Whenever an element is to be searched, start searching from the root node. Then if the data is less than the key value, search for the element in the left subtree. Otherwise, search for the element in the right subtree. Follow the same algorithm for each node.
Algorithm
struct node* search(int data){
   struct node *current = root;
   printf("Visiting elements: ");
   while(current != NULL && current->data != data){   //check for NULL before dereferencing
      printf("%d ",current->data);
      if(current->data > data){
         current = current->leftChild;    //go to left subtree
      }else{
         current = current->rightChild;   //else go to right subtree
      }
   }
   return current;   //NULL if the element was not found
}
Insert Operation
Whenever an element is to be inserted, first locate its proper location. Start searching from the root node, then if the data is less than the key value, search for the empty location in the left subtree and insert the data. Otherwise, search for the empty location in the right subtree and insert the data.
Algorithm
void insert(int data){
   struct node *tempNode = (struct node*) malloc(sizeof(struct node));
   struct node *current;
   struct node *parent;

   tempNode->data = data;
   tempNode->leftChild = NULL;
   tempNode->rightChild = NULL;
   if(root == NULL){
      root = tempNode;
   }else{
      current = root;
      parent = NULL;
      while(1){
         parent = current;
         if(data < parent->data){
            current = current->leftChild;
            if(current == NULL){
               parent->leftChild = tempNode;
               return;
            }
         }else{
            current = current->rightChild;
            if(current == NULL){
               parent->rightChild = tempNode;
               return;
            }
         }
      }
   }
}

Write an algorithm to search stored integer in binary search tree [eastern refinery -2016]
Binary Search Tree
Binary Search Tree, is a node-based binary tree data structure which has the following properties:
The left subtree of a node contains only nodes with keys lesser than the node’s key.
The right subtree of a node contains only nodes with keys greater than the node’s key.
The left and right subtree each must also be a binary search tree.
There must be no duplicate nodes.

 
The above properties of Binary Search Tree provide an ordering among keys so that the operations like search, minimum and maximum can be done fast. If there is no ordering, then we may have to compare every key to search a given key.
 
Searching a key
To search a given key in Binary Search Tree, we first compare it with root, if the key is present at root, we return root. If key is greater than root’s key, we recur for right subtree of root node. Otherwise we recur for left subtree.

// C function to search a given key in a given BST
struct node* search(struct node* root, int key)
{
    if (root == NULL || root->key == key)
       return root;
    if (root->key < key) // Key is greater than root's key
       return search(root->right, key);
    return search(root->left, key); // Key is smaller than root's key
}

Illustration to search 6 in below tree:
1. Start from the root.
2. Compare the element being searched with the root; if it is less than the root, recurse for the left subtree, else recurse for the right subtree.
3. If the element is found anywhere, return true, else return false.
 

Insertion of a key
A new key is always inserted at leaf. We start searching a key from root till we hit a leaf node. Once a leaf node is found, the new node is added as a child of the leaf node.
         100                               100
        /   \        Insert 40            /    \
      20     500    --------->          20     500 
     /  \                              /  \  
    10   30                           10   30
                                              \   
                                              40
#include<stdio.h>
#include<stdlib.h>
struct node
{
    int key;
    struct node *left, *right;
};
  
// A utility function to create a new BST node
struct node *newNode(int item)
{
    struct node *temp =  (struct node *)malloc(sizeof(struct node));
    temp->key = item;
    temp->left = temp->right = NULL;
    return temp;
}
void inorder(struct node *root)
{
    if (root != NULL)
    {
        inorder(root->left);
        printf("%d \n", root->key);
        inorder(root->right);
    }
}
struct node* insert(struct node* node, int key)
{
       if (node == NULL) return newNode(key);
    if (key < node->key)
        node->left  = insert(node->left, key);
    else if (key > node->key)
        node->right = insert(node->right, key);   
    return node;
}
int main()
{
    /* Let us create following BST
              50
           /     \
          30      70
         /  \    /  \
       20   40  60   80 */
    struct node *root = NULL;
    root = insert(root, 50);
    insert(root, 30);
    insert(root, 20);
    insert(root, 40);
    insert(root, 70);
    insert(root, 60);
    insert(root, 80);
    inorder(root);
    return 0;
}
Output:
20
30
40
50
60
70
80
Illustration to insert 2 in below tree:
1. Start from root.
2. Compare the inserting element with root, if less than root, then recurse for left, else recurse for right.
3. After reaching the end, insert the node at the left (if it is less than the current node), else at the right.
 

Time Complexity: The worst case time complexity of search and insert operations is O(h) where h is height of Binary Search Tree. In worst case, we may have to travel from root to the deepest leaf node. The height of a skewed tree may become n and the time complexity of search and insert operation may become O(n).
Some Interesting Facts:
Inorder traversal of BST always produces sorted output.
We can construct a BST with only Preorder or Postorder or Level Order traversal. Note that we can always get inorder traversal by sorting the only given traversal.
Q. Simplify the Boolean function F using don’t care condition d. Implement using AND, NOT, OR gate. F=A’B’D+A’CD+A’BC d= A’BC’D+ACD+ 

What is hashing? How can hashing collisions be resolved by linear probing and quadratic probing?
 

Collision Resolution Methods
1. Open Hashing
Chaining
2. Closed Hashing
Open Addressing
Linear Probing
Quadratic Probing
 

  Separate chaining.
Easier to implement delete.
Performance degrades gracefully.
Clustering less sensitive to poorly-designed hash function.
  Linear probing.
Less wasted space.
Better cache performance
 
 



  
Double Hashing
Double hashing is a collision resolution technique in open-addressed hash tables. Double hashing uses the idea of applying a second hash function to the key when a collision occurs.
Double hashing can be done using :
(hash1(key) + i * hash2(key)) % TABLE_SIZE
Here hash1() and hash2() are hash functions and TABLE_SIZE
is size of hash table. (We repeat by increasing i when collision occurs)
First hash function is typically hash1(key) = key % TABLE_SIZE
A popular second hash function is : hash2(key) = PRIME – (key % PRIME) where PRIME is a prime smaller than the TABLE_SIZE.
A good second hash function:
It must never evaluate to zero
It must make sure that all cells can be probed
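A minimal C sketch of insertion with double hashing under the scheme just described (TABLE_SIZE = 11 and PRIME = 7 are example values; the names are ours):

#include <stdio.h>

#define TABLE_SIZE 11
#define PRIME 7
#define EMPTY -1

int table[TABLE_SIZE];

int hash1(int key) { return key % TABLE_SIZE; }
int hash2(int key) { return PRIME - (key % PRIME); }   /* never evaluates to zero */

/* Probe (hash1 + i*hash2) mod TABLE_SIZE; return the slot used, or -1 if the table is full. */
int insert(int key) {
    for (int i = 0; i < TABLE_SIZE; i++) {
        int slot = (hash1(key) + i * hash2(key)) % TABLE_SIZE;
        if (table[slot] == EMPTY) {
            table[slot] = key;
            return slot;
        }
    }
    return -1;
}

int main(void) {
    for (int i = 0; i < TABLE_SIZE; i++)
        table[i] = EMPTY;
    int keys[] = {12, 44, 13, 88, 23};
    for (int i = 0; i < 5; i++)
        printf("%d -> slot %d\n", keys[i], insert(keys[i]));
    return 0;
}

Note that a prime TABLE_SIZE guarantees the step size is coprime to the table size, so the probe sequence visits every cell.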



B+ Tree construction 
 

Linked List
A linked list is a data structure that can store an indefinite number of items. These items are connected using pointers in a sequential manner.
There are two types of linked list: singly linked lists and doubly linked lists. In a singly linked list, every element contains some data and a link to the next element. On the other hand, every node in a doubly linked list contains some data, a link to the next node and a link to the previous node. The elements of a linked list are called nodes. A node has two fields, i.e. data and next. The data field contains the data being stored in that specific node; it need not be just a single variable, as there may be many variables making up the data section of a node. The next field contains the address of the next node, so this is where the link between nodes is established.
Implementation of Linked List Using C++
As linked list consists of nodes, we need to declare a structure which defines a single node. Our structure should have at least one variable for data section and a pointer for the next node. In C++, our code would look like this:
  struct node
  {
    int data;
    node *next;
  };

Creation of Linked List Using C++
Now, we need a class which will contain the functions to handle the nodes. This class should have two important pointers, i.e. head and tail. The constructor will make them NULL to avoid any garbage value.
  class list
  {
    private:
    node *head, *tail;
    public:
    list()
    {
      head=NULL;
      tail=NULL;
    }
  };
The creation of a new node at the end of linked list has 2 steps:
Linking the newly created node with tail node. Means passing the address of a new node to the next pointer of a tail node.
The tail pointer should always point to the last node. So we will make our tail pointer equal to a new node.
 

The C++ code for the creation of a new node would look like this:
  void createnode(int value) // add a node at the end of the list
    {
      node *temp=new node;
      temp->data=value;
      temp->next=NULL;
      if(head==NULL)
      {
        head=temp;
        tail=temp;
        temp=NULL;
      }
      else
      {
        tail->next=temp;
        tail=temp;
      }
    }
Displaying Linked List Using C++
Now we have a working linked list which allows creating nodes. If we want to see what is placed in our linked list, we have to make a display function. The logic behind this function is that we make a temporary pointer and pass the address of the head node to it. Now we want to print all the nodes on the screen, so we need a loop which runs as many times as there are nodes. Every node contains the address of the next node, so the temporary pointer walks through the whole linked list. When the temporary pointer becomes equal to NULL, the loop terminates.
 

singly linked list The code for displaying nodes of linked list is given below:
  void display()
  {
    node *temp=head;   // a temporary pointer is enough; no new node is needed
    while(temp!=NULL)
    {
      cout<<temp->data<<"\t";
      temp=temp->next;
    }
  }
Both the createnode() and display() functions would be written under the public section of the class.
The basic framework of a singly-linked list is ready. Now it is the time to perform some other operations on the list. Basically, two operations are performed on linked lists:
1. Insertion
2. Deletion
There are three cases considered while inserting a node:
Insertion at the start
Insertion at the end
Insertion at a particular position
Insertion at the Start
Insertion of a new node is quite simple. It is just a 2-step algorithm which is performed to insert a node at the start of a singly linked list.
New node should be connected to the first node, which means the head. This can be achieved by putting the address of the head in the next field of the new node.
New node should be considered as the head. This can be achieved by assigning the new node to head. The diagrammatic demonstration of this process is given below:
 

The code for this process is:
  void insert_start(int value)
  {
    node *temp=new node;
    temp->data=value;
    temp->next=head;
    head=temp;
  }
Insertion at the End
The insertion of a node at the end of a linked list is the same as what we did in the node creation function: the newly created node was inserted at the end of the linked list. So this process is the same.
Insertion at Particular Position
The insertion of a new node at a particular position is a bit difficult to understand. In this case, we don’t disturb the head and tail nodes. Rather, a new node is inserted between two consecutive nodes. So, these two nodes should be accessible by our code. We call one node as current and the other as previous, and the new node is placed between them. This process is shown in a diagram below:
 

Now the new node can be inserted between the previous and current node by just performing two steps:
1. Pass the address of the new node in the next field of the previous node.
2. Pass the address of the current node in the next field of the new node.
We will access these nodes by asking the user at what position he wants to insert the new node. Then we start a loop to reach those specific nodes: we initialize our current node to the head and move through the linked list. At the end, we will have found the two consecutive nodes.
C++ code for insertion of node would be as follows:
  void insert_position(int pos, int value)
  {
    node *pre=NULL;       // walker pointers; only the inserted node is allocated
    node *cur=head;
    node *temp=new node;
    for(int i=1;i<pos;i++)
    {
      pre=cur;
      cur=cur->next;
    }
    temp->data=value;
    pre->next=temp;
    temp->next=cur;
  }
Deletion:
So, you have become familiar with linked list creation. Now, it's time to do some manipulation on the linked list we created. Linked lists give us the great feature of deleting a node. The process of deletion is also easy to implement. The basic structure is to declare a temporary pointer which points to the node to be deleted, and then do a little work on the links of the nodes. There are also three cases in which a node can be deleted:
Deletion at the start
Deletion at the end
Deletion at a particular position
Deletion at the Start
In this case, the first node of the linked list is deleted. I know, you remember that the first node is called the head. So, we are going to delete the head node. The process of deletion includes:
1. Declare a temp pointer and pass the address of the first node, i.e. head to this pointer.
2. Declare the second node of the list as head as it will be the first node of linked list after deletion.
3. Delete the temp node.
 

The C++ code for this process is given below:
  void delete_first()
  {
    node *temp=head;   // point at the node to be deleted; no allocation needed
    head=head->next;
    delete temp;
  }
Deletion at the End
Deletion of the last node is a bit more difficult to understand than the first node. In the case of the first node, you just need access to the head and you can delete it. But in the case of the last node, you also need access to the second-to-last node of the linked list, as you will delete the last node and make the previous node the tail of the linked list.
 

Our algorithm should be capable of finding a node that comes before the last node. This can be achieved by traversing the linked list. We would make two temporary pointers and let them move through the whole linked list. At the end, the previous node will point to the second to the last node and the current node will point to the last node, i.e. node to be deleted. We would delete this node and make the previous node as the tail.
  void delete_last()
  {
    node *current=head;   // walker pointers; nothing new is allocated
    node *previous=NULL;
    while(current->next!=NULL)
    {
      previous=current;
      current=current->next;
    }
    tail=previous;
    previous->next=NULL;
    delete current;
  }
Deletion at a Particular Position
In a linked list, we can delete a specific node. The process of deletion is simple. Here we don't use the head and tail nodes. We ask the user to input the position of the node to be deleted. After that, we just move two temporary pointers through the linked list until we reach our specific node. Then we pass the address of the node after the current node to the previous pointer and delete the current node. This way, the current node is removed from the linked list and the link is established between its previous and next node.
 

The deletion can be done in C++ by using code given below:
  void delete_position(int pos)
  {
    node *current=head;   // walker pointers; nothing new is allocated
    node *previous=NULL;
    for(int i=1;i<pos;i++)
    {
      previous=current;
      current=current->next;
    }
    previous->next=current->next;
    delete current;   // free the unlinked node
  }
Combining all of the operation of Link list 
#include <iostream>
using namespace std;
 struct node
  {
    int data;
    node *next;
  };
class list
  {
    private:
    node *head, *tail;
    public:
    list()
    {
      head=NULL;
      tail=NULL;
    }
void createnode(int value)   // insert at last;
    {
      node *temp=new node;
      temp->data=value;
      temp->next=NULL;
if(head==NULL)
      {
        head=temp;
        tail=temp;
        temp=NULL;
      }
else
      {
        tail->next=temp;
        tail=temp;
      }
    }
void display()
  {
    node *temp=head;
    while(temp!=NULL)
    {
      cout<<temp->data<<"\t";
      temp=temp->next;
    }

  }
void insert_start(int value)
  {
    node *temp=new node;
    temp->data=value;
    temp->next=head;
    head=temp;
  }
void insert_position(int pos, int value)
  {
    node *pre=NULL;
    node *cur=head;
    node *temp=new node;
for(int i=1;i<pos;i++)
    {
      pre=cur;
      cur=cur->next;
    }
    temp->data=value;
    pre->next=temp;
    temp->next=cur;
  }
void delete_first()
  {
    node *temp=head;
    head=head->next;
    delete temp;
  }
void delete_last()
  {
    node *current=head;
    node *previous=NULL;
    while(current->next!=NULL)
    {
      previous=current;
      current=current->next;
    }
    tail=previous;
    previous->next=NULL;
delete current;
  }
  void delete_position(int pos)
  {
    node *current=head;
    node *previous=NULL;
    for(int i=1;i<pos;i++)
    {
      previous=current;
      current=current->next;
    }
    previous->next=current->next;
    delete current;
  }

  };
int main()
{
    list obj;
    obj.createnode(8);
    obj.createnode(78);
    obj.createnode(2);
    obj.display();
    cout<<endl<<"After inserting in the first position ";
    obj.insert_start(4);
    obj.display();
    cout<<endl<<"After deleting the last data item";
    obj.delete_last();
    obj.display();
    cout<<endl<<"After inserting position at 2 data will be ";
    obj.insert_position(2,56);
    obj.display();
    cout<<endl<<"After deleting the first position data ";
    obj.delete_first();
    obj.display();
    cout<<endl;
    return 0;
}
Draw the 11-item hash table that results from using the hash function h(i) = (2i + 5) mod 11 to hash the keys 12, 44, 13, 88, 23, 94, 11, 39, 20, 16, and 5, assuming collisions are handled by double hashing using a secondary hash function h'(k) = 7 - (k mod 7). [BUET DS -2014]
Double Hashing
Double hashing works on a similar idea to linear and quadratic probing. Use a big table and hash into it. Whenever a collision occurs, choose another spot in the table to put the value. The difference here is that instead of choosing the next opening, a second hash function is used to determine the location of the next spot. For example, given hash functions hash1 and hash2 and a key, do the following:
Check location hash1(key). If it is empty, put record in it.
If it is not empty calculate hash2(key).
check if hash1(key)+hash2(key) is open, if it is, put it in
repeat with hash1(key)+2hash2(key), hash1(key)+3hash2(key) and so on, until an opening is found.
As with quadratic probing, you must take care in choosing hash2: hash2 can never return 0, and it must be chosen so that all cells can eventually be probed.
Double Hashing Example-
 

Solution:-
Given hash function h1(k) = (2k + 5) mod 11 and secondary hash function h2(k) = 7 - (k mod 7), the combined hash function is h(k,i) = (h1(k) + i*h2(k)) mod 11.
h1(12) = (24+5) mod 11 = 29 mod 11 = 7
h1(44) = (88+5) mod 11 = 93 mod 11 = 5
h1(13) = (26+5) mod 11 = 31 mod 11 = 9
h1(88) = (176+5) mod 11 = 181 mod 11 = 5, collision;
h2(88) = 7 - (88 mod 7) = 3, so h(88,1) = (5 + 1*3) mod 11 = 8
h1(23) = (46+5) mod 11 = 51 mod 11 = 7, collision;
h2(23) = 7 - (23 mod 7) = 5, so h(23,1) = (7 + 1*5) mod 11 = 1
h1(94) = (188+5) mod 11 = 193 mod 11 = 6
h1(11) = (22+5) mod 11 = 27 mod 11 = 5, collision;
h2(11) = 7 - (11 mod 7) = 3, so h(11,1) = (5 + 1*3) mod 11 = 8, collision again;
h(11,2) = (5 + 2*3) mod 11 = 0
h1(39) = (78+5) mod 11 = 83 mod 11 = 6, collision;
h2(39) = 7 - (39 mod 7) = 3, so h(39,1) = (6 + 1*3) mod 11 = 9, collision;
h(39,2) = (6 + 2*3) mod 11 = 1, collision;
h(39,3) = (6 + 3*3) mod 11 = 4
h1(20) = (40+5) mod 11 = 45 mod 11 = 1, collision;
h2(20) = 7 - (20 mod 7) = 1, so h(20,1) = (1 + 1*1) mod 11 = 2
h1(16) = (32+5) mod 11 = 37 mod 11 = 4, collision;
h2(16) = 7 - (16 mod 7) = 5, so h(16,1) = (4 + 1*5) mod 11 = 9, collision;
h(16,2) = (4 + 2*5) mod 11 = 3
h1(5) = (10+5) mod 11 = 15 mod 11 = 4, collision;
h2(5) = 7 - (5 mod 7) = 2, so h(5,1) = (4 + 1*2) mod 11 = 6, collision;
h(5,2) = (4 + 2*2) mod 11 = 8, collision;
h(5,3) = (4 + 3*2) mod 11 = 10

The resulting table:

Index:  0   1   2   3   4   5   6   7   8   9   10
Key:    11  23  20  16  39  44  94  12  88  13  5

 



Compare linked lists and skip lists [BUET DS -2014]
Skip List 
In computer science, a skip list is a data structure that allows fast search within an ordered sequence of elements. Fast search is made possible by maintaining a linked hierarchy of subsequences, with each successive subsequence skipping over fewer elements than the previous one (see the picture below).


 
Link list 
In computer science, a linked list is a linear collection of data elements whose order is not given by their physical placement in memory; instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence.
 


Write the differences between a binary-search tree and a min-heap. [BUET DS]
A Binary Search Tree (BST) is a hierarchical data structure which follows a special arrangement of elements: for each node with a value x, the left subtree must contain values less than or equal to x, and the right subtree must contain values greater than x.
 


A heap, on the other hand, follows just one property: for a min-heap, the value of every node is less than or equal to the values of its children (both left and right).
A heap is a complete tree (only the last level may not be complete). An example of a min heap:
 


 
Write two procedures for finding the tree-successor and the tree-predecessor in a binary- search tree. For the set of {11, 4, 25, 10, 36, 17,21} of keys. draw a binary search tree of height 3. [BUET DS -2014]

 
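The worked answer is not preserved in these notes; below is a standard C sketch of the two procedures (following CLRS, assuming each node stores a parent pointer):

struct node { int key; struct node *left, *right, *parent; };

struct node* tree_minimum(struct node *x) {
    while (x->left != NULL)
        x = x->left;
    return x;
}

struct node* tree_maximum(struct node *x) {
    while (x->right != NULL)
        x = x->right;
    return x;
}

/* Successor: the node with the smallest key greater than x->key. */
struct node* tree_successor(struct node *x) {
    if (x->right != NULL)
        return tree_minimum(x->right);
    struct node *y = x->parent;              /* climb while x is a right child */
    while (y != NULL && x == y->right) {
        x = y;
        y = y->parent;
    }
    return y;
}

/* Predecessor: the node with the largest key smaller than x->key (the mirror image). */
struct node* tree_predecessor(struct node *x) {
    if (x->left != NULL)
        return tree_maximum(x->left);
    struct node *y = x->parent;              /* climb while x is a left child */
    while (y != NULL && x == y->left) {
        x = y;
        y = y->parent;
    }
    return y;
}

For the keys {11, 4, 25, 10, 36, 17, 21}, inserting in the order 11, 4, 10, 25, 17, 21, 36 gives one valid BST of height 3.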
Write the properties of a red-black tree. Show the red-black trees that result after successively inserting the keys 51, 46, 41, 22, 29,18 into an initially empty red-black tree [BUET DS -2014]
Solution –
A Red-Black Tree is a self-balancing Binary Search Tree (BST) in which every node follows the following rules:
1) Every node has a color, either red or black.
2) The root of the tree is always black.
3) There are no two adjacent red nodes (a red node cannot have a red parent or a red child).
4) Every path from the root to a NULL node has the same number of black nodes.


 

Explain the divide-and-conquer technique. Write the quick sort algorithm, and analyze the time-complexity of the algorithm [BUET DS -2014]
In computer science, divide and conquer is an algorithm design paradigm based on multi-branched recursion. A divide and conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. Quick sort (given earlier) is such an algorithm: it partitions the array around a pivot and recursively sorts the two sides. On average the pivot splits the array roughly evenly, giving T(n) = 2T(n/2) + O(n) = O(n log n); in the worst case the pivot is always the smallest or largest element, giving T(n) = T(n-1) + O(n) = O(n^2).
 

Write the properties of a B-tree. [BUET DS -2014]

 

B+ tree
 

Comparison between B Tree and B+ Tree:

1) A B tree is an organizational structure for information storage and retrieval in the form of a tree in which all terminal nodes are at the same distance from the base, and all non-terminal nodes have between n and 2n sub-trees or pointers (where n is an integer). A B+ tree is an n-ary tree with a variable but often large number of children per node; it consists of a root, internal nodes and leaves, and the root may be either a leaf or a node with two or more children.
2) "B tree" stands for balanced tree; "B+ tree" stands for B plus tree.
3) Both take O(n) space; in a B tree, search, insert and delete are O(log n), while in a B+ tree they are O(logb n).
4) In a B tree, search keys and data are stored in internal or leaf nodes. In a B+ tree, data is stored only in leaf nodes.
5) B trees waste space; B+ trees do not.
6) In a B tree, the leaf nodes cannot be chained using a linked list. In a B+ tree, the leaf node data are ordered in a sequential linked list.
7) Searching a B tree is not as easy as searching a B+ tree.
8) B trees do not store redundant search keys; B+ trees do.
9) B trees are the older version and are not as advantageous as B+ trees; many database system implementers prefer the structural simplicity of a B+ tree.


What is a data structure? When is a data structure called a linear data structure? Write 4 (four) properties of a linear data structure    [BUET DS -2014]
Define a linear and non linear data structure.
Linear data structure: A linear data structure traverses the data elements sequentially, in which only one data element can directly be reached. Ex: Arrays, Linked Lists
Non-Linear data structure: Every data item is attached to several other data items in a way that is specific for reflecting relationships. The data items are not arranged in a sequential structure. Ex: Trees, Graphs

A data structure is said to be linear if its elements form a sequence or a linear list.
Examples:
Array
Linked List
Stacks
Queues
Let a: array[l1..u1, l2..u2] be a 2D integer array. Assume b is the base address of the array a. Then write the array mapping function to calculate address(a[i, j]) in
(i) Column-major order (ii) Row-major order [BUET DS -2014]
1) One - Dimensional Array :
By the definition of a 1-dimensional array, the compiler reserves a contiguous storage region for the set of elements, and the address of the first element of the array is called the Base Address.
For example :
Let the base address be 500. Using an index I whose value ranges over 1 <= I <= N, the address of element X[I] is given by the relation:
    Location ( X[I] ) = Base Address + (I-1)
For example :
When the requirement is to locate the fourth element, I = 4 :
    Location(X[4]) = 500 + (4-1)
                   = 500 + 3
                   = 503
So the address of the fourth element is 503, because the first element is at 500.
When the program refers to an element of the array in any instruction, like write(X[I]) or read(X[I]), the compiler uses this relation to compute the required address.
2) Two - Dimensional Array :
A two dimensional Array A is the collection of 'm X n' elements. Programming language stores the two dimensional array in one dimensional memory in either of two ways -

1) Row Major Order:
First row of the array occupies the first set of memory locations reserved for the array; Second row occupies the next set, and so forth.
To determine element address A[i,j]: 
    Location ( A[ i,j ] ) =Base Address + ( N x ( I - 1 ) ) + ( j - 1 )
For example :
Given an array [1…5,1…7] of integers. Calculate address of element T[4,6], where BA=900.
Solution:- I = 4 , J = 6 ,M= 5 , N= 7
          Location (T [4,6]) = BA + (7 x (4-1)) + (6-1)
          = 900+ (7 x 3) +5
          = 900+ 21 + 5
          = 926
2) Column Major Order:
The elements of the first column are stored linearly, then come the elements of the next column.
To determine element address A[i,j]:
    Location ( A[ i,j ] ) =Base Address + ( M x ( j - 1 ) ) + ( i - 1 )
For example :
Given an array [1…6,1…8] of integers. Calculate address element T[5,7], where BA=300.
Solution:- I = 5 , J = 7, M= 6 , N= 8
          Location (T [5,7]) = BA + (6 x (7-1)) + (5-1)
          = 300+ (6 x 6) +4
          = 300+ 36+4
          = 340
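The two mapping functions are easy to check mechanically. Below is a minimal C++ sketch (the function names are illustrative, not from the text) that evaluates both formulas for 1-based indices:

/* Address of A[i][j] in an M x N array with base address BA, 1-based indices. */
int rowMajorAddress(int BA, int M, int N, int i, int j) {
    (void)M;                              /* M is not needed in row-major order */
    return BA + N * (i - 1) + (j - 1);    /* rows are laid out one after another */
}
int colMajorAddress(int BA, int M, int N, int i, int j) {
    (void)N;                              /* N is not needed in column-major order */
    return BA + M * (j - 1) + (i - 1);    /* columns are laid out one after another */
}
/* rowMajorAddress(900, 5, 7, 4, 6) returns 926 and
   colMajorAddress(300, 6, 8, 5, 7) returns 340, matching the examples above. */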
3) Three-Dimensional Array:
In a three-dimensional array the address is also calculated by two methods, i.e., row-major order and column-major order.
To calculate address of element X[ i,j,k] using row-major order :
    Location ( X[i,j,k] )=BA + MN (k-1) + N (i-1) + (j-1)
To calculate address of element X[ i,j,k] using column-major order
    Location ( X[i,j,k] )=BA + MN (k-1) + M (j-1) + (i-1)
For example :
Given an array [ 1..8, 1..5, 1..7 ] of integers. Calculate address of element A[5,3,6], by using rows and columns methods, if BA=900?
Solution:- The dimensions of A are :
          M=8 , N=5, R=7, i=5, j=3, k=6
          Rows - wise : 
          Location (A[i,j,k]) = BA + MN(k-1) + N(i-1) + (j-1)
          Location(A[5,3,6])= 900 + 8x5(6-1) + 5(5-1) + (3-1)
                      = 900 + 40 x 5 +5 x 4 + 2
                      = 900 + 200 +20 +2
                      = 1122
          Columns - wise : 
          Location (A[i,j,k]) = BA + MN(k-1) + M(j-1) + (i-1)
          Location (A[5,3,6]) = 900 + 8x5(6-1) + 8(3-1) + (5-1)
                      = 900 + 40 x 5 +8 x 2 + 4
                      = 900 + 200 +16 +4
                      = 1120

Tree Traversals (Inorder, Preorder and Postorder)
Unlike linear data structures (Array, Linked List, Queues, Stacks, etc) which have only one logical way to traverse them, trees can be traversed in different ways. Following are the generally used ways for traversing trees.
 

Example Tree
Depth First Traversals:
(a) Inorder (Left, Root, Right) : 4 2 5 1 3
(b) Preorder (Root, Left, Right) : 1 2 4 5 3
(c) Postorder (Left, Right, Root) : 4 5 2 3 1
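For reference, the three traversals are one-liners when written recursively. A minimal C++ sketch (the Node type is an assumption for illustration):

#include <cstdio>
struct Node { int data; Node *left, *right; };

void inorder(Node* t)   { if (t) { inorder(t->left); printf("%d ", t->data); inorder(t->right); } }
void preorder(Node* t)  { if (t) { printf("%d ", t->data); preorder(t->left); preorder(t->right); } }
void postorder(Node* t) { if (t) { postorder(t->left); postorder(t->right); printf("%d ", t->data); } }

Run on the example tree above (1 at the root, 2 and 3 as its children, 4 and 5 under 2), these print exactly the three sequences listed.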
Types of Binary Tree
Following are common types of Binary Trees.
Full Binary Tree A Binary Tree is full if every node has 0 or 2 children. Following are examples of full binary tree. We can also say a full binary tree is a binary tree in which all nodes except leaves have two children.
               18
           /       \  
         15         30  
        /  \        /  \
      40    50    100   40

             18
           /    \   
         15     20    
        /  \       
      40    50   
    /   \
   30   50

               18
            /     \  
          40       30  
                   /  \
                 100   40
In a Full Binary Tree, the number of leaf nodes is the number of internal nodes plus 1:
       L = I + 1
Where L = Number of leaf nodes, I = Number of internal nodes
Complete Binary Tree: A Binary Tree is a complete Binary Tree if all levels are completely filled except possibly the last level, and the last level has all keys as far left as possible.
Following are examples of Complete Binary Trees
               18
           /       \  
         15         30  
        /  \        /  \
      40    50    100   40


               18
           /       \  
         15         30  
        /  \        /  \
      40    50    100   40
     /  \   /
    8   7  9 
Perfect Binary Tree: A Binary Tree is a Perfect Binary Tree if all internal nodes have two children and all leaves are at the same level.
Following are examples of Perfect Binary Trees.
               18
           /       \  
         15         30  
        /  \        /  \
      40    50    100   40


               18
           /       \  
         15         30  
A Perfect Binary Tree of height h (where height is the number of nodes on the path from root to leaf) has 2^h - 1 nodes.
An example of a Perfect Binary Tree is the set of ancestors in a family tree: keep a person at the root, parents as children, parents of parents as their children, and so on.

Balanced Binary Tree
A binary tree is balanced if the height of the tree is O(log n), where n is the number of nodes. For example, AVL trees maintain O(log n) height by making sure that the difference between the heights of the left and right subtrees is at most 1. Red-Black trees maintain O(log n) height by making sure that the number of black nodes on every root-to-leaf path is the same and that there are no adjacent red nodes. Balanced binary search trees are performance-wise good, as they provide O(log n) time for search, insert and delete.

A degenerate (or pathological) tree: a tree where every internal node has one child. Such trees are performance-wise the same as a linked list.
      10
      /
    20
     \
     30
      \
      40     
What is a proper binary tree? Construct the proper binary tree whose inorder and postorder traversals are given as:   [BUET DS -2014]
Inorder : BUETCSE12
Postorder: BEUCTES21
 

proper binary tree  
A full binary tree (sometimes proper binary tree or 2-tree) is a tree in which every node other than the leaves has two children. A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible.
Example-1
 

Example-2
 

Example -3
 
 
 

 




Example -4 
 

 

 
 



Q. How do you implement a Queue using a Stack? What are the running times of the enqueue and dequeue operations of such an implementation? [BUET DS -2014]
Queue using Stacks
This problem is the reverse of implementing a stack using queues. We are given a stack data structure with push and pop operations; the task is to implement a queue using instances of the stack data structure and operations on them.
 

A queue can be implemented using two stacks. Let queue to be implemented be q and stacks used to implement q be stack1 and stack2. q can be implemented in two ways:
Method 1 (by making the enQueue operation costly): this method makes sure that the oldest entered element is always at the top of stack1, so that the deQueue operation just pops from stack1. To put the element at the top of stack1, stack2 is used.
enQueue(q, x)
  1) While stack1 is not empty, push everything from stack1 to stack2.
  2) Push x to stack1 (assuming size of stacks is unlimited).
  3) Push everything back to stack1.
Here time complexity will be O(n)
deQueue(q)
  1) If stack1 is empty then error
  2) Pop an item from stack1 and return it
Here time complexity will be O(1)
#include <bits/stdc++.h>
using namespace std; 
struct Queue {
    stack<int> s1, s2;

    // Move everything to s2, push x, then move everything back,
    // so the oldest element always ends up on top of s1.
    void enQueue(int x)
    {
        while (!s1.empty()) {
            s2.push(s1.top());
            s1.pop();
        }
        s1.push(x);
        while (!s2.empty()) {
            s1.push(s2.top());
            s2.pop();
        }
    }

    // The front of the queue sits on top of s1, so dequeue is a single pop.
    int deQueue()
    {
        if (s1.empty()) {
            cout << "Q is Empty";
            exit(0);
        }
        int x = s1.top();
        s1.pop();
        return x;
    }
};
int main()
{
    Queue q;
    q.enQueue(1);
    q.enQueue(2);
    q.enQueue(3);
 
    cout << q.deQueue() << '\n';
    cout << q.deQueue() << '\n';
    cout << q.deQueue() << '\n';
 
    return 0;
}
Method 2 (by making the deQueue operation costly): in this method, in the en-queue operation, the new element is pushed on top of stack1. In the de-queue operation, if stack2 is empty then all the elements are moved to stack2, and finally the top of stack2 is returned.
enQueue(q,  x)
  1) Push x to stack1 (assuming size of stacks is unlimited).
Here time complexity will be O(1)
deQueue(q)
  1) If both stacks are empty then error.
  2) If stack2 is empty
       While stack1 is not empty, push everything from stack1 to stack2.
  3) Pop the element from stack2 and return it.
Here time complexity will be O(n)
Method 2 is definitely better than method 1.
Method 1 moves all the elements twice in every enQueue operation, while method 2 (in the deQueue operation) moves each element only once, and only when stack2 is empty.
Implementation of method 2:
#include <bits/stdc++.h>
using namespace std;
 
struct Queue {
    stack<int> s1, s2;

    // enQueue is cheap: just push onto s1.
    void enQueue(int x)
    {
        s1.push(x);
    }

    // deQueue pours s1 into s2 (reversing the order) only when s2 is empty;
    // the front of the queue is then on top of s2.
    int deQueue()
    {
        if (s1.empty() && s2.empty()) {
            cout << "Q is empty";
            exit(0);
        }
        if (s2.empty()) {
            while (!s1.empty()) {
                s2.push(s1.top());
                s1.pop();
            }
        }
        int x = s2.top();
        s2.pop();
        return x;
    }
};
int main()
{
    Queue q;
    q.enQueue(1);
    q.enQueue(2);
    q.enQueue(3);
    cout << q.deQueue() << '\n';
    cout << q.deQueue() << '\n';
    cout << q.deQueue() << '\n';
    return 0;
}
Output:
1 2 3 
Q. Which type of traversal will traverse the binary tree in the figure for Question 6(b) as: BUETCSE12? Write an algorithm for such traversal of a binary tree using a Stack or Queue. [BUET DS -2014]
 

Solution:- Preorder traversal is used here.
Pre-order Traversal
In this traversal method, the root node is visited first, then the left subtree and finally the right subtree.
 

We start from A, and following pre-order traversal, we first visit A itself and then move to its left subtree B. B is also traversed pre-order. The process goes on until all the nodes are visited. The output of pre-order traversal of this tree will be −
A → B → D → E → C → F → G
Algorithm
Until all nodes are traversed −
Step 1 − Visit root node.
Step 2 − Recursively traverse left subtree.
Step 3 − Recursively traverse right subtree.
Below is code for an iterative pre-order traversal of a binary search tree using a stack. Here t.t is the value of node t, and s.peek() is assumed to return null when the stack is empty.
static void preorder(TreeNode t)
{
    MyConcurrentStack<TreeNode> s = new MyConcurrentStack<TreeNode>();
    while (s.peek() != null || t != null)
    {
        if (t != null)
        {
            System.out.print(t.t + ":");   // visit the node first
            s.push(t);                     // remember it for its right subtree
            t = t.left;                    // then descend to the left
        }
        else
        {
            t = s.pop();                   // left side exhausted:
            t = t.right;                   // continue with the right subtree
        }
    }
}
Q. Give a comparison of adjacency matrix and adjacency list representation of a graph [BUET DS -2014]
Adjacency Matrix:
Adjacency Matrix is a 2D array of size V x V where V is the number of vertices in a graph. Let the 2D array be adj[][], a slot adj[i][j] = 1 indicates that there is an edge from vertex i to vertex j. Adjacency matrix for undirected graph is always symmetric. Adjacency Matrix is also used to represent weighted graphs. If adj[i][j] = w, then there is an edge from vertex i to vertex j with weight w.
 

Adjacency List:
An array of lists is used. Size of the array is equal to the number of vertices. Let the array be array[]. An entry array[i] represents the list of vertices adjacent to the ith vertex. This representation can also be used to represent a weighted graph. The weights of edges can be represented as lists of pairs. Following is adjacency list representation of the above graph.
 

Let's assume we have a graph with n nodes and m edges.
Example graph
 

Adjacency Matrix: We are creating a matrix that has n rows and n columns, so in memory it will take space proportional to n². Checking whether two nodes named u and v have an edge between them takes Θ(1) time. For example, checking whether (1, 2) is an edge looks as follows in the code:
if (matrix[1][2] == 1)
If you want to identify all edges, you have to iterate over the matrix; this requires two nested loops and takes Θ(n²). (You may use just the upper triangular part of the matrix to determine all edges, but it will again be Θ(n²).)
Adjacency List: We are creating a list in which each element also points to another list. The outer list has n elements, and each element points to a list whose length equals the number of neighbours of that node (see the image for better visualisation). So it takes space in memory proportional to n + m. Checking whether (u, v) is an edge takes O(deg(u)) time, where deg(u) is the number of neighbours of u, because at worst you have to iterate over the list pointed to by u. Identifying all edges takes Θ(n + m).
Adjacency list of example graph
 

You should make your choice according to your needs.
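As a concrete illustration of the list representation, here is a minimal C++ sketch (identifiers are illustrative, not from the text) that stores a weighted undirected graph as a vector of (neighbour, weight) lists:

#include <bits/stdc++.h>
using namespace std;
int main() {
    int n = 4;                                  // number of vertices
    vector<vector<pair<int,int>>> adj(n + 1);   // adj[u] = list of (v, weight)
    auto addEdge = [&](int u, int v, int w) {   // undirected: add both directions
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    };
    addEdge(1, 2, 10); addEdge(2, 3, 3); addEdge(1, 4, 20);
    for (auto [v, w] : adj[1])                  // iterate neighbours of vertex 1
        cout << "1 - " << v << " (weight " << w << ")\n";
}

Space is proportional to n + m, as discussed above, and listing the neighbours of u costs O(deg(u)).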
Q. Write pseudocodes for enqueue and dequeue operations for the array implementations of a circular queue [BUET DS -2014]
Write two functions for the insertion and deletion operations in a circular linked list [BUET DS 2014-1]
enQueue(value) - Inserting value into the Circular Queue
In a circular queue, enQueue() is a function which is used to insert an element into the circular queue. In a circular queue, the new element is always inserted at rear position. The enQueue() function takes one integer value as parameter and inserts that value into the circular queue. We can use the following steps to insert an element into the circular queue...
 


void enQueue(int element)
{
    if(isFull()) printf("\n Queue is full!! \n");
    else
    {
        if(front == -1) front = 0;
        rear = (rear + 1) % SIZE;
        items[rear] = element;
        printf("\n Inserted -> %d", element);
    }
}
deQueue() - Deleting a value from the Circular Queue
In a circular queue, deQueue() is a function used to delete an element from the circular queue. In a circular queue, the element is always deleted from front position. The deQueue() function doesn't take any value as parameter. We can use the following steps to delete an element from the circular queue...
int deQueue()
{
    int element;
    if(isEmpty()) {
        printf("\n Queue is empty !! \n");
        return(-1);
    } else {
        element = items[front];
        if (front == rear){
            front = -1;
            rear = -1;
        } /* Q has only one element, so we reset the queue after dequeuing it. */
        else {
            front = (front + 1) % SIZE;      
        }
        printf("\n Deleted element -> %d \n", element);
        return(element);
}
}
Q. Suppose we create a binary search tree by inserting the following values in the given order: 50, 10, 13, 45, 55, 110, 5, 31, 64, and 47. Answer the following questions: [BUET DS -2014]
(1)Draw the binary search tree.
(2) Show the output values if we visit the tree using pre-order traversal technique.
(3) Show the output values if we visit the tree using post-order traversal technique.
(4) Show the resulting trees after we delete 47, 110, and 50. (Each deletion is
applied on the original tree.)
(1)
 

(2) visit the tree using pre-order traversal(Root, left, Right) 50, 10, 5, 13, 45, 31, 47, 55, 110, 64
(3) visit the tree using post-order traversal (left, Right, Root) 5, 31, 47, 45, 13, 10, 64, 110, 55, 50
(4) Show the resulting trees after we delete 47, 110, and 50.
After delete 47
 

After delete 110
 


After delete 50
 




 
If u=14 and E=11 T=13 and so on then solution would be (1)
 

Insert (36,B) 
 

 


Solution of (b) Preorder:  BUETCSE12
Inorder: BUETCSE12


 

Write an algorithm for building a heap. Show that an n-e1ement heap can be built using O(n) time. [BUET DS -2016]
What is Heap Data Structure ?
Heap is a special tree-based data structure. A binary tree is said to follow a heap data structure if
it is a complete binary tree
All nodes in the tree follow the property that they are greater than their children, i.e., the largest element is at the root and both its children are smaller than the root, and so on. Such a heap is called a max-heap. If instead all nodes are smaller than their children, it is called a min-heap.
Following example diagram shows Max-Heap and Min-Heap.
How to “heapify” a tree
Starting from a complete binary tree, we can modify it to become a Max-Heap by running a function called heapify on all the non-leaf elements of the heap.
Since heapify uses recursion, it can be difficult to grasp, so let's first think about how you would heapify a tree with just three elements.
heapify(array)
    Root = array[0]
    Largest = largest( array[0], array[2*0 + 1], array[2*0 + 2] )
    if (Root != Largest)
        Swap(Root, Largest)
 
The example above shows two scenarios - one in which the root is the largest element and we don’t need to do anything. And another in which root had larger element as a child and we needed to swap to maintain max-heap property.
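Put in code, the heapify step for an array-based, 0-indexed max-heap looks as follows; this is a minimal C++ sketch of the idea described above, not the text's own code:

#include <algorithm>
void heapify(int arr[], int n, int i) {
    int largest = i;                         // assume the root is the largest
    int l = 2 * i + 1, r = 2 * i + 2;        // children of node i
    if (l < n && arr[l] > arr[largest]) largest = l;
    if (r < n && arr[r] > arr[largest]) largest = r;
    if (largest != i) {                      // a child was larger:
        std::swap(arr[i], arr[largest]);     // swap, then repair the subtree below
        heapify(arr, n, largest);
    }
}
void buildMaxHeap(int arr[], int n) {        // run heapify on all non-leaf nodes,
    for (int i = n / 2 - 1; i >= 0; i--)     // from the last internal node up to the root
        heapify(arr, n, i);
}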

What are the main operations of a priority queue? How could we implement these operations using a heap? [BUET DS -2016]
A Priority Queue is a more specialized data structure than a Queue. Like an ordinary queue, a priority queue has the same methods but with a major difference: in a priority queue, items are ordered by key value, so that the item with the lowest key is at the front and the item with the highest key is at the rear, or vice versa. So we assign priority to an item based on its key value: the lower the value, the higher the priority. Following are the principal methods of a Priority Queue.
Basic Operations
insert / enqueue − add an item to the rear of the queue.
remove / dequeue − remove an item from the front of the queue.
Priority Queue Representation
 

We're going to implement the priority queue using an array in this article. A few more operations are supported by the queue:
Peek − get the element at front of the queue.
isFull − check if queue is full.
isEmpty − check if queue is empty.
Insert / Enqueue Operation
Whenever an element is inserted into queue, priority queue inserts the item according to its order. Here we're assuming that data with high value has low priority.
 

void insert(int data) {
    int i = 0;
    if (!isFull()) {
        // if queue is empty, insert the data
        if (itemCount == 0) {
            intArray[itemCount++] = data;
        } else {
            // start from the right end of the queue
            for (i = itemCount - 1; i >= 0; i--) {
                // if data is larger, shift existing item to right end
                if (data > intArray[i]) {
                    intArray[i + 1] = intArray[i];
                } else {
                    break;
                }
            }
            // insert the data at the freed position
            intArray[i + 1] = data;
            itemCount++;
        }
    }
}
Priority Queue is an extension of queue with following properties.
1) Every item has a priority associated with it.
2) An element with high priority is dequeued before an element with low priority.
3) If two elements have the same priority, they are served according to their order in the queue.
A typical priority queue supports following operations.
insert(item, priority): Inserts an item with given priority.
getHighestPriority(): Returns the highest priority item.
deleteHighestPriority(): Removes the highest priority item.
How to implement priority queue?
Using Array: A simple implementation is to use array of following structure.
struct item {
    int item;
    int priority;
};
Applications of Priority Queue:
1) CPU Scheduling
2) Graph algorithms like Dijkstra’s shortest path algorithm, Prim’s Minimum Spanning Tree, etc
3) All queue applications where priority is involved.
A priority queue is implemented using Heap. Please refer below articles for our own implementation and library implementations.
Binary Heap (The most common implementation of priority queue)
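In C++ the standard library already provides this heap-backed priority queue as std::priority_queue; a minimal usage sketch:

#include <queue>
#include <vector>
#include <functional>
#include <iostream>
int main() {
    std::priority_queue<int> pq;              // max-heap by default
    for (int x : {3, 10, 5}) pq.push(x);      // insert / enqueue: O(log n) each
    std::cout << pq.top() << '\n';            // getHighestPriority(): prints 10
    pq.pop();                                 // deleteHighestPriority(): O(log n)
    std::priority_queue<int, std::vector<int>,
                        std::greater<int>> minpq;   // min-heap variant
    minpq.push(7);
    std::cout << minpq.top() << '\n';         // prints 7
}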

Draw the skip-list for the following table that shows the keys and consecutive numbers of the heads found in toss while inserting the keys in the skip-list. [BUET DS -2016]
 

AVL Tree
What if the input to binary search tree comes in a sorted (ascending or descending) manner? It will then look like this −
 

It is observed that BST's worst-case performance is closest to linear search algorithms, that is Ο(n). In real-time data, we cannot predict data pattern and their frequencies. So, a need arises to balance out the existing BST.
Named after their inventors Adelson-Velskii and Landis, AVL trees are height-balanced binary search trees. An AVL tree checks the height of the left and the right sub-trees and assures that the difference is not more than 1. This difference is called the Balance Factor.
Here we see that the first tree is balanced and the next two trees are not balanced −
 

In the second tree, the left subtree of C has height 2 and the right subtree has height 0, so the difference is 2. In the third tree, the right subtree of A has height 2 and the left is missing, so it is 0, and the difference is 2 again. AVL tree permits difference (balance factor) to be only 1.
BalanceFactor = height(left-sub tree) − height(right-sub tree)
If the difference in the height of left and right sub-trees is more than 1, the tree is balanced using some rotation techniques.
AVL Rotations
To balance itself, an AVL tree may perform the following four kinds of rotations −
Left rotation
Right rotation
Left-Right rotation
Right-Left rotation
The first two rotations are single rotations and the next two rotations are double rotations. To have an unbalanced tree, we at least need a tree of height 2. With this simple tree, let's understand them one by one.

Explain, with examples, the single rotation and double rotation operations on AVL trees [BUET DS- 2016]
Left Rotation
If a tree becomes unbalanced, when a node is inserted into the right subtree of the right subtree, then we perform a single left rotation −

 
In our example, node A has become unbalanced as a node is inserted in the right subtree of A's right subtree. We perform the left rotation by making A the left-subtree of B.
Right Rotation
AVL tree may become unbalanced, if a node is inserted in the left subtree of the left subtree. The tree then needs a right rotation.
 

As depicted, the unbalanced node becomes the right child of its left child by performing a right rotation.
Left-Right Rotation
Double rotations are slightly more complex versions of the single rotations already explained. To understand them better, we should take note of each action performed during rotation. Let's first check how to perform a Left-Right rotation. A left-right rotation is a combination of a left rotation followed by a right rotation.

State

Action



A node has been inserted into the right subtree of the left subtree. This makes C an unbalanced node. This scenario causes the AVL tree to perform a left-right rotation.



We first perform the left rotation on the left subtree of C. This makes A, the left subtree of B.



Node C is still unbalanced, however now, it is because of the left-subtree of the left-subtree.



We shall now right-rotate the tree, making B the new root node of this subtree. C now becomes the right subtree of its own left subtree.



The tree is now balanced.


Right-Left Rotation
The second type of double rotation is Right-Left Rotation. It is a combination of right rotation followed by left rotation.

State

Action



A node has been inserted into the left subtree of the right subtree. This makes A, an unbalanced node with balance factor 2.



First, we perform the right rotation along the C node, making C the right subtree of its own left subtree B. Now, B becomes the right subtree of A.



Node A is still unbalanced because of the right subtree of its right subtree and requires a left rotation.



A left rotation is performed by making B the new root node of the subtree. A becomes the left subtree of its right subtree B.



The tree is now balanced.
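In code, the two single rotations are just a few pointer swaps, and each double rotation is a composition of them. A minimal C++ sketch (the node layout and helper names are assumptions for illustration):

#include <algorithm>
struct Node { int key, height; Node *left, *right; };
int h(Node* n) { return n ? n->height : 0; }
void update(Node* n) { n->height = 1 + std::max(h(n->left), h(n->right)); }

Node* rotateRight(Node* y) {        // used for the left-left case
    Node* x = y->left;
    y->left = x->right;             // x's right subtree moves under y
    x->right = y;                   // y becomes x's right child
    update(y); update(x);
    return x;                       // x is the new root of this subtree
}
Node* rotateLeft(Node* x) {         // used for the right-right case (mirror image)
    Node* y = x->right;
    x->right = y->left;
    y->left = x;
    update(x); update(y);
    return y;
}
// Left-Right:  node->left  = rotateLeft(node->left);   node = rotateRight(node);
// Right-Left:  node->right = rotateRight(node->right); node = rotateLeft(node);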


What is an AVL tree? Draw the AVL tree that results after successively inserting the keys 11,4, 15,25,36,47,61,53,75 into an initially empty AVL tree. [BUET Academic Data Structure -2016-1]
 

What is a Multiway Search tree? How do we search a key in a Multiway Search tree? [BUET DS -2016]
A multiway tree is a tree that can have more than two children. A multiway tree of order m (or an m-way tree) is one in which a node can have at most m children, like a B-tree or B+ tree.
As with the other trees that have been studied, the nodes in an m-way tree will be made up of key fields, in this case m-1 key fields, and pointers to children.
multiway tree of order 5
 

To make the processing of m-way trees easier some type of order will be imposed on the keys within each node, resulting in a multiway search tree of order m ( or an m-way search tree). By definition an m-way search tree is a m-way tree in which:
Each node has at most m children and m-1 key fields
The keys in each node are in ascending order.
The keys in the first i children are smaller than the ith key
The keys in the last m-i children are larger than the ith key
4-way search tree
 

M-way search trees give the same advantages to m-way trees that binary search trees gave to binary trees - they provide fast information retrieval and update. However, they also have the same problems that binary search trees had - they can become unbalanced, which means that the construction of the tree becomes of vital importance.
What are Multi-way search trees
Multi-way search trees generalize binary search tree by allowing more than one key to be stored in a given node
      There will always be one more pointer than keys
    So if a node has two keys, it has three pointers, and possibly three children
If a tree has more than two children, you can decrease the height of the tree with the same number of nodes
The keys in a node are maintained in order
What makes a good hash function? Explain the three commonly used techniques to compute the probe sequences required for open addressing in a hash table. [BUET DS -2016]
A good hash function is cheap to compute and distributes the keys uniformly (and apparently randomly) over the table, so that clustering is minimised. The three commonly used techniques to compute probe sequences for open addressing are:
1) Linear probing: h(k, i) = (h'(k) + i) mod m - probe consecutive slots.
2) Quadratic probing: h(k, i) = (h'(k) + c1*i + c2*i^2) mod m - the probe step grows quadratically, which reduces primary clustering.
3) Double hashing: h(k, i) = (h1(k) + i*h2(k)) mod m - a second hash function determines the step, which reduces clustering further.

Draw the binary search tree that results after successively inserting the keys 15, 8, 24, 12, 40, 18, 20, 23 into an initially empty binary search tree. Then, by using necessary rotations, redraw the binary search tree so that it becomes an AVL tree.  [BUET DS -2016]
Solution:-  After constructing binary search Tree
 

But it is not a balanced binary tree: at the root the balance factor = 2 - 4 = -2, so rotations are needed to turn it into an AVL tree.
 

Write the properties of a red-black tree. Show that the height of a red-black tree T storing n items is O(log n) . [BUET DS -2016]
Red-Black Tree is a self-balancing Binary Search Tree (BST) where every node follows following rules or properties.
1) Every node has a color either red or black.
2) Root of tree is always black.
3) There are no two adjacent red nodes (A red node cannot have a red parent or red child- If node is red then its both children must be black or If a node is red, then its parent must be black.).
4) Every path from root to a NULL node has same number of black nodes.
Insertion :-
Insert(New node must be Red)
Arrange color according to RBT properties 
Root is always black 
Every leaf which is NIL is black
If node is red then its both children must be black or If a node is red, then its parent must be black. 
Rotation 
If New node is Root node
If parent of New Node is Red 
Recolor and Rotation 
 
 












 

Explain, with examples, the four cases that may arise for deleting a node from a red- black tree[BUET DS -2016]
Deletion Steps
Following are detailed steps for deletion.
1) Perform standard BST delete. When we perform standard delete operation in BST, we always end up deleting a node which is either leaf or has only one child (For an internal node, we copy the successor and then recursively call delete for successor, successor is always a leaf node or a node with one child). So we only need to handle cases where a node is leaf or has one child. Let v be the node to be deleted and u be the child that replaces v (Note that u is NULL when v is a leaf and color of NULL is considered as Black).
2) Simple Case: If either u or v is red, we mark the replaced child as black (No change in black height). Note that both u and v cannot be red as v is parent of u and two consecutive reds are not allowed in red-black tree.
 

3) If Both u and v are Black.
3.1) Color u as double black.  Now our task reduces to convert this double black to single black. Note that If v is leaf, then u is NULL and color of NULL is considered as black. So the deletion of a black leaf also causes a double black.
 

3.2) Do following while the current node u is double black and it is not root. Let sibling of node be s.
….(a): If sibling s is black and at least one of sibling’s children is red, perform rotation(s). Let the red child of s be r. This case can be divided in four sub-cases depending upon positions of s and r.
…………..(i) Left Left Case (s is left child of its parent and r is left child of s or both children of s are red). This is mirror of right right case shown in below diagram.
…………..(ii) Left Right Case (s is left child of its parent and r is right child). This is mirror of right left case shown in below diagram.
…………..(iii) Right Right Case (s is right child of its parent and r is right child of s or both children of s are red)
 

…………..(iv) Right Left Case (s is right child of its parent and r is left child of s)


 
…..(b): If sibling is black and its both children are black, perform re-coloring, and recur for the parent if parent is black.
 

In this case, if the parent was red, then we don't need to recur for the parent; we can simply make it black (red + double black = single black).
…..(c): If sibling is red, perform a rotation to move old sibling up, recolor the old sibling and parent. The new sibling is always black (See the below diagram). This mainly converts the tree to black sibling case (by rotation) and  leads to case (a) or (b). This case can be divided in two subcases.
…………..(i) Left Case (s is left child of its parent). This is mirror of right right case shown in below diagram. We right rotate the parent p.
…………..(iii) Right Case (s is right child of its parent). We left rotate the parent p.
 

3.3) If u is root, make it single black and return (Black height of complete tree reduces by 1).
Q. What is a stable sorting? Write a sorting algorithm that is stable. Analyze the time- complexity of the algorithm.  [BUET DS -2016-1]
A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in the sorted output as they appear in the input unsorted array. Some sorting algorithms are stable by nature, like Insertion Sort, Merge Sort, Bubble Sort, etc.
A stable sorting algorithm sorts identical elements in the same order as they appear in the input, whilst an unstable sort may not satisfy this.
Stable Sorting Algorithms:
Insertion Sort
Merge Sort
Bubble Sort
Tim Sort
Counting Sort
Unstable Sorting Algorithms:
Heap Sort
Selection sort
Shell sort
Quick Sort
 


Can we make any sorting algorithm stable?
Any given sorting algorithm which is not stable can be modified to be stable. There can be algorithm-specific ways to make it stable, but in general, any comparison-based sorting algorithm which is not stable by nature can be modified to be stable by changing the key comparison operation so that the comparison of two keys considers position as a factor for objects with equal keys, as the sketch below shows.
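For instance, pairing every element with its original index and letting ties fall back on that index makes even an unstable sort produce a stable result. A small C++ sketch (illustrative, not from the text):

#include <bits/stdc++.h>
using namespace std;
int main() {
    // (key, original position); the two 5s differ only in position
    vector<pair<int,int>> a = {{5,0}, {3,1}, {5,2}, {1,3}};
    // std::sort is not guaranteed stable, but the default pair comparison
    // breaks key ties by position, so equal keys keep their input order.
    sort(a.begin(), a.end());
    for (auto [k, i] : a) cout << k << "(pos " << i << ") ";
    // prints: 1(pos 3) 3(pos 1) 5(pos 0) 5(pos 2)
}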



Q Write the Insertion Sort algorithm Analyze the worst case and best case time- complexities of the algorithm. [BUET DS -2016-1]
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort.
void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++)
    {
        key = arr[i];              // element to place into the sorted prefix
        j = i - 1;
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];   // shift larger elements one step right
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}
Worst case (input in reverse order): every element is compared with all earlier elements, giving O(n^2) time. Best case (input already sorted): the inner while loop never runs, giving O(n) time.

Write a function to find the largest element in a min binary heap.   Assume that the array representing a min binary heap contains the values 2, 8, 3, 10, 16,7,18, 13, 15. Show the contents of the array after deleting the minimum element. [BUET DS -2016-1]
int FindMaxInMinHeap(struct Heap *h) {
    int Max = -1;
    /* In a min heap the maximum must be a leaf; leaves occupy
       indices count/2 .. count-1 of the array (0-indexed). */
    for (int i = h->count / 2; i < h->count; i++)
        if (h->array[i] > Max)
            Max = h->array[i];
    return Max;
}

This gives time complexity O(n/2) = O(n).
After inserting the values 2, 8, 3, 10, 16, 7, 18, 13, 15 into a min heap, the array is: 2, 8, 3, 10, 16, 7, 18, 13, 15 (no swaps are needed for this insertion order).

 
After deleting the smallest element (2): the last element, 15, replaces the root and is sifted down, giving the array 3, 8, 7, 10, 16, 15, 18, 13.
 


What is an In-place sorting? Write an In-place sorting algorithm. Analyze the worst case and best case time-complexities of the algorithm. [BUET DS -2016-1]
In-place sorting means sorting without any extra space requirement. According to Wikipedia, an in-place algorithm is an algorithm which transforms input using a data structure with a small, constant amount of extra storage space. Quicksort is one example of in-place sorting.
What is in-place sorting?
An in-place sorting algorithm uses constant extra space for producing the output (modifies the given array only). It sorts the list only by modifying the order of the elements within the list.
For example, Insertion Sort and Selection Sorts are in-place sorting algorithms as they do not use any additional space for sorting the list and a typical implementation of Merge Sort is not in-place, also the implementation for counting sort is not in-place sorting algorithm.
In-Place Algorithm
In-place has more than one definition. One strict definition is:
An in-place algorithm is an algorithm that does not need an extra space and produces an output in the same memory that contains the data by transforming the input ‘in-place’. However, a small constant extra space used for variables is allowed.

Which Sorting Algorithms are In-Place and which are not?
In Place : Bubble sort, Selection Sort, Insertion Sort, Heapsort.
Not In-Place : Merge Sort. Note that merge sort requires O(n) extra space.
What about QuickSort? Why is it called In-Place?
QuickSort uses extra space for recursive function calls. It is called in-place according to broad definition as extra space required is not used to manipulate input, but only for recursive calls.

Write two procedures for finding the successor element and the predecessor element in a binary-search tree. [BUET DS -2016-1]
Finding the Successor of a Node
We find the successor node by performing the following steps:
1. We need to find the next highest key so we will move right from the node we are going to delete.
2. We check the left child of the current node. If there is no left child, this node is the successor.
3. If there is a left child, we move down into it.
4. We continue steps 2 and 3 until we find the successor node.
The following diagram shows these steps in action:
 

With Binary Search Tree, the algorithm to find the next highest node of a given node is basically finding the lowest node of the right sub-tree of that node.
The algorithm can just be simply:
1. Start with the right child of the given node (make it the temporary current node)
2. If the current node has no left child, it is the next highest node.
3. If the current node has a left child, make it the current node.
Repeat 2 and 3 until we find next highest node.



Case 1: If x has a right child, then its inorder successor will be the leftmost element in the right subtree of x.
 


Case 2: If x doesn't have a right child, then its inorder successor will be one of its ancestors. Use the binary search technique to find the node x: start from the root; if the root is bigger than x, go left; if the root is less than x, go right. While travelling, whenever you go left, store the node and call it the successor. A sketch combining both cases follows.
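A minimal C++ sketch (node type assumed; the predecessor procedure is the mirror image, with left and right swapped):

struct Node { int key; Node *left, *right; };

Node* successor(Node* root, Node* x) {
    if (x->right) {                        // Case 1: leftmost node of the right subtree
        Node* cur = x->right;
        while (cur->left) cur = cur->left;
        return cur;
    }
    Node* succ = nullptr;                  // Case 2: search from the root and
    while (root) {                         // remember the last node where we went left
        if (x->key < root->key) { succ = root; root = root->left; }
        else if (x->key > root->key) root = root->right;
        else break;
    }
    return succ;                           // nullptr if x holds the largest key
}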
 




Prove that a min binary heap can be built in linear-time. [BUET DS -2016-1]
Time Complexity of building a heap
Consider the following algorithm for building a Heap of an input array A.
BUILD-HEAP(A) 
    heapsize := size(A); 
    for i := floor(heapsize/2) downto 1 
        do HEAPIFY(A, i); 
    end for 
END
A quick look over the above algorithm suggests that the running time is O(n log n), since each call to Heapify costs O(log n) and Build-Heap makes O(n) such calls.
This upper bound, though correct, is not asymptotically tight.
We can derive a tighter bound by observing that the running time of Heapify depends on the height h of the node being fixed (the full tree has height lg n, where n is the number of nodes) and that the heights of most sub-trees are small.
The height h increases as we move upwards along the tree. Line 3 of Build-Heap runs a loop from the index of the last internal node (heapsize/2), which has height 1, to the index of the root (1), which has height lg(n). Hence, Heapify takes a different time for each node, namely O(h).
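Summing this cost level by level gives the linear bound: a heap of n nodes has at most ceil(n / 2^(h+1)) nodes of height h, so

    T(n) = sum over h = 0 .. floor(lg n) of ceil(n / 2^(h+1)) * O(h)
         = O( n * sum over h >= 0 of h / 2^h )
         = O(2n) = O(n),

since the series sum of h/2^h converges to 2. Hence BUILD-HEAP runs in O(n) time.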

By writing a procedure, explain the three cases that may arise for deleting a node from a binary search tree. [BUET DS -2016-1]
There are 3 cases that need to be considered while deleting a node from Binary Search Tree.
 1. Node to delete has no children that is no left child and no right child present. 
 Case 1 in below image. 
 2. Node to delete has only one child either left child or right child present. Case 2 in below image.
 3. Node to delete has both child that is left child and right child present. Case 3 in below image.
 


Case 1:
 For case 1, it is very much straightforward, 
 1. Search for the node that need to be deleted.
 2. Node to be deleted is either on left side or on right side of its parent node.
 3. If node to be deleted is on left side of its parent node then mark Left side of parent node to null.
 Eg: parent.setLeft(null);
 4. If node to be deleted is on right side of its parent node then mark Right side of parent node 
 to null. Eg: parent.setRight(null);
 


Case 2:
 For case 2, it is very much straightforward, 
 1. Search for the node that need to be deleted.
 2. Node to be deleted is either on left side or on right side of its parent node.
 3. Node to be deleted has only child present that can be on left side or right side.
 4. If node to be deleted has left child present, 
 then directly connect its parent node to left child node of node to be deleted. 
 Eg: parent.setLeft(child.getLeft());
 5. If node to be deleted has right child present, 
 then directly connect its parent node to right child node of node to be deleted. 
 Eg: parent.setRight(child.getRight());
 6. In this way node to be deleted will be removed from the Tree and parent node 
 will be directly connected to child node of node to be deleted.
 

Case 3:
 For case 3, it is little bit tricky
 1. Search for the node that need to be deleted. 
 2. Now, instead of deleting the node, we will replace the data of node to be deleted 
 with the data that will be best fit in that place.
 
 3. Which data will be best fit? Search the data from the sub tree of node to be deleted and 
 find the node whose data when placed at the place of node to be deleted will 
 keep the Binary Search Tree property(key in each node must be greater 
 than all keys stored in the left sub-tree, and smaller than all keys in right sub-tree) intact.
 
 4. Find minimum element from the right sub tree of node to be deleted or 
 find maximum element from the left sub tree of node to be deleted and
 copy that Node data to the data of node to be deleted.
 5. Delete the node which we found as the best fit in step 4.
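A compact C++ procedure covering all three cases (a sketch with an assumed node type, not code from the source):

struct Node { int key; Node *left, *right; };

Node* deleteNode(Node* root, int key) {
    if (!root) return nullptr;
    if (key < root->key)       root->left  = deleteNode(root->left, key);
    else if (key > root->key)  root->right = deleteNode(root->right, key);
    else {
        if (!root->left) {                     // Cases 1 and 2: zero or one child
            Node* r = root->right; delete root; return r;
        }
        if (!root->right) {
            Node* l = root->left; delete root; return l;
        }
        Node* s = root->right;                 // Case 3: find the inorder successor
        while (s->left) s = s->left;           // (minimum of the right subtree),
        root->key = s->key;                    // copy its data here,
        root->right = deleteNode(root->right, s->key);   // then delete it below
    }
    return root;
}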
 


Show the adjacency matrix and adjacency list for the graph in Figure , Calculate the number of bytes required in both representations. Assume that the storage requirement of a pointer and an integer is 4 bytes.
 

Solution:-

Adjacency matrix (rows/columns are vertices 1-6; an entry is the edge weight, 0 means no edge):

Col/Row    1    2    3    4    5    6
   1       0   10    0   20    0    2
   2      10    0    3    5    0    0
   3       0    3    0    0   15    0
   4      20    5    0    0   11   10
   5       0    0   15   11    0    3
   6       2    0    0   10    3    0

Storage: the adjacency matrix stores 6 x 6 integers = 36 x 4 = 144 bytes. For the adjacency list, the graph has 9 undirected edges, so 18 list nodes; assuming each list node holds a vertex number, a weight and a next pointer (4 + 4 + 4 = 12 bytes), the lists take 18 x 12 = 216 bytes, plus 6 head pointers = 24 bytes, giving 240 bytes in total.


Your group has to implement an array based stack. You have proposed that the beginning of the array would be the top of the stack. Your friend is claiming that this is inefficient. Do you agree with your friend? Justify your answer. [BUET DS -2016-1]

Yes, I agree with my friend. If the beginning of the array is the top of the stack, then every push and pop must shift all the existing elements by one position, so both operations cost O(n). If instead the top of the stack is kept at the end of the used portion of the array (tracked by a top index, initialised to -1 when the stack is empty), push and pop touch only the last element and run in O(1). That is why the proposed design is inefficient.

Discuss the Drifting Queue Problem with the help of an illustrative example. Propose a solution to the problem. Will there be any problem in recognizing when the queue is empty or full in your solution? If yes, then solve the problem as well. [BUET DS -2016-1]

The “drifting queue” problem can be solved by pretending that the array is circular and so allow the queue to continue directly from the highest-numbered position in the array to the lowest-numbered position. This is easily implemented through use of the modulus operator (denoted by % in C++). In this way, positions in the array are numbered from 0 through size−1, and position size−1 is defined to immediately precede position 0 (which is equivalent to position size % size). 
There remains one more serious, though subtle, problem with the array-based queue implementation. How can we recognize when the queue is empty or full? Assume that front stores the array index for the front element in the queue, and rear stores the array index for the rear element. If both front and rear have the same position, then with this scheme there must be one element in the queue. Thus, an empty queue would be recognized by having rear be one less than front (taking into account the fact that the queue is circular, so position size-1 is actually considered to be one less than position 0). But what if the queue is completely full? In other words, what is the situation when a queue with n array positions available contains n elements? In this case, if the front element is in position 0, then the rear element is in position size-1. But this means that the value for rear is one less than the value for front when the circular nature of the queue is taken into account. In other words, the full queue is indistinguishable from the empty queue! A standard remedy is either to keep a separate count of the elements, or to declare the queue full when it contains size-1 elements, deliberately leaving one slot unused.

How can you search a number in O(1) average complexity? 
If you use C++, the alternative is std::unordered_map: it is internally implemented as a hash table and hence gives you O(1) search time in the average case; a short sketch follows below.
If your data points are from a set of discrete values (such as integers from 1 to 1,000,000), you can keep an array of the count of each possible value. This is simpler than a hash table and doesn't need rehashing or collision detection, but uses more memory.
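A minimal sketch of the std::unordered_map approach:

#include <bits/stdc++.h>
using namespace std;
int main() {
    vector<int> data = {42, 7, 19, 7};
    unordered_map<int,int> count;        // value -> number of occurrences
    for (int x : data) count[x]++;       // O(1) average per insertion
    cout << (count.count(19) ? "found" : "not found") << '\n';   // O(1) average lookup
}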
Build a binary search tree where have 11 node, and give the algorithm for BST.[EGCB-2018]
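A minimal hedged C++ sketch of the BST insertion algorithm (the tree is built by inserting the 11 keys one at a time; the node type is illustrative):

struct Node {
    int key;
    Node *left = nullptr, *right = nullptr;
    Node(int k) : key(k) {}
};

Node* insert(Node* root, int key) {
    if (!root) return new Node(key);     // empty spot found: place the key here
    if (key < root->key) root->left = insert(root->left, key);
    else                 root->right = insert(root->right, key);
    return root;
}
// Building: Node* root = nullptr; then call root = insert(root, k) for each of the 11 keys.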

Explain the divide-and-conquer technique? What is the function of the pivot element in the quick sort algorithm? [BUET DS 2014-1]

Divide-and-conquer solves a problem by dividing it into smaller subproblems of the same kind, solving those recursively, and combining their solutions. The quicksort algorithm is a sorting algorithm that sorts a collection by choosing a pivot point and partitioning the collection around the pivot, so that elements smaller than the pivot come before it and elements larger than the pivot come after it.
One can think of the pivot as the central point of the array, below which the elements are smaller than the pivot and above which the elements are bigger than the pivot. The essence of quicksort lies in the fact that the pivot ends up in its rightful position; we can then recursively sort the left and right parts of the array (the two sub-arrays) to get the desired result.
 
What is a tree? Write a linear-time algorithm for finding the height of a tree. Analyze the time-complexity of the algorithm. [BUET DS 2014-1]
Construct the binary tree whose in-order and post-order traversals are given as:
Inorder :  i f d j g c a b k h m e
Post-order : i j g d f k m h e b a c
 


Sort the following dataset using "quicksort" in descending order when pivot element is the first element.  [BUET DS 2014-1]
D = {6, 0, 2, 0, 1, 3, 6, 1} (show the steps)
How Quick Sorting Works?
Following are the steps involved in quick sort algorithm:
1. After selecting an element as pivot, which is the last index of the array in our case, we divide the array for the first time.
2. In quick sort, we call this partitioning. It is not simple breaking down of array into 2 subarrays, but in case of partitioning, the array elements are so positioned that all the elements smaller than the pivot will be on the left side of the pivot and all the elements greater than the pivot will be on the right side of it.
3. And the pivot element will be at its final sorted position.
4. The elements to the left and right, may not be sorted.
5. Then we pick subarrays, elements on the left of pivot and elements on the right of pivot, and we perform partitioning on them by choosing a pivot in the subarrays.
Let's consider an array with values {9, 7, 5, 11, 12, 2, 14, 3, 10, 6}
Below, we have a pictorial representation of how quick sort will sort the given array.
 
In step 1, we select the last element as the pivot, which is 6 in this case, and call for partitioning, hence re-arranging the array in such a way that 6 will be placed in its final position and to its left will be all the elements less than it and to its right, we will have all the elements greater than it.
Then we pick the subarray on the left and the subarray on the right and select a pivot for them, in the above diagram, we chose 3 as pivot for the left subarray and 11 as pivot for the right subarray. And we again call for partitioning.
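A minimal C++ quicksort matching this description (last element as pivot, ascending order; a sketch, not the text's code — for the BUET question above, take the first element as pivot and flip the comparison to sort in descending order):

#include <algorithm>
int partition(int a[], int lo, int hi) {
    int pivot = a[hi];                    // pivot = last element
    int i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot)                 // smaller values move to the left side
            std::swap(a[++i], a[j]);
    std::swap(a[i + 1], a[hi]);           // the pivot lands in its final sorted place
    return i + 1;
}
void quickSort(int a[], int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);          // recursively sort both sides of the pivot
        quickSort(a, p + 1, hi);
    }
}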
Write pseudo-code for finding a key, inserting a key and deleting a key in a hash table using quadratic probing. Explain your method with an example. [BUET DS 2014-1]
Search in hash table with quadratic probing
void search(string s)
{
    int base = hashFunc(s);
    int index = base;
    int h = 1;
    while (hashTable[index] != s && hashTable[index] != "")
    {
        // probe positions base+1, base+4, base+9, ... (mod table size)
        index = (base + h * h) % hashTableSize;
        h++;
    }
    if (hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}
Quadratic Probing algorithm  to insert key in the hash table

1. Retrieve key k
2. Compute hash function h[k] = k % size of the table
3. If hash table is empty at the computed hash value place
          then insert key at h[k]
    else
     we need to find another empty place in the hash table to insert the key in the hash table
        3.1    Use quadratic probing to compute the hash value of the key again   
        3.2    If   hash table place is empty then insert key at h[k] and exit
                 else
                 Repeat 3.1 step again   
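The insertion steps above translate to a few lines. A hedged C++ sketch, consistent with the search routine earlier (hashTable, hashTableSize and hashFunc as there; no table-full check, as step 3 assumes an empty slot exists):

void insert(string s)
{
    int base = hashFunc(s);
    int index = base;
    int h = 1;
    while (hashTable[index] != "")                // probe until an empty slot
    {
        index = (base + h * h) % hashTableSize;   // quadratic steps: +1, +4, +9, ...
        h++;
    }
    hashTable[index] = s;                         // place the key in the empty slot
}

Deletion with open addressing cannot simply clear a slot, because that would break later probe sequences; instead the slot is marked with a special 'deleted' (tombstone) value that search skips over and insert may reuse.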
 

Design an AVL tree with keys 5, 12, 3, -2, -5, -3, 1 show the steps.
 

What is the height of AVL tree ?
AVL trees are self-balancing binary search trees (another example is red-black trees). By definition, a self-balancing binary tree maintains the property that its height is Θ(log n), where n is the number of vertices.
Build 2-3-4 tree with the keys 12,5,6,10,15,8,13,17,11,14,4. Delete elements in the following sequence: 4,12,13 [BUET DS]
Properties of a 2-3-4 Tree
A 2-3-4 Tree has following properties.
1. Each node stores at most three values, sorted from smallest to greatest.
2. All leaf nodes are at the same level (perfectly balanced).
3. An internal (non-leaf) node can either have 2, 3 or 4 children. More precisely, nodes can be of the following three types.
o 2-Node: Node has two child pointers and 1 data element.
o 3-Node: Node has three child pointers and 2 data elements.
o 4-Node: Node has four child pointers and 3 data elements.
4. A leaf node can have 2, 3 or 4 items but no children. In other words, a leaf is 2-Node, 3-Node or 4-Node where all children are Null.

Greedy Approach
An algorithm is designed to achieve optimum solution for a given problem. In greedy algorithm approach, decisions are made from the given solution domain. As being greedy, the closest solution that seems to provide an optimum solution is chosen.
Greedy algorithms try to find a localized optimum solution, which may eventually lead to globally optimized solutions. However, generally greedy algorithms do not provide globally optimized solutions.
Examples
Most networking algorithms use the greedy approach. Here is a list of few of them −
Travelling Salesman Problem
Prim's Minimal Spanning Tree Algorithm
Kruskal's Minimal Spanning Tree Algorithm
Dijkstra's Shortest Path Algorithm
Graph - Map Coloring
Graph - Vertex Cover
Knapsack Problem
Job Scheduling Problem
There are lots of similar problems that use the greedy approach to find an optimum solution.
Minimum Spanning Tree (MST)
In a weighted graph, a minimum spanning tree is a spanning tree whose weight is minimum among all spanning trees of the same graph. In real-world situations, this weight can be measured as distance, congestion, traffic load or any arbitrary value denoted by the edges.
Minimum Spanning-Tree Algorithm
We shall learn about two most important spanning tree algorithms here −
Kruskal's Algorithm
Prim's Algorithm
Kruskal's Algorithm:
1. It is an algorithm in graph theory that finds a minimum spanning tree for a connected weighted graph.
2. Kruskal's is where we order the edges from smallest to largest weight and pick accordingly.
3. Kruskal's allows both new-new nodes and old-old nodes to get connected.
4. Kruskal's algorithm builds a minimum spanning tree by adding one edge at a time. The next edge added is always the shortest remaining edge, provided it does not create a cycle.
5. Kruskal's requires us to sort the edge weights first.
6. Time complexity of Kruskal's algorithm is O(E log V).
Algorithm for Kruskal's Algorithm:
 Step 1:  Choose the arc of least weight.
 Step 2:  Choose from those arcs remaining the arc of least weight which does not form a cycle with already chosen arcs. (If there are several such arcs, choose one arbitrarily.)
Step 3:  Repeat Step 2 until n - 1 arcs have been chosen.
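A compact C++ sketch of Kruskal's algorithm, using a union-find (disjoint set) structure to detect cycles (illustrative, not from the source):

#include <bits/stdc++.h>
using namespace std;
struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;          // same tree: the edge would form a cycle
        p[a] = b; return true;
    }
};
int kruskal(int n, vector<array<int,3>> edges) {   // each edge is {weight, u, v}
    sort(edges.begin(), edges.end());              // order the edges by weight first
    DSU dsu(n);
    int total = 0;
    for (auto [w, u, v] : edges)
        if (dsu.unite(u, v)) total += w;           // keep the edge if it joins two trees
    return total;                                  // total weight of the MST
}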





Prim's Algorithm:

1. It is the algorithm that finds a minimum spanning tree for a connected weighted undirected graph.
2. In Prim's algorithm we select an arbitrary node and then connect the nodes nearest to it.
3. Prim's always joins a new vertex to an old vertex.
4. Prim's builds a minimum spanning tree by adding one vertex at a time. The next vertex to be added is always the one nearest to a vertex already on the graph.
5. In Prim's algorithm we select the shortest edge when executing the algorithm.
6. Time complexity of Prim's algorithm is O(E + V log V) with a Fibonacci heap (O(E log V) with a binary heap).
Algorithm for Prim’s Algorithm:
   Step 1:  Select any node to be the first node of T.
   Step 2:  Consider the arcs which connect nodes in T to nodes outside T. Pick the one with minimum weight. Add this arc and the extra node to T. (If there are two or more arcs of minimum weight, choose any one of them.)
Step 3:  Repeat Step 2 until T contains every node of the graph.
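A corresponding C++ sketch of Prim's algorithm using a min-priority queue over candidate arcs (the adjacency-list layout is the same illustrative one used earlier):

#include <bits/stdc++.h>
using namespace std;
int prim(int n, vector<vector<pair<int,int>>>& adj) {   // adj[u] = list of (v, weight)
    vector<bool> inTree(n, false);
    priority_queue<pair<int,int>, vector<pair<int,int>>,
                   greater<>> pq;                       // (weight, vertex), smallest first
    pq.push({0, 0});                                    // start from vertex 0
    int total = 0;
    while (!pq.empty()) {
        auto [w, u] = pq.top(); pq.pop();
        if (inTree[u]) continue;                        // already part of T
        inTree[u] = true; total += w;                   // add the nearest vertex to T
        for (auto [v, wt] : adj[u])
            if (!inTree[v]) pq.push({wt, v});           // new candidate arcs out of T
    }
    return total;                                       // total weight of the MST
}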



Sum= 2+1+3=6

Difference between Kruskal’s and Prim’s Algorithm

1) In Prim’s Algorithm, the tree that we are making or growing always remains connected; in Kruskal’s Algorithm it usually remains disconnected (a forest).

2) Prim’s Algorithm grows a solution from a random vertex by adding the next cheapest vertex to the existing tree; Kruskal’s Algorithm grows a solution from the cheapest edge by adding the next cheapest edge to the existing tree/forest.

3) Prim’s Algorithm is faster for dense graphs; Kruskal’s Algorithm is faster for sparse graphs.

Types of path problem
1) Single source shortest path
(Cost of the shortest path from a source vertex 'U' to a destination vertex 'V')
Dijkstra's algorithm
Bellman-Ford algorithm
2) All pairs shortest path
(Cost of the shortest path from each vertex to every other vertex.)
Floyd-Warshall's algorithm
 

Example-1
 

Example-2
 

Example-3
 

Bellman-Ford vs Dijkstra: Under what circumstances is Bellman-Ford better?
The Bellman-Ford algorithm is a single-source shortest path algorithm; because it tolerates negative edge weights, it can also detect negative cycles in a graph.
The only difference between the two is that Bellman-Ford can also handle negative weights, whereas Dijkstra's algorithm can only handle non-negative weights.
Backtracking
Backtracking is a general algorithmic technique that incrementally explores every possible combination to solve a problem, abandoning (backtracking from) a partial solution as soon as it cannot be completed. It is typically implemented as a depth-first search and is closely related to branch and bound.
Example: the N-Queens problem; graph searches such as DFS follow the same pattern.
Backtracking – N Queens Problem
Objective : In chess, a queen can move as far as she pleases, horizontally, vertically, or diagonally. A chess board has 8 rows and 8 columns. The standard 8 by 8 Queen’s problem asks how to place 8 queens on an ordinary chess board so that none of them can hit any other in one move.
Here we are solving it for N queens in NxN chess board.
 

Approach: In this approach we first see the basic solution with O(N^2) extra space; it can be improved further to O(N) space.
Create a solution matrix of the same structure as chess board.
Whenever place a queen in the chess board, mark that particular cell in solution matrix.
At the end print the solution matrix, the marked cells will show the positions of the queens in the chess board.
Algorithm:
1) Start in the leftmost column
2) If all queens are placed
    return true
3) Try all rows in the current column. Do the following for every tried row.
    a) If the queen can be placed safely in this row, then mark this [row,
        column] as part of the solution and recursively check if placing the queen here leads to a solution.
    b) If placing the queen in [row, column] leads to a solution, then return
        true.
    c) If placing the queen doesn't lead to a solution, then unmark this [row,
        column] (backtrack) and go to step (a) to try other rows.
4) If all rows have been tried and nothing worked, return false to trigger
    backtracking.

The idea is to place queens one by one in different columns, starting from the leftmost column. When we place a queen in a column, we check for clashes with already placed queens. In the current column, if we find a row for which there is no clash, we mark this row and column as part of the solution. If we do not find such a row due to clashes then we backtrack and return false.
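A minimal C++ backtracking sketch of this idea, using the O(N)-space refinement mentioned above: a single array holds the chosen column for each row (placement goes row by row here, which is symmetric to the column-by-column description):

#include <bits/stdc++.h>
using namespace std;
int n = 8;
vector<int> col;                        // col[r] = column of the queen in row r

bool safe(int r, int c) {
    for (int i = 0; i < r; i++)         // clash with any earlier queen?
        if (col[i] == c || abs(col[i] - c) == r - i)
            return false;               // same column or same diagonal
    return true;
}
bool solve(int r) {
    if (r == n) return true;            // all queens placed
    for (int c = 0; c < n; c++)
        if (safe(r, c)) {
            col[r] = c;                 // place the queen and recurse
            if (solve(r + 1)) return true;
            // otherwise backtrack and try the next column
        }
    return false;
}
int main() {
    col.assign(n, -1);
    if (solve(0))
        for (int r = 0; r < n; r++)
            cout << "row " << r << " -> column " << col[r] << "\n";
}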

DFS algorithm
A standard DFS implementation puts each vertex of the graph into one of two categories:
1. Visited
2. Not Visited
The purpose of the algorithm is to mark each vertex as visited while avoiding cycles.
The DFS algorithm works as follows:
1. Start by putting any one of the graph's vertices on top of a stack.
2. Take the top item of the stack and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the top of stack.
4. Keep repeating steps 2 and 3 until the stack is empty.
DFS(G, u)
    u.visited = true
    for each v ∈ G.Adj[u]
        if v.visited == false
            DFS(G, v)

init() {
    for each u ∈ G
        u.visited = false
    for each u ∈ G
        if u.visited == false     // start a fresh DFS only from unvisited vertices
            DFS(G, u)
}

Example-2



   
Example-3
 

 

   
Difference between BFS and DFS

Basic: BFS is a vertex-based algorithm; DFS is an edge-based algorithm.
Data structure used to store the nodes: BFS uses a Queue; DFS uses a Stack.
Memory consumption: BFS is inefficient; DFS is efficient.
Structure of the constructed tree: BFS builds a wide and short tree; DFS builds a narrow and long tree.
Traversing fashion: in BFS the oldest unvisited vertices are explored first; in DFS vertices along an edge are explored first.
Optimality: BFS is optimal for finding the shortest distance (not in cost); DFS is not optimal.
Application: BFS examines bipartite graphs, connected components and shortest paths in a graph; DFS examines two-edge-connected graphs, strongly connected graphs, acyclic graphs and topological order.


 
How works Huffman coding? Explain with example. (Greedy approach)
Huffman coding is a lossless data compression algorithm. The idea is to assign variable-length codes to input characters, lengths of the assigned codes are based on the frequencies of corresponding characters. The most frequent character gets the smallest code and the least frequent character gets the largest code.
Huffman (C)
1. n=|C|
2. Q=C
3. for i=1 to n-1
4. do
5. z=allocate_Node()
6. x=left[z]=Extract_Min(Q)
7. y=right[z]=Extract_Min(Q)
8. f[z]=f[x]+f[y]
9. Insert(Q,z)
10. return Extract_Min(Q)
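The pseudocode maps directly onto a min-priority queue of tree nodes. A hedged C++ sketch (run here on the example frequencies used below):

#include <bits/stdc++.h>
using namespace std;
struct HNode {
    int freq; char ch; HNode *l, *r;
    HNode(int f, char c, HNode* a = nullptr, HNode* b = nullptr)
        : freq(f), ch(c), l(a), r(b) {}
};
struct Cmp { bool operator()(HNode* a, HNode* b) const { return a->freq > b->freq; } };

void printCodes(HNode* t, string code) {       // each leaf prints its final code
    if (!t->l && !t->r) { cout << t->ch << ": " << code << "\n"; return; }
    printCodes(t->l, code + "0");
    printCodes(t->r, code + "1");
}
int main() {
    vector<pair<char,int>> f = {{'A',5},{'B',25},{'C',7},{'D',15},{'E',4},{'F',12}};
    priority_queue<HNode*, vector<HNode*>, Cmp> q;        // Q = C
    for (auto [c, fr] : f) q.push(new HNode(fr, c));
    while (q.size() > 1) {                                // n - 1 merges
        HNode* x = q.top(); q.pop();                      // two smallest frequencies
        HNode* y = q.top(); q.pop();
        q.push(new HNode(x->freq + y->freq, '*', x, y));  // f[z] = f[x] + f[y]
    }
    printCodes(q.top(), "");                              // root of the Huffman tree
}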

Huffman Coding Algorithm Example
Construct a Huffman tree by using these nodes.
 

Value:      A    B    C    D    E    F
Frequency:  5   25    7   15    4   12

 
Solution:
 
Step 1: According to the Huffman coding we arrange all the elements (values) in ascending order of the frequencies.
 

Value:      E    A    C    F    D    B
Frequency:  4    5    7   12   15   25

 
Step 2: Combine the first two elements, which have the smallest frequencies (E and A).
 

Value:      C   EA    F    D    B
Frequency:  7    9   12   15   25

Step 3: Take the next smallest values (C and EA), combine them, and insert the result at the correct place.
 

Value:      F    D  CEA    B
Frequency: 12   15   16   25

Step 4: The next smallest elements are F and D, so we construct another subtree for F and D.
 
  

Value:     CEA    B   FD
Frequency:  16   25   27

 
Step 5: Take the two smallest remaining values (CEA and B), combine them, and insert the result at the correct place.
 

Value:      FD   CEAB
Frequency:  27     41
 
Step 6: Only two values remain, so we combine them by adding their frequencies.
 
 

Huffman Tree

Value:      FDCEAB
Frequency:  68

 
Now the list contains only one element, i.e. FDCEAB, having frequency 68, and this element (value) becomes the root of the Huffman tree.

Type 1. Conceptual questions based on Huffman Encoding –
Here are the few key points based on Huffman Encoding:
It is a lossless data compressing technique generating variable length codes for different symbols.
It is based on greedy approach which considers frequency/probability of alphabets for generating codes.
It has a complexity of O(n log n), where n is the number of unique characters.
The length of the code for a character is inversely proportional to frequency of its occurrence.
No code is prefix of another code due to which a sequence of code can be unambiguously decoded to characters.
Que – 1. Which of the following is true about Huffman Coding?
(A) Huffman coding may become lossy in some cases
(B) Huffman Codes may not be optimal lossless codes in some cases
(C) In Huffman coding, no code is prefix of any other code.
(D) All of the above
Solution: As discussed, Huffman encoding is a lossless compression technique. Therefore, options (A) and (B) are false. Option (C) is true, as this is the basis of decoding the message from the given code.

Type 2. To find number of bits for encoding a given message –
To solve this type of questions:
First calculate frequency of characters if not given
Generate Huffman Tree



Calculate number of bits using frequency of characters and number of bits required to represent those characters.
Que – 2. How many bits may be required for encoding the message ‘mississippi’?
Solution: Following is the frequency table of characters in ‘mississippi’ in non-decreasing order of frequency:

m: 1, p: 2, s: 4, i: 4
The generated Huffman tree is:
 
Following are the codes:

 

Total number of bits
= freq(m) * code_length(m) + freq(p) * code_length(p) + freq(s) * code_length(s) + freq(i) * code_length(i)
= 1*3 + 2*3 + 4*2 + 4*1 = 21
Also, average bits per character can be found as:
Total number of bits required / total number of characters = 21/11 = 1.909
Type 3. Decoding from code to message –
To solve this type of question:
Generate codes for each character using Huffman tree (if not given)
Using prefix matching, replace the codes with characters.

Source:

Tutorialspoint - https://www.tutorialspoint.com/
JavaTpoint - https://www.javatpoint.com/
GeeksforGeeks - https://www.geeksforgeeks.org
Techopedia - https://www.techopedia.com/
Guru99 - https://www.guru99.com/
TechTerms - https://techterms.com/
Webopedia - https://www.webopedia.com/
Study - https://study.com/
Wikipedia - https://en.wikipedia.org/
Cprogramming - https://www.cprogramming.com/
W3Schools - https://www.w3schools.com/
Electronics Hub - https://www.electronicshub.org/

 

