Algorithms & Data Structures Flashcards

Free Algorithms & Data Structures flashcards, exportable to Notion

Learn faster with 50 Algorithms & Data Structures flashcards. One-click export to Notion.

Learn fast, memorize everything, master Algorithms & Data Structures. No credit card required.

Want to create flashcards from your own textbooks and notes?

Let AI automatically create flashcards from your own textbooks and notes. Upload your PDF, select the pages you want to memorize fast, and let AI do the rest. One-click export to Notion.

Create Flashcards from my PDFs

Algorithms & Data Structures

50 flashcards

The time complexity of the bubble sort algorithm is O(n^2) in the average and worst cases, and O(n) in the best case when an early-exit check detects an already-sorted input, where n is the number of items to be sorted.
The time complexity of the merge sort algorithm is O(n log n) in all cases, where n is the number of items to be sorted.
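The split-then-merge structure behind that O(n log n) bound can be sketched in a few lines of Python (an illustrative, non-in-place version that returns a new list):

```python
def merge_sort(arr):
    """Split in half, sort each half recursively, then merge: O(n log n)."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever half remains

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Every level of recursion does O(n) merging work, and there are O(log n) levels, which is where the O(n log n) total comes from.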
The time complexity of a linear search algorithm is O(n) in the worst case, where n is the number of elements in the data structure.
The time complexity of a binary search algorithm is O(log n) in the average and worst cases, and O(1) in the best case (when the middle element is the target), where n is the number of elements in the sorted data structure.
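Binary search halves the candidate range on every comparison, which is where the O(log n) bound comes from. A minimal iterative sketch in Python:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # halve the search range each step
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1            # target can only be in the right half
        else:
            hi = mid - 1            # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
```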
The space complexity of the quicksort algorithm is O(log n) in the best case and O(n) in the worst case, where n is the number of items to be sorted.
An array is a linear data structure with elements stored in contiguous memory locations, allowing constant-time access to any element. A linked list is a linear data structure where elements are not necessarily stored contiguously, requiring linear-time traversal to access elements.
A binary search tree (BST) is a tree data structure in which the value of each node is greater than or equal to the values in all the nodes in that node's left subtree, and less than or equal to the values in all the nodes in that node's right subtree.
The time complexity of searching for an element in a balanced binary search tree is O(log n), where n is the number of nodes in the tree.
A graph is a non-linear data structure that consists of a finite set of vertices (or nodes) and a set of edges connecting pairs of vertices.
Breadth-first search (BFS) explores all the vertices at the current depth before moving on to vertices at the next depth level, while depth-first search (DFS) explores as far as possible along each branch before backtracking.
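The two traversal orders can be contrasted in a small Python sketch, using a dict-of-lists adjacency representation (the graph below is illustrative):

```python
from collections import deque

def bfs(graph, start):
    """Visit vertices level by level using a FIFO queue; return visit order."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def dfs(graph, start, seen=None):
    """Go as deep as possible before backtracking; return visit order."""
    if seen is None:
        seen = set()
    seen.add(start)
    order = [start]
    for w in graph.get(start, []):
        if w not in seen:
            order += dfs(graph, w, seen)
    return order

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D']  (level by level)
print(dfs(g, 'A'))  # ['A', 'B', 'D', 'C']  (deep first, then backtrack)
```

Note how both visit the same vertices but in different orders: BFS finishes the whole level {B, C} before touching D, while DFS dives from B straight down to D.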
The time complexity of the insertion sort algorithm is O(n^2) in the average and worst cases, where n is the number of items to be sorted.
The time complexity of the selection sort algorithm is O(n^2) in all cases, where n is the number of items to be sorted.
In-place sorting algorithms perform sorting with a small, constant extra space, while not-in-place sorting algorithms require extra space proportional to the size of the input.
A hash table is a data structure that stores key-value pairs, providing constant-time average-case complexity for insert, delete, and search operations.
The time complexity of a hash table lookup operation is O(1) on average, assuming a good hash function that distributes keys uniformly.
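The mechanics behind that O(1) average case can be shown with a toy hash table that resolves collisions by separate chaining (a teaching sketch, not a production implementation — a real table would also resize as it fills):

```python
class ChainedHashTable:
    """Toy hash table: hash the key to pick a bucket, chain collisions in a list."""
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._bucket(key):   # scan only this key's bucket
            if k == key:
                return v
        return default

t = ChainedHashTable()
t.put("apple", 3)
t.put("banana", 5)
print(t.get("apple"))    # 3
print(t.get("cherry"))   # None
```

With a good hash function the chains stay short on average, so `put` and `get` touch only a handful of entries regardless of the total number of keys.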
A heap is a tree-based data structure that satisfies the heap property, where the value of each node is greater than or equal to (or less than or equal to) the values of its children. It is commonly used for implementing priority queues.
The time complexity of inserting an element into a max-heap or min-heap is O(log n), where n is the number of elements in the heap.
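The O(log n) bound comes from "sifting" the new element up the tree, one level per step. A minimal min-heap insertion over a plain Python list (the standard array encoding where node i has parent (i - 1) // 2):

```python
def heap_push(heap, item):
    """Insert item into a list-encoded min-heap by sifting it up: O(log n)."""
    heap.append(item)
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:      # heap property restored: stop
            break
        heap[parent], heap[i] = heap[i], heap[parent]
        i = parent

h = []
for x in [5, 3, 8, 1]:
    heap_push(h, x)
print(h[0])   # 1 — the minimum always sits at the root
```

Python's standard library provides the same operation as `heapq.heappush`; the sketch above just makes the sift-up loop explicit.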
Both AVL trees and red-black trees are self-balancing binary search trees. AVL trees maintain a strict height balance, while red-black trees have a more relaxed balance condition, making them easier to implement and more efficient for insertions and deletions.
Pre-order traversal visits the root node first, then the left subtree, and finally the right subtree. In-order traversal visits the left subtree, then the root node, and finally the right subtree. Post-order traversal visits the left subtree, then the right subtree, and finally the root node.
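The three orders differ only in where the root is visited relative to its subtrees, which a short Python sketch makes concrete:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder(n):   # root, then left subtree, then right subtree
    return [n.val] + preorder(n.left) + preorder(n.right) if n else []

def inorder(n):    # left subtree, then root, then right subtree
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

def postorder(n):  # left subtree, then right subtree, then root
    return postorder(n.left) + postorder(n.right) + [n.val] if n else []

#       2
#      / \
#     1   3
root = Node(2, Node(1), Node(3))
print(preorder(root))   # [2, 1, 3]
print(inorder(root))    # [1, 2, 3]
print(postorder(root))  # [1, 3, 2]
```

On a binary search tree, in-order traversal visits the keys in sorted order — a useful way to remember which order is which.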
The time complexity of Dijkstra's algorithm for finding the shortest path in a weighted graph is O((V + E) log V) when implemented with a binary-heap priority queue, where V is the number of vertices and E is the number of edges in the graph.
A complete binary tree is a tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible. A full binary tree is a tree where every node has either 0 or 2 child nodes.
A trie, also called a prefix tree, is a tree-based data structure used for efficient information retrieval operations like finding a particular key in a dataset of strings. It is commonly used for implementing autocomplete and IP routing table features.
The time complexity of inserting a key into a trie is O(m), where m is the length of the key string.
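The O(m) bound follows because insertion walks one node per character of the key. A compact trie sketch using nested dicts, with a `"$"` sentinel (an arbitrary choice here) marking where a stored key ends:

```python
class Trie:
    """Prefix tree: insert and lookup take O(m) for a key of length m."""
    def __init__(self):
        self.root = {}

    def insert(self, key):
        node = self.root
        for ch in key:                    # one step per character
            node = node.setdefault(ch, {})
        node["$"] = True                  # end-of-key marker

    def contains(self, key):
        node = self.root
        for ch in key:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node                # reject bare prefixes

t = Trie()
t.insert("car")
t.insert("card")
print(t.contains("car"))    # True
print(t.contains("ca"))     # False — "ca" is only a prefix, not a stored key
```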
The worst-case time complexity is the maximum amount of time an algorithm may take for any input of a given size. The amortized time complexity is the average time complexity over a sequence of operations, where the occasional expensive operation is averaged out by a series of cheaper operations.
A segment tree is a tree-based data structure used for storing information about intervals or segments. It allows for efficient querying and updating of intervals, and is commonly used for range query problems.
The time complexity of building a segment tree is O(n), where n is the number of elements in the input array.
In a min-heap, the value of each node is less than or equal to the values of its child nodes, while in a max-heap, the value of each node is greater than or equal to the values of its child nodes.
A topological sort algorithm is a linear ordering of the vertices in a directed acyclic graph (DAG), such that for every directed edge (u, v) from vertex u to vertex v, u comes before v in the ordering.
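One standard way to compute such an ordering is Kahn's algorithm: repeatedly emit a vertex with no remaining incoming edges. A sketch in Python (the example DAG is illustrative):

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm over a dict-of-lists DAG; raises on a cycle."""
    indeg = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indeg[w] += 1
    queue = deque(v for v in graph if indeg[v] == 0)  # no prerequisites
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            indeg[w] -= 1
            if indeg[w] == 0:            # all of w's prerequisites emitted
                queue.append(w)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle: no topological order exists")
    return order

dag = {'shirt': ['tie'], 'pants': ['jacket'], 'tie': ['jacket'], 'jacket': []}
print(topological_sort(dag))
```

Note that a DAG can have many valid topological orders; the algorithm returns one of them, and the cycle check doubles as a test for whether the input is actually acyclic.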
The time complexity of Kruskal's algorithm for finding the minimum spanning tree of a graph is O(E log E), where E is the number of edges in the graph.
An adjacency list represents a graph using a list of vertices and their adjacent vertices, while an adjacency matrix represents a graph using a 2D matrix, where each element indicates whether a pair of vertices is connected by an edge or not.
A union-find data structure, also known as a disjoint-set data structure, is a data structure that keeps track of a collection of disjoint (non-overlapping) sets and supports two main operations: finding the set that a particular element belongs to, and merging two sets into a single set.
With path compression and union by rank (or by size), the amortized time complexity of both the union and find operations in a union-find data structure is O(α(n)), where α(n) is the inverse Ackermann function, which grows so slowly that it is effectively constant for all practical input sizes.
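Both optimizations fit in a few lines of Python; this sketch uses path halving (a common one-pass variant of path compression) and union by size:

```python
class UnionFind:
    """Disjoint sets with path compression (halving) and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # already in the same set
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra              # attach smaller tree under larger
        self.size[ra] += self.size[rb]
        return True

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(0) == uf.find(2))   # True  — 0, 1, 2 share a set
print(uf.find(0) == uf.find(4))   # False — 4 is still alone
```

Keeping trees shallow (union by size) and flattening them during lookups (path compression) is what drives the amortized cost down to O(α(n)).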
A greedy algorithm makes the locally optimal choice at each step, with the hope of finding a global optimum, while a dynamic programming algorithm solves a complex problem by breaking it down into simpler subproblems and solving each subproblem once, storing the solutions in a table to avoid recomputing them.
The time complexity of Prim's algorithm for finding the minimum spanning tree of a graph is O((V + E) log V) when implemented with a binary-heap priority queue, where V is the number of vertices and E is the number of edges in the graph.
A B-tree is a self-balancing tree data structure that keeps data sorted and allows for efficient insertion, deletion, and search operations. It is commonly used in databases and file systems to store large amounts of data.
The time complexity of searching for a key in a B-tree is O(log n), where n is the number of keys in the B-tree.
A depth-first search (DFS) algorithm explores as far as possible along each branch before backtracking, while a breadth-first search (BFS) algorithm explores all the vertices at the current depth before moving on to vertices at the next depth level.
A skip list is a data structure that allows efficient search, insertion, and deletion operations, while maintaining a sorted order of elements. It is essentially a probabilistic alternative to a balanced tree data structure.
The time complexity of the insertion operation in a skip list is O(log n) on average, where n is the number of elements in the skip list.
The time complexity of the radix sort algorithm is O(kn), where n is the number of items to be sorted, and k is the number of digits or bytes in the key used for sorting.
The time complexity of the counting sort algorithm is O(n + k), where n is the number of items to be sorted, and k is the range of values in the input data.
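Counting sort achieves O(n + k) by never comparing elements: it tallies how often each value occurs and then writes the values back out in order. A sketch for non-negative integers:

```python
def counting_sort(arr):
    """Sort non-negative integers in O(n + k), where k is the max value."""
    if not arr:
        return []
    counts = [0] * (max(arr) + 1)     # one counter per possible value
    for x in arr:
        counts[x] += 1                # O(n) tally pass
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)   # O(n + k) write-out pass
    return out

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```

The catch is visible in the code: the `counts` array has k + 1 slots, so the algorithm is only practical when the value range k is not much larger than n.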
An internal sorting algorithm is designed to sort data that can fit entirely in the main memory, while an external sorting algorithm is designed to sort data that is too large to fit entirely in the main memory, and must be partially stored in auxiliary memory, such as a disk.
Bucket sort is a distribution-based sorting algorithm that works by partitioning an array into a number of buckets, sorting the buckets individually, and then concatenating the sorted buckets to obtain the final sorted array.
An in-place algorithm is an algorithm that transforms input using a small, constant amount of additional memory space, while an out-of-place algorithm requires additional memory proportional to the size of the input.
The time complexity of the quickselect algorithm for finding the k-th smallest element in an unsorted array is O(n) on average, where n is the number of elements in the array.
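Quickselect reaches O(n) on average by partitioning like quicksort but recursing into only the one partition that can contain the answer. A simple (not in-place) sketch with a random pivot:

```python
import random

def quickselect(arr, k):
    """Return the k-th smallest element of arr (k is 1-based), O(n) average."""
    pivot = random.choice(arr)
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    if k <= len(less):
        return quickselect(less, k)              # answer is left of the pivot
    if k <= len(less) + len(equal):
        return pivot                             # pivot itself is the answer
    return quickselect(greater, k - len(less) - len(equal))

print(quickselect([7, 10, 4, 3, 20, 15], 3))   # 7 — the third smallest
```

Because each step discards one partition, the expected work is n + n/2 + n/4 + … = O(n), versus the O(n log n) of sorting the whole array first.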
A suffix tree is a compressed trie data structure that stores all suffixes of a given string, allowing for efficient string operations such as pattern matching, finding the longest common substring, and finding the longest repeated substring.
The time complexity of constructing a suffix tree from a string of length n is O(n) using a linear-time construction algorithm such as Ukkonen's, assuming a constant-size alphabet.
In a max-heap, the value of each node is greater than or equal to the values of its child nodes, while in a min-heap, the value of each node is less than or equal to the values of its child nodes.
A Fibonacci heap is a data structure for implementing a priority queue, designed to provide better amortized time complexity than binary heaps for certain operations, such as decreasing the key value or deleting an arbitrary node.
In a Fibonacci heap, the time complexity of insert, find-min, and decrease-key operations is O(1) amortized, while the time complexity of delete-min and delete operations is O(log n) amortized, where n is the number of elements in the heap.