Space and Time Complexity
I frequently hear that a given algorithm is better than some other, or that something should be done in a given way because "that's the pattern or convention". Following patterns and conventions can be useful, as long as everyone involved knows the convention (otherwise it can lead to a very obscure way of programming). But if we want to program seriously we must analyze the algorithm being applied and not just use it because it's the pattern or convention, especially if this algorithm will deal with a big volume of data, will be called many times, or if the resources are limited.
So what does it take for an algorithm to be a good one?
First we will ask it to do what it is supposed to do, at all times! Now what? There are many different definitions and criteria to answer this question, but what we want is a quantitative way of defining how good an algorithm is. Without a precise way of doing so, we would probably not be able to justify our criteria and we could never really be sure that a given algorithm is a good one.
Efficiency
By efficiency we understand the amount of resources being used, so the efficiency will be good or high if our algorithm uses few resources.
Okay, and what is a resource? The two resources an algorithm has are:
space and
time.
By space we mean memory and any other place where data can be stored, like a hard disk (which is a slow but bigger version of RAM memory), and by time we mean processor time (the processors will consume physical time in order to execute our algorithm).
So according to this we can define two types of efficiency: space efficiency and time efficiency. It is more common to refer to these ideas as space complexity and time complexity.
Space complexity
This will be a number representing the memory required by the algorithm. Usually we want this value to be hardware and implementation independent, so we will express it as a function of the algorithm's input size.
Time complexity
This will be a number representing the time required by the algorithm. Usually we want this value to be hardware and implementation independent, so we will express it as a function of the algorithm's input size.
Examples
Factorial
We can define the Factorial of a natural number or 0 as follows:
Factorial(0) = 1
Factorial(1) = 1
Factorial(2) = 2 * 1
Factorial(3) = 3 * 2 * 1
Factorial(4) = 4 * 3 * 2 * 1
Factorial(n) = n * (n-1) * (n-2) * ... * 1
Here is the source code:
public static long factorial(int n) {
    long factorial = 1;
    for (; n > 1; n--) {
        factorial *= n;
    }
    return factorial;
}
Let's start by analyzing the space complexity of this algorithm:
There is not much to do here: we have one long variable plus the input itself, and that is a constant amount which will not change for any value of n, so the space complexity is constant!
Let's now think about the time complexity. The first instruction declares and assigns a value to a long variable; this will be done no matter the value of n. Then we have the following operations done n times:
 the comparison: n > 1
 a multiplication and assignment: factorial *= n
 a decrement and assignment: n--
and finally the return statement.
So here the only thing that changes when n changes is the operations inside the for loop, and since those operations take constant time (or at least less than some finite amount of time) we have:
timeComplexity(n) = constant1 + n * constant2 + constant3
timeComplexity(n) = n * constant2 + constant
(with constant = constant1 + constant3)
But we usually don't care about constants since they are implementation and hardware dependent so we can say that the time complexity of this factorial implementation is linear.
Note that if by analyzing the code we realize that some operations or variables change the complexity only by adding a constant or by making it proportional (multiplying by a constant), then most of the time we can ignore them. For the same reason we usually don't care much about the actual values; these values are implementation dependent and we are more interested in a high level idea. So we don't really care how many bytes a long or an integer takes. What we want to know is how fast the complexity changes when the input changes.
We will take the most significant term of the complexity formula and express it without the implementation dependent constants; this is known as the Big O notation. For example, if we have a complexity (space or time) represented by the formula we found above:
timeComplexity(n) = n * constant2 + constant
then the dominant term (the one that changes faster when the input changes) is:
n * constant2
and since we don't care about implementation constants then we have:
O(n)
On the other hand, we would need a much more detailed analysis if we had to find the precise number for a given algorithm, but the overall idea remains exactly the same.
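To see the dominant term take over numerically, here is a small sketch; the values chosen for constant2 and constant are made up for illustration:

```java
public class DominantTerm {

    // hypothetical values standing in for constant2 and constant:
    static final long CONSTANT2 = 3;
    static final long CONSTANT = 1000;

    // timeComplexity(n) = n * constant2 + constant
    static long timeComplexity(long n) {
        return n * CONSTANT2 + CONSTANT;
    }

    public static void main(String[] args) {
        // as n grows, timeComplexity(n) / n approaches constant2,
        // showing that the n * constant2 term dominates:
        for (long n = 10; n <= 1_000_000; n *= 100) {
            System.out.println(n + " -> " + (double) timeComplexity(n) / n);
        }
    }
}
```

For small n the constant matters a lot; for large n it becomes negligible, which is exactly why Big O drops it.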
The Factorial is a very popular example of recursion since it can be recursively defined as follows:
Factorial(0) = 1;
Factorial(n) = n * Factorial(n-1); with n > 0
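This definition maps almost one to one to Java; a minimal recursive sketch:

```java
public class RecursiveFactorial {

    public static long factorial(int n) {
        if (n == 0) {
            // base case: Factorial(0) = 1
            return 1;
        }
        // recursive case: Factorial(n) = n * Factorial(n - 1)
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```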
Just from this mathematical definition we can work out the space and time complexity.
One thing we can do is to get a formula for the first few values of n and see if we can extend it to any number:
 For n = 0:
 We just have the value so there was nothing to do and so this will be 0 operations.
 For n = 1:

Factorial(1) = 1 * Factorial(1-1)
This means that we have the multiplication operation plus a subtraction operation (1-1) plus whatever operations we have in: Factorial(0)
So we have: the subtraction plus the multiplication plus 0 operation from Factorial(0):
2 operations
 For n = 2:

Factorial(2) = 2 * Factorial(2-1)
Here we have 2 operations (the subtraction and the multiplication) plus whatever operations we have in Factorial(1)
So we have: 2 operations plus 2 operations =
4 operations
 For n = 3:

Factorial(3) = 3 * Factorial(3-1)
Here we have 2 operations (the subtraction and the multiplication) plus whatever operations we have in Factorial(2)
So we have: 2 operations plus 4 operations =
6 operations
 For n = 4:

Factorial(4) = 4 * Factorial(4-1)
Here we have 2 operations (the subtraction and the multiplication) plus whatever operations we have in Factorial(3)
So we have: 2 operations plus 6 operations =
8 operations
 For n = 5:

Factorial(5) = 5 * Factorial(5-1)
Here we have 2 operations (the subtraction and the multiplication) plus whatever operations we have in Factorial(4)
So we have: 2 operations plus 8 operations =
10 operations
You might be wondering if this procedure was correct: we counted the subtraction and multiplication operations first and later added the number of operations of the recursive call. This is not exactly what a computer (or you) would do when executing this definition, since before multiplying you need to know the value of the recursive call; so we have been altering the order of these operations. Nevertheless, mathematics lets us change the order in which numbers are added without changing the result (when adding a finite amount of numbers, as we are doing here, since n is finite).
So it seems that the formula is:
complexity(n) = 2n
But which complexity are we talking about: time, space, both, something else?
We were counting operations, and since we assume each of these operations takes a constant amount of time, we can say that this is the time complexity.
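We can check the 2n formula by instrumenting the recursive definition with an operation counter; the counter and the countOperations helper are our own additions, not part of the original definition:

```java
public class CountingFactorial {

    static long operations; // counts the subtractions and multiplications

    public static long factorial(int n) {
        if (n == 0) {
            return 1; // base case: no operations
        }
        operations++; // the subtraction: n - 1
        long rest = factorial(n - 1);
        operations++; // the multiplication: n * rest
        return n * rest;
    }

    // resets the counter, runs factorial(n) and reports how many operations it took
    public static long countOperations(int n) {
        operations = 0;
        factorial(n);
        return operations;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 5; n++) {
            System.out.println("n = " + n + ": " + countOperations(n) + " operations");
        }
    }
}
```

Running it reproduces the counts worked out above: 0, 2, 4, 6, 8, 10 operations for n = 0 through 5.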
And what about the space complexity?
Let's go back to the definition we have:
Factorial(0) = 1;
Factorial(n) = n * Factorial(n-1); with n > 0
Here we can do the same as before and try to get a formula for the space consumed as a function of n:
 For n = 0:
 We just have the value so we just needed some constant space to hold or return this value
 For n = 1:

Factorial(1) = 1 * Factorial(1-1)
This means that we need space for the number 1 plus the space from Factorial(0)
So we have: 2 constant spaces
 For n = 2:

Factorial(2) = 2 * Factorial(2-1)
This means that we need space for the number 2 plus the space from Factorial(1)
So we have: 3 constant spaces
 For n = 3:

Factorial(3) = 3 * Factorial(3-1)
This means that we need space for the number 3 plus the space from Factorial(2)
So we have: 4 constant spaces
So it seems that the formula is:
space complexity(n) = n+1
Note that this is not the same space complexity we got for the first (iterative) version.
Is there something wrong? No. So why the difference?
Even though these algorithms do the same thing (compute the factorial), they do it in different ways, and so the complexity can differ!
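The n + 1 constant spaces correspond to the n + 1 stack frames that are simultaneously alive during the recursion. A sketch that measures this; the depth-tracking variables are our own addition:

```java
public class FactorialDepth {

    static int depth;    // frames currently on the stack
    static int maxDepth; // deepest point reached

    public static long factorial(int n) {
        depth++; // entering a new stack frame
        maxDepth = Math.max(maxDepth, depth);
        long result = (n == 0) ? 1 : n * factorial(n - 1);
        depth--; // leaving the frame
        return result;
    }

    // runs factorial(n) and reports the maximum number of simultaneous frames
    public static int measureMaxDepth(int n) {
        depth = 0;
        maxDepth = 0;
        factorial(n);
        return maxDepth;
    }

    public static void main(String[] args) {
        System.out.println(measureMaxDepth(4)); // 5 frames: n = 4, 3, 2, 1, 0
    }
}
```

The iterative version never grows the stack, which is why its space complexity stays constant.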
Permutations
Consider a program to generate all the permutations of a given word:
For example calculating the permutations of “zero” will generate a list like:
zero, zeor, zreo, zroe, zoer, zore, ezro, ezor, erzo, eroz, eozr, eorz, rzeo, rzoe, rezo, reoz, roze, roez, ozer, ozre, oezr, oerz, orze, orez
Let's do some analysis here: to permute means to change place, in this case we have 4 places (4 letters) and so what we want is to swap the letter places in order to generate all the possible combinations.
Let's see:
For the first position, we can choose between the 4 available ones: z,e,r,o.
For the second position, we only have 3 letters left (since we already have chosen 1).
For the third one, we will have 2 letters left (since we already have chosen 2).
Finally for the fourth and last position, we have only one letter left (since we already have chosen 3).
So if we would like to count all the possible permutations, we have in this case:
4 possibilities for the first position combined with 3 possibilities for the second position combined with 2 for the third combined with 1 for the last.
This is:
4 x 3 x 2 x 1 = factorial(4) = 4!
So no matter how our algorithm actually manages to get these combinations done, we can tell for sure that for n letters we will need at least factorial(n) space (the size of the resulting list).
In the previous example:
size of (permutations(“zero”)) = 4! = 24
or if you prefer the more object oriented syntax:
“zero”.permute().size()=4!=24
Let's now program the algorithm and see what the space complexity looks like:
We can rely on recursion to program this:
The underlying idea is that the permutations of “zero” are:
permutations of “zero” {
// concatenate each of the chars ('z','e','r','o') to
// the permutations of rest of the chars:
'z' concatenated to all the permutations of “ero”;
'e' concatenated to all the permutations of “zro”;
'r' concatenated to all the permutations of “zeo”;
'o' concatenated to all the permutations of “zer”;
}
Here is the Java code that does this:

package ejercicios;

import java.util.ArrayList;
import java.util.List;

/**
 * Generates the permutations of a text in 2 slightly different ways.
 *
 * @author kriche
 */
public class Permutations {

    private static final String WORD = "zero";

    public static void main(String[] args) {
        System.out.print("Permutations of \"" + WORD + "\" first approach:\n\t");
        System.out.println(listPermutations(WORD));
        System.out.println();
        System.out.print("Permutations of \"" + WORD + "\" second approach:\n\t[");
        printPermutations("", WORD);
        System.out.print("]\n\n");
    }

    /**
     *
     * @param word
     * @return a list with all the permutations of the given word.
     */
    public static List<String> listPermutations(String word) {
        List<String> permutations = new ArrayList<>();
        // implicit base case: empty word, an empty list is returned.
        if (word.length() == 1) {
            // base case: a word with one character, return a list with just that word.
            permutations.add(word);
            return permutations;
        }
        // concatenate each character in word to the permutations of the rest of the characters:
        for (int t = 0; t < word.length(); ++t) {
            char currentCh = word.charAt(t);
            for (String currentPermutation : listPermutations(removeCharAt(word, t))) {
                permutations.add(currentCh + currentPermutation);
            }
        }
        return permutations;
    }

    /**
     * Prints prefix concatenated to the permutations of word.
     *
     * @param prefix
     * @param word
     */
    public static void printPermutations(String prefix, String word) {
        if (word.equals("")) {
            // base case: word is empty so there are no permutations, just print prefix
            System.out.print(prefix + ", ");
            // note that this return is just for clarity since the code below has no effect when
            // word is empty
            return;
        }
        // for each char in word, start by concatenating it to prefix and
        // recursively deal with the rest of the chars:
        for (int t = 0; t < word.length(); ++t) {
            printPermutations(prefix + word.charAt(t), removeCharAt(word, t));
        }
    }

    private static String removeCharAt(String str, int i) {
        return str.substring(0, i) + str.substring(i + 1);
    }
}
Both methods are recursive; in such cases what we can do is see how much space is needed for increasing input sizes and try to get a formula.
If we look at the first one: each call to listPermutations allocates space for a list (and some other variables) and iterates word.length() times, calling removeCharAt and calling itself.
So let's analyze what happens at different input sizes:
If word is empty (length 0), some constant space is consumed and there are no recursive calls.
list size = 0
If word.length is 1, some constant space is consumed, there are no recursive calls, and one word is added to the list.
list size = 1
If word.length is 2, some constant space is consumed, and listPermutations(word.length = 1) is called twice, each time adding 1 word to the list.
list size = 2 * 1
If word.length is 3, some constant space is consumed, and listPermutations(word.length = 2) is called 3 times, each time adding 2 words to the list.
list size = 3 * 2 * 1
If word.length is 4, some constant space is consumed, and listPermutations(word.length = 3) is called 4 times, each time adding 6 words to the list.
list size = 4 * 3 * 2 * 1
And so for a word of n letters we get:
list size = n!
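To double-check that the list size grows as n!, here is a self-contained sketch; it uses a compact variant of listPermutations, and the factorial helper and the prefix loop are our own additions:

```java
import java.util.ArrayList;
import java.util.List;

public class PermutationCount {

    // compact version of the recursive permutation builder:
    public static List<String> listPermutations(String word) {
        List<String> permutations = new ArrayList<>();
        if (word.length() == 1) {
            permutations.add(word);
            return permutations;
        }
        for (int t = 0; t < word.length(); ++t) {
            String rest = word.substring(0, t) + word.substring(t + 1);
            for (String p : listPermutations(rest)) {
                permutations.add(word.charAt(t) + p);
            }
        }
        return permutations;
    }

    static long factorial(int n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    public static void main(String[] args) {
        // the list size matches n! for each prefix of "zero":
        String word = "zero";
        for (int n = 1; n <= word.length(); n++) {
            String prefix = word.substring(0, n);
            System.out.println(prefix + ": " + listPermutations(prefix).size()
                    + " permutations, " + n + "! = " + factorial(n));
        }
    }
}
```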
Note that in this case the space complexity of the algorithm is of the same nature of the space complexity of the output (the list of permutations) but this will not always be the case. For example, we can have a very complex algorithm that returns a boolean so the output will have a constant complexity while the algorithm itself could be consuming a huge amount of space.
Bubble Sort
Let's see the complexity of one of the first sorting algorithms you have learned or will learn:
package ejercicios.sort;

import java.util.Arrays;

/**
 * Bubble sort demo.
 *
 * @author kriche
 */
public class BubbleSort {

    /**
     * runs the bubble sort demo showing an array before and after sorting.
     *
     * @param args
     */
    public static void main(String[] args) {
        Integer[] arr = {4, 2, 8, 5, 3, 0, 2, 1, 7, 6, 9, 1};
        System.out.println("unsorted: " + Arrays.asList(arr));
        sort(arr);
        System.out.println("  sorted: " + Arrays.asList(arr));
    }

    /**
     * Sorts an array.
     *
     * @param arr the array to sort
     */
    public static void sort(Integer[] arr) {
        for (int i = 0; i < arr.length; i++) {
            for (int j = arr.length - 1; j > i; j--) {
                if (arr[j] < arr[j - 1]) {
                    // swap the values:
                    Integer temp = arr[j];
                    arr[j] = arr[j - 1];
                    arr[j - 1] = temp;
                }
            }
        }
    }
}
What this code does is compare elements from the bottom to the top: it takes the last element and compares it with the previous one to see which is smaller, then compares the result with the next one (going up) until it reaches the top; at that point the smallest element in the whole array has bubbled up to the top.
Then it starts again from the bottom, up to the second position of the array, and so on and so forth until all the elements have bubbled up to their sorted positions.
In order to do this, we can see from the code above that there is an outer for loop over the incrementing interval [0; array.length - 1] (this is: array.length times) containing an inner for loop over the decrementing interval [array.length - 1; i + 1] (this is: array.length - 1 - i times), where i is the current index of the outer loop (recall that the amount of integers in the interval [min; max] is max - min + 1).
The inner for loop contains what we assume is a group of operations taking a total time smaller than some constant finite value, in which case these operations are executed (array.length - 1 - i) times by the inner loop, and the inner loop itself is executed array.length times.
So if we express this idea mathematically it will be something like:
timeComplexity(L) = sum for i = 0 to L - 1 of (operations * (L - 1 - i))
being L the length or size n; since operations is a constant, it can be taken outside:
timeComplexity(L) = operations * sum for i = 0 to L - 1 of (L - 1 - i)
knowing that the terms (L - 1 - i) for i = 0 ... L - 1 are just the numbers L - 1, L - 2, ..., 1, 0, and that:
sum for k = 0 to m of k = m * (m + 1) / 2
then we have:
timeComplexity(L) = operations * L * (L - 1) / 2 = operations * (L² - L) / 2
whose dominant term is proportional to L², and so using the Big O notation we get:
O(L²), that is, O(n²)
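The total comparison count works out to L * (L - 1) / 2; we can verify it by instrumenting the inner loop with a counter (the counter is our own addition to the sort above):

```java
public class BubbleSortCount {

    // same bubble sort as above, but returning how many comparisons were made:
    public static long countComparisons(Integer[] arr) {
        long comparisons = 0;
        for (int i = 0; i < arr.length; i++) {
            for (int j = arr.length - 1; j > i; j--) {
                comparisons++; // one comparison per inner loop iteration
                if (arr[j] < arr[j - 1]) {
                    // swap the values:
                    Integer temp = arr[j];
                    arr[j] = arr[j - 1];
                    arr[j - 1] = temp;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        Integer[] arr = {4, 2, 8, 5, 3, 0, 2, 1, 7, 6, 9, 1};
        long l = arr.length;
        System.out.println(countComparisons(arr)
                + " comparisons, L * (L - 1) / 2 = " + l * (l - 1) / 2);
    }
}
```

Note that the comparison count is the same whether the input is sorted or not; this basic bubble sort does the full quadratic work in every case.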
Note that we required the operations to take constant time, or if not constant, at least less than some finite value. This is important; otherwise our analysis would be wrong.
Let's look at one of these operations: accessing an array element:
array[j]
Is this a constant time operation?
We cannot tell; it will depend on implementation details. What if the whole array is too big to be kept in memory, and what is going on behind the scenes is that just a piece of it is loaded in memory at any given time? Then accessing elements already in memory will be faster than requesting elements that are not.
Sometimes the order in which the elements of an array (of one or more dimensions) are accessed can impact the running time, but we can still say (without being wrong) that this time will be smaller than some finite value.
QuickSort & Dual Pivot QuickSort
This sorting algorithm and its variation are quite interesting and the underlying idea is not hard to understand:
QuickSort:
Given an array, pick some element (called the pivot) and move all the elements smaller than the pivot to its left and all the bigger ones to its right (move the equal ones to either side). At this point we can be sure the pivot is in its correct position, and all that is left is to recursively sort the 2 subarrays at the left and right of the pivot.
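The steps above can be sketched in Java; this is a minimal single-pivot version using the last element as the pivot (the pivot choice and the Lomuto partition scheme are one option among several, not prescribed by the description):

```java
import java.util.Arrays;

public class QuickSort {

    public static void sort(int[] arr) {
        sort(arr, 0, arr.length - 1);
    }

    private static void sort(int[] arr, int low, int high) {
        if (low >= high) {
            return; // base case: 0 or 1 elements are already sorted
        }
        int p = partition(arr, low, high);
        // the pivot is now in its final position; sort both sides:
        sort(arr, low, p - 1);
        sort(arr, p + 1, high);
    }

    // moves elements smaller than the pivot to its left and returns
    // the pivot's final index (Lomuto partition scheme):
    private static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                swap(arr, i, j);
                i++;
            }
        }
        swap(arr, i, high);
        return i;
    }

    private static void swap(int[] arr, int a, int b) {
        int temp = arr[a];
        arr[a] = arr[b];
        arr[b] = temp;
    }

    public static void main(String[] args) {
        int[] arr = {4, 2, 8, 5, 3, 0, 2, 1, 7, 6, 9, 1};
        sort(arr);
        System.out.println(Arrays.toString(arr));
    }
}
```

For what it's worth, the sort used by Java's own Arrays.sort for primitive types is based on a dual-pivot quicksort.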
Dual Pivot QuickSort:
What if instead of one pivot we have 2?
Here is a paper describing both:
Conclusions
As programmers it's important that we write code that actually works. This means that when programming we must know which resources we will have available and make sure that our code fits within them.
Space and Time Complexity help us to measure this in a deterministic and not subjective manner!
There are other metrics that could be useful and also some "good" practices:
Try to have methods doing "just one thing" and make them as independent as possible from the rest of the algorithm. This tends to make them easier to understand, reuse and maintain. This is related to Cohesion and Coupling. However, this rule has its exceptions, since sometimes (because of the nature of the problem) 2 things are easier to do together than separately, such as validating and evaluating a math expression.
The names of constants, variables, methods, etc. should be related to what they do, not how they do it or where they are being used, or, even worse, a name that says nothing, like “a1”.
Avoid magic numbers (except when it's obvious what they represent and they are used in just one place). Don't use literals like 3.1415; use PI instead.
Unless there is a penalty in your work contract for each newline character you type (some folks seem to love avoiding newlines), use them!!
Most programmers like indenting their code to denote nesting, so why not apply a similar idea vertically? Just as when writing a text it is useful to divide it into paragraphs, it is also useful to divide the code within a method into "paragraphs" to denote its different parts. For example, when writing a unit test you might want to separate the "setup", "exercise" and "verify" pieces of it!
Comments
 Write comments where it's not obvious what the code does!
 Don't write comments where it's obvious what the code does!
 And please, don't write comments that just repeat what the code says, like the comment "returns the name" on the method getName()
Logging
Debugging is great and helps us understand what is going on. Logging is persistent and helps us understand not only what is going on but also what has happened!
Log critical lines of code!
And here is a snippet from the log4j framework:
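The same idea can be sketched with the standard java.util.logging API (log4j's usage is analogous); the Billing class and the discount example are made up for illustration:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class Billing {

    private static final Logger LOGGER = Logger.getLogger(Billing.class.getName());

    public static double applyDiscount(double amount, double discount) {
        // log the critical line: the decision and its inputs stay in the log,
        // so we can reconstruct what happened long after the run:
        LOGGER.log(Level.INFO, "applying discount {0} to amount {1}",
                new Object[]{discount, amount});
        return amount - amount * discount;
    }

    public static void main(String[] args) {
        System.out.println(applyDiscount(100.0, 0.1));
    }
}
```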
As Brian W. Kernighan and Rob Pike put it in their truly excellent book "The Practice of Programming":
As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.