Every natural number can be uniquely represented using the powers of a natural number b (b^0 = 1, b^1 = b, b^2, ...), the so-called base, which must be greater than 1. A natural number n can then be written in terms of the powers of b as

    n = a_k*b^k + a_(k-1)*b^(k-1) + ... + a_1*b + a_0*1,

where the a_i (i from 0 to k) are numbers between 0 and b-1, called the digits, and b^k is the largest power of b not exceeding n. It is provable that, given a base b, there is exactly one sequence of digits a_k, a_(k-1), ..., a_1, a_0 for which the above formula holds. Because of this, one can represent the number n by this sequence of digits, written a_k a_(k-1) ... a_1 a_0. The bigger b is, the shorter the sequence, and vice versa. To represent arbitrary natural numbers in base b, one thus needs b different symbols for the digits 0, ..., b-1. Classically one uses the Hindu-Arabic digits 0,1,2,3,4,5,6,7,8,9; in base 'sixteen', for example, one uses the symbols 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F.

As an example, let us represent the number 'two hundred three' in base 'five': 5^3 = 125 is the largest power of five not exceeding 'two hundred three', and it fits into n one time, so a_3 = 1 and the remainder is 78. 5^2 = 25 fits 3 times into that remainder, so a_2 = 3 and the new remainder is 3. 5^1 = 5 fits zero times into this remainder, so a_1 = 0. Finally, 5^0 = 1 fits 3 times into the last remainder, so a_0 = 3. Thus the base-5 representation of 'two hundred three' is 1303. (A sketch of this conversion procedure in C follows below.)

On the hardware level of information processing, one usually distinguishes between two states and encodes information as sequences of these states, using the digits '0' and '1' to represent them. Such a sequence of zeros and ones can be interpreted as a number in base 2, called a binary number. This representation is usually made more compact by changing the base in such a way that a fixed group of 'binary digits' (usually 3 or 4 of them) corresponds to one digit in another base (usually base 8, octal, or base 16, hexadecimal) and is replaced by it. In some programming languages, such as C, these numbers are prefixed by '0' (octal) or '0x' (hexadecimal). Thus 011011100101 (binary) becomes 03345 (octal) or 0x6E5 (hexadecimal).

Because 7 bits ('bit' is short for 'binary digit') were used for each character in the ASCII (1) character set, and 8 in extended formats such as Latin-1, it is useful to treat 8 bits (called a 'byte') as the basic unit of information; a byte can always be written with exactly 2 hexadecimal digits.
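The following is a minimal sketch in C of the conversion described above. Instead of subtracting the largest fitting power of b, as in the worked example, it uses the equivalent method of repeated division by b, which produces the digits a_0, a_1, ... from the least significant end; the function name to_base and the buffer handling are our own illustrative choices, not anything prescribed by the text.

    #include <stdio.h>

    /* Write the digits of n in base b (2 <= b <= 16) into buf,
       most significant digit first; return the number of digits. */
    static int to_base(unsigned int n, unsigned int b, char *buf)
    {
        const char *symbols = "0123456789ABCDEF"; /* digit symbols up to base 16 */
        char tmp[64];
        int len = 0;

        do {                            /* repeated division yields a_0, a_1, ... */
            tmp[len++] = symbols[n % b];
            n /= b;
        } while (n > 0);

        for (int i = 0; i < len; i++)   /* reverse: most significant digit first */
            buf[i] = tmp[len - 1 - i];
        buf[len] = '\0';
        return len;
    }

    int main(void)
    {
        char buf[64];
        to_base(203, 5, buf);
        printf("203 in base 5 is %s\n", buf);   /* prints 1303, as in the text */
        to_base(203, 16, buf);
        printf("203 in base 16 is %s\n", buf);  /* prints CB */
        return 0;
    }

Repeated division and repeated subtraction of powers give the same digit sequence; division is simply easier to code because it needs no precomputed largest power.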
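Since the text mentions C's '0' and '0x' prefixes, the short program below illustrates them with the number from the text: the octal literal 03345 and the hexadecimal literal 0x6E5 denote the same value (standard C before C23 has no binary literal, so the binary form cannot be written directly). It also shows that one byte always fits in exactly two hexadecimal digits.

    #include <stdio.h>

    int main(void)
    {
        unsigned int n = 03345;     /* octal literal, prefix 0   */
        unsigned int m = 0x6E5;     /* hexadecimal literal, prefix 0x */

        printf("%u %u\n", n, m);    /* 1765 1765 -- the same value */
        printf("%o 0x%X\n", n, n);  /* 3345 0x6E5 */

        /* One byte (8 binary digits) is exactly two hexadecimal digits: */
        unsigned char byte = 0xE5;  /* binary 11100101 */
        printf("%02X\n", byte);     /* E5 */
        return 0;
    }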