If you’ve used a computer for more than 20 minutes, you’ve probably heard about bits and bytes. Hard drives, computer chips, networks, and cables (like Cat5e or HDMI cables) often mention bits or bytes in their specifications. And computers aren’t the only things that use bits; TVs, sound systems, smartphones, and pretty much any other electronic device you can think of use them too. So what is a bit? Bit is short for binary digit, and it refers to the basic unit of information in computing and telecommunication.
What is Binary?
As humans, we use a base-10 (or decimal) system, meaning we compute things using decimal digits (all of the digits from 0-9) multiplied by powers of ten. For example, the number 50 is (5 * 10^1) + (0 * 10^0), and 354 is (3 * 10^2) + (5 * 10^1) + (4 * 10^0). Since computers use binary, they compute things using binary digits (or bits) multiplied by powers of two. That means that while we use all the digits from 0-9, computers only use 0 and 1. So the number 12 would be (1 * 2^3) + (1 * 2^2) + (0 * 2^1) + (0 * 2^0), or 1100.
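To see positional notation in action, here's a short sketch (the language choice is just for illustration; the article itself isn't tied to any programming language):

```python
# Expand 354 in base 10: each digit is multiplied by a power of ten.
decimal_354 = 3 * 10**2 + 5 * 10**1 + 4 * 10**0
print(decimal_354)  # 354

# Expand 1100 in base 2: each bit is multiplied by a power of two.
binary_1100 = 1 * 2**3 + 1 * 2**2 + 0 * 2**1 + 0 * 2**0
print(binary_1100)  # 12

# Python's built-in bin() confirms 12 is 1100 in binary.
print(bin(12))  # 0b1100
```

The `0b` prefix in the last line is just Python's way of marking a number as binary.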
You may ask, “Why binary, if we use decimal?” Basically, it all boils down to cost. It is a lot easier to build binary hardware with current electronic technology: a circuit only has to distinguish two states, like on and off. You could theoretically build a computer that operates in base-10, but it would be horribly expensive right now, while base-2 computers are relatively cheap to build.
Since each bit only offers you one of two options (0 or 1), the only way to get more value possibilities is to group bits together. Each bit you add doubles the number of possible values, so n bits give you 2^n possibilities. While one bit only gives you 2 possibilities, 3 bits give you 8 possibilities, 8 bits give you 256 possibilities, and so on. This concept is especially visible when looking at color depth.
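The doubling-per-bit idea can be printed out directly (the 24-bit example at the end is the common "true color" depth mentioned in graphics specs):

```python
# Each added bit doubles the number of distinct values a group of bits can hold.
for bits in (1, 3, 8, 24):
    print(f"{bits:>2} bits -> {2**bits:,} possible values")
```

Running this shows 1 bit giving 2 values, 3 bits giving 8, 8 bits giving 256, and 24 bits giving 16,777,216 distinct colors.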
Okay, So What’s a Byte?
Bits are usually grouped together in 8-bit sets called bytes. Why 8 bits in a byte? Good question. The 8-bit byte is a convention that people settled on over 50+ years of trial and error. Since "bit" and "byte" sound so similar, they can often cause confusion. Luckily, bits are so often grouped together into bytes that you won't normally hear the term "bit" except in networking. Networks usually advertise speeds in terms like "megabit" or "gigabit." It's important not to confuse these with "megabyte" or "gigabyte," since the numbers are very different. To learn more about bits in networking, check out this article.
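Here's why the bit/byte distinction matters in practice. A quick sketch converting an advertised network speed (the 100 Mbps figure is just an example) into bytes:

```python
# Network speeds are quoted in megaBITS per second, but file sizes
# are usually shown in megaBYTES. A byte is 8 bits, so divide by 8.
link_speed_mbps = 100               # advertised: 100 megabits per second
throughput_mBps = link_speed_mbps / 8
print(throughput_mBps)  # 12.5 -> only 12.5 megabytes per second
```

So a "100 megabit" connection downloads at most about 12.5 megabytes per second, an eightfold difference that surprises a lot of people.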
Prefixes like “mega” and “giga” are extremely helpful when dealing with massive amounts of bits and bytes. Each prefix is a binary multiplier. Kilo (K) = 2^10 (1,024), Mega (M) = 2^20 (1,048,576), Giga (G) = 2^30 (1,073,741,824), and Tera (T) = 2^40 (1,099,511,627,776). There are more prefixes that get into even more ridiculously large numbers, but these are the ones most commonly seen today. As you can see, each prefix is roughly a thousand times (1,024 times, to be exact) larger than the previous one. You may wonder why we would need so much space, but an average HD video can be several gigabytes. With all of the multimedia people are using today, we need the space. And it won’t be long before we start needing those larger prefixes.
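Since each prefix is just the previous one multiplied by 2^10, the whole table can be generated in a few lines:

```python
# Each binary prefix is a successive power of 2^10 = 1,024.
prefixes = {"Kilo": 10, "Mega": 20, "Giga": 30, "Tera": 40}
for name, power in prefixes.items():
    print(f"{name} (2^{power}) = {2**power:,}")
```

This prints the same values listed above: 1,024; 1,048,576; 1,073,741,824; and 1,099,511,627,776.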
- Bits (binary digits) are basic units of information used by all sorts of electronic devices
- The more bits you use, the more information you can represent. That means more bits equal better color, better sound, better video, more storage space, more memory, etc.
- A Byte is a collection of 8 bits
- As you get into larger numbers of bits and bytes you start using prefixes, like mega, giga, and tera