ASCII vs EASCII vs UNICODE

Wamae Benson
2 min read · Nov 28, 2020


ASCII

ASCII defines 128 characters, which map to the numbers 0–127.

In ASCII, every letter, digit, and symbol that mattered (a–z, A–Z, 0–9, +, -, /, ", ! etc.) was represented as a number between 32 and 127, with the codes below 32 reserved for control characters. Most computers at the time used 8-bit bytes, and a byte can hold 2⁸ = 256 distinct values (0–255). So each byte (or unit of storage) had more than enough space to store the basic set of English characters.
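A quick way to see this mapping for yourself is a short sketch in Python (not part of the original story, just illustrative):

```python
# Every ASCII character maps to a small number; ord() converts
# a character to its code, chr() goes the other way.
for ch in ["A", "a", "0", "+", "!"]:
    print(ch, "->", ord(ch))  # A -> 65, a -> 97, 0 -> 48, + -> 43, ! -> 33

# All of ASCII fits in one byte per character.
print(len("Hello".encode("ascii")))  # 5 characters -> 5 bytes
```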

Life was good. Heck, they even sent people to the moon! That is, if you spoke English. What if you spoke Chinese, Japanese, Korean, or Arabic? And what about emojis?

EASCII (Extended ASCII)

ASCII originally used seven bits to encode each character. Extended ASCII later used the eighth bit to double the range, adding codes 128–255 for accented letters, box-drawing symbols, and other extras. The catch: different systems filled those upper 128 slots differently (the infamous "code pages"), so the same byte could display as different characters on different machines. Eight bits patched the problem for Western European languages, but could never hold the thousands of characters needed for Chinese, Japanese, or Korean.
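To make that concrete, here is a small sketch using Latin-1 (ISO 8859-1), one well-known Extended ASCII code page that Python ships a codec for:

```python
# 'é' has no 7-bit ASCII code, but Latin-1 (an Extended ASCII
# code page) assigns it the single byte 0xE9.
text = "café"

try:
    text.encode("ascii")
except UnicodeEncodeError:
    print("ASCII cannot represent 'é'")

print(text.encode("latin-1"))  # b'caf\xe9'
```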

Enter Unicode

The full table of Unicode characters can be browsed at https://unicode-table.com/en/.

In contrast, Unicode separates the character set from its encoding: every character gets a numeric code point, and you choose an encoding form (UTF-8, UTF-16, or UTF-32) to turn those code points into bytes. UTF-32 uses a fixed four bytes per character, which is simple but bulky; UTF-8 uses one to four bytes per character, and because its one-byte range is identical to ASCII, it is usually the best choice when you are encoding a large document in English.
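A rough sketch of that size trade-off, using Python's built-in codecs (the -le variants just suppress the byte-order mark so the counts stay clean):

```python
# Same text, three encodings: the byte counts differ, the characters don't.
for label, s in [("English", "Hello"), ("Chinese", "你好")]:
    for codec in ("utf-8", "utf-16-le", "utf-32-le"):
        print(f"{label:8} {codec:10} {len(s.encode(codec))} bytes")

# English: utf-8 = 5, utf-16-le = 10, utf-32-le = 20 bytes
# Chinese: utf-8 = 6, utf-16-le = 4,  utf-32-le = 8 bytes
```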

Another major advantage of Unicode is the sheer size of its code space: 1,114,112 possible code points (U+0000 through U+10FFFF). Because of this, Unicode currently contains most written languages and still has room for even more. This includes typical left-to-right scripts like English and right-to-left scripts like Arabic. Chinese, Japanese, and many other scripts are also represented within Unicode. So Unicode won't be replaced anytime soon.
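One code-point space for everything means you can inspect any script the same way; a short illustrative sketch:

```python
# Latin, Arabic, Chinese, and emoji all live in one code space.
for ch in ["A", "ب", "中", "😀"]:
    print(ch, f"U+{ord(ch):04X}")
# A -> U+0041, ب -> U+0628, 中 -> U+4E2D, 😀 -> U+1F600
```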

References:

Difference Between Unicode and ASCII, DifferenceBetween.net: http://www.differencebetween.net/technology/software-technology/difference-between-unicode-and-ascii/
