As you know, a computer stores information in binary form, representing it as a sequence of ones and zeros. To translate information into a form convenient for human perception, each unique sequence of numbers is replaced by its corresponding symbol when displayed.
One such system for mapping binary codes to printable and control characters is ASCII (American Standard Code for Information Interchange).
At the current level of development of computer technology, the user is not required to know the code of each specific character. However, a general understanding of how coding is carried out is extremely useful, and for some categories of specialists, even necessary.
Creating ASCII
The encoding was originally developed in 1963 and was then revised twice, in 1967 and 1986.
In the original version, the ASCII character table included 128 characters; later an extended version appeared, in which the first 128 characters were preserved and previously missing characters were assigned codes that use the eighth bit.
For many years, this encoding was the most popular in the world. In 2006, Windows-1252 took the leading position, and from the end of 2007 to the present, Unicode (UTF-8) has firmly held first place.
Computer representation of ASCII
Each ASCII character has its own code made up of bits, each a zero or a one: seven bits in the original table, eight in the extended one. The smallest value in this representation is zero (all bits zero), which is the code of the first element in the table, the NUL control character.
Two codes in the table were reserved for switching between standard US-ASCII and its national variant.
After ASCII grew from 128 to 256 characters, an encoding variant became widespread in which the original table occupied the first 128 codes (eighth bit set to zero), while characters of national scripts were stored in the upper half of the table (positions 128-255).
The user does not need to know ASCII character codes directly. A software developer usually only needs to know the element number in the table to calculate its code using the binary system if necessary.
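As a quick illustration (a Python sketch, not part of the original standard text), a character's position in the table, its binary code, and the character itself are all interconvertible:

```python
# The position in the ASCII table and the binary code are the same number.
code = ord("A")                 # table position of "A"
print(code)                     # 65
print(format(code, "08b"))      # eight-bit binary representation: 01000001
print(chr(code))                # back to the character: A
print(ord("\0"))                # NUL, the first element of the table: 0
```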
Russian language
After the development of encodings for the Scandinavian languages, Chinese, Korean, Greek, etc. in the early 70s, the Soviet Union began creating its own version. Soon, a version of the 8-bit encoding called KOI8 was developed, preserving the first 128 ASCII character codes and allocating the same number of positions for letters of the national alphabet and additional characters.
Before the introduction of Unicode, KOI8 dominated the Russian segment of the Internet. There were encoding options for both the Russian and Ukrainian alphabet.
ASCII problems
Since even the extended table held no more than 256 elements, several different scripts could not be accommodated in one encoding. In the 1990s, the "krakozyabry" (mojibake) problem appeared on the Runet: texts typed in one Russian code page were displayed as gibberish on systems using another.
The root of the problem was that the various Russian code pages did not match one another. Recall that positions 128-255 could hold different characters in different encodings, so when a text in one Cyrillic encoding was read using another, every letter was replaced by whichever character occupied the same position in the other table.
Current Status
With the advent of Unicode, the popularity of ASCII began to decline sharply.
The reason for this lies in the fact that the new encoding made it possible to accommodate characters from almost all written languages. In this case, the first 128 ASCII characters correspond to the same characters in Unicode.
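This backward compatibility is easy to verify directly; a minimal Python check:

```python
# Every code point 0..127 encodes to the same single byte
# whether treated as ASCII or as UTF-8.
for i in range(128):
    assert chr(i).encode("ascii") == chr(i).encode("utf-8") == bytes([i])
print("all 128 ASCII characters are identical in UTF-8")
```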
In 2000, ASCII was the most popular encoding on the Internet and was used on 60% of web pages indexed by Google. By 2012, the share of such pages had dropped to 17%, and Unicode (UTF-8) took the place of the most popular encoding.
Thus, ASCII is an important part of the history of information technology, but its use in the future seems unpromising.
Character overlay
The BS (backspace) character allows the printer to print one character on top of another. ASCII provided for adding diacritics to letters in this way, for example:
- a BS ' → á
- a BS ` → à
- a BS ^ → â
- o BS / → ø
- c BS , → ç
- n BS ~ → ñ
Note: in old fonts, the apostrophe ' was drawn slanted to the left and the tilde ~ was shifted upward, so they fitted the role of an acute accent and a tilde placed above a letter.
If a character is overprinted with itself, the result is a bold-type effect, and if it is overprinted with an underscore, the result is underlined text:
- a BS a → a (bold)
- a BS _ → a (underlined)
Note: This is used, for example, in the man help system.
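The overstrike convention above is easy to emulate. The sketch below (with hypothetical helper names) builds man/nroff-style backspace sequences and strips them roughly the way `col -b` does:

```python
def bold(s: str) -> str:
    # each character printed over itself: x BS x
    return "".join(c + "\b" + c for c in s)

def underline(s: str) -> str:
    # underscore printed in the same position: _ BS x
    return "".join("_" + "\b" + c for c in s)

def strip_overstrike(s: str) -> str:
    # Undo backspace overprinting: a backspace erases the previous cell,
    # and the following character replaces it.
    out = []
    for ch in s:
        if ch == "\b":
            if out:
                out.pop()
        else:
            out.append(ch)
    return "".join(out)

assert strip_overstrike(bold("man")) == "man"
assert strip_overstrike(underline("man")) == "man"
```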
National ASCII variants
The ISO 646 (ECMA-6) standard provides for the possibility of placing national characters in the positions of @ [ \ ] ^ ` { | } ~. In addition, £ may be substituted for #, and ¤ for $. This scheme is well suited for European languages that need only a few extra characters. The variant of ASCII without national characters is called US-ASCII, or the "International Reference Version".
Subsequently, it proved more convenient to use 8-bit encodings (code pages), in which the lower half of the code table (0-127) is occupied by US-ASCII characters and the upper half (128-255) by additional characters, including a set of national ones. Thus, before the widespread adoption of Unicode, the upper half of the table was actively used to represent the letters of local languages. The lack of a unified standard for placing Cyrillic characters caused many problems with encodings (KOI-8, Windows-1251 and others); other languages with non-Latin scripts also suffered from having several different encodings.
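The split into a shared lower half and an encoding-specific upper half can be observed directly; a small Python check:

```python
text = "Привет"  # Cyrillic text lands in the upper half (byte values >= 0x80)
koi8 = text.encode("koi8_r")
cp1251 = text.encode("cp1251")

# Same letters, different byte values in the two code pages:
assert koi8 != cp1251
assert all(b >= 0x80 for b in koi8) and all(b >= 0x80 for b in cp1251)

# The lower half (US-ASCII) is identical in both:
assert "Hello".encode("koi8_r") == "Hello".encode("cp1251") == b"Hello"
```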
An early version of the ASCII table:

|    | .0 | .1 | .2 | .3 | .4 | .5 | .6 | .7 | .8 | .9 | .A | .B | .C | .D | .E | .F |
|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|
| 0. | NUL | SOM | EOA | EOM | EOT | WRU | RU | BELL | BKSP | HT | LF | VT | FF | CR | SO | SI |
| 1. | DC0 | DC1 | DC2 | DC3 | DC4 | ERR | SYNC | LEM | S0 | S1 | S2 | S3 | S4 | S5 | S6 | S7 |
| 2. | BLANK | ! | " | # | $ | % | & | ' | ( | ) | * | + | , | - | . | / |
| 3. | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | : | ; | < | = | > | ? |
| 4. | @ | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O |
| 5. | P | Q | R | S | T | U | V | W | X | Y | Z | [ | \ | ] |  | ← |
| 6. |  | a | b | c | d | e | f | g | h | i | j | k | l | m | n | o |
| 7. | p | q | r | s | t | u | v | w | x | y | z |  |  |  | ESC | DEL |
On computers whose minimum addressable unit of memory was a 36-bit word, 6-bit characters were initially used (1 word = 6 characters). After the transition to ASCII, such machines packed either five 7-bit characters into a word (with one bit left over) or four 9-bit characters.
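The five-characters-per-word layout can be sketched as follows (hypothetical pack/unpack helpers in Python):

```python
def pack36(s: str) -> int:
    """Pack five 7-bit ASCII characters into one 36-bit word."""
    assert len(s) == 5 and all(ord(c) < 128 for c in s)
    word = 0
    for c in s:
        word = (word << 7) | ord(c)
    return word  # occupies 35 bits; one bit of the 36 is left over

def unpack36(word: int) -> str:
    """Extract the five 7-bit characters back out of the word."""
    return "".join(chr((word >> (7 * i)) & 0x7F) for i in range(4, -1, -1))

w = pack36("HELLO")
assert w < 2 ** 36            # fits in a 36-bit word
assert unpack36(w) == "HELLO"
```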
ASCII codes are also used in programming to determine which key on a standard QWERTY keyboard has been pressed.
8-bit encodings: ASCII, KOI-8R and CP1251
The first encoding tables, created in the United States, did not use the eighth bit of a byte. Text was represented as a sequence of bytes in which the eighth bit was ignored (it was reserved for service purposes). This table became the generally accepted standard known as ASCII (American Standard Code for Information Interchange). The first 32 characters of the ASCII table (codes 00 to 1F) are non-printing characters, designed to control a printing device and the like; the rest, from 20 to 7F, are ordinary (printable) characters. Table 1 - ASCII encoding.
As is easy to see, this encoding contains only Latin letters, and only those used in English, along with digits, arithmetic signs and other service symbols. There are no Russian letters, nor even the special Latin letters needed for German or French. This is easy to explain: the encoding was developed specifically as an American standard.

As computers came into use throughout the world, other characters had to be encoded, and it was decided to use the eighth bit of each byte for this purpose. That made 128 more values available (codes 80 to FF) for encoding characters. The first of the eight-bit tables, "Extended ASCII", included various Latin characters used in some Western European languages, as well as other additional symbols, including pseudographics. Pseudographic characters provide a semblance of graphics by displaying only text characters on the screen; the file manager FAR Manager, for example, draws its interface using pseudographics.

The Extended ASCII table contained no Russian letters. Russia (formerly the USSR) and other countries created their own encodings to represent specific "national" characters in 8-bit text files: Latin letters of Polish and Czech, Cyrillic (including Russian letters) and other alphabets. In all the encodings that became widespread, the first 128 characters (byte values with the eighth bit equal to 0) are the same as in ASCII, so an ASCII file is readable in any of them: the letters of the English language are represented identically.

The ISO (International Organization for Standardization) adopted the ISO 8859 group of standards, which defines 8-bit encodings for different language groups. ISO 8859-1 is an Extended ASCII table for the USA and Western Europe, while ISO 8859-5 is a table for the Cyrillic alphabet (including Russian).
For historical reasons, however, ISO 8859-5 never took root. In practice, the following encodings are used for the Russian language:

Code Page 866 (CP866), also known as "DOS" or the "alternative GOST encoding". It was widely used until the mid-90s and is now used only to a limited extent, practically never for distributing texts on the Internet. Its main advantage was that it kept the pseudographic characters in the same positions as Extended ASCII, so foreign text-mode programs, such as the famous Norton Commander, could run unchanged. CP866 is still used by Windows programs running in text windows or full-screen text mode, including FAR Manager, and it is used to encode Russian file names in Windows, but texts in CP866 have become quite rare in recent years.

The two other encodings, KOI-8R and CP1251, deserve a closer look. In the CP1251 table, the Russian letters are arranged in alphabetical order (with the exception, however, of the letter Ё), which makes alphabetical sorting very easy for programs. In KOI-8R the order of the Russian letters looks random, but in reality it is not. In many older programs, the eighth bit was lost when text was processed or transmitted (such programs are practically extinct now, but in the late 80s and early 90s they were widespread). Turning an 8-bit value into a 7-bit one simply subtracts 8 from the most significant hex digit: E1, for example, becomes 61. Now compare KOI-8R with the ASCII table (Table 1): the Russian letters are placed in deliberate correspondence with the Latin ones. When the eighth bit disappears, lowercase Russian letters turn into uppercase Latin letters and uppercase Russian letters into lowercase Latin ones; E1 in KOI-8 is the Russian "А", while 61 in ASCII is the Latin "a". Thus KOI-8 keeps Russian text readable when the eighth bit is lost: "Hello everyone" becomes "pRIWET WSEM".
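This property can be checked directly: encode the text in KOI8-R, clear the eighth bit of every byte, and read the result as ASCII (a Python sketch):

```python
data = "Привет всем".encode("koi8_r")       # "Hello everyone" in KOI8-R
seven_bit = bytes(b & 0x7F for b in data)   # the eighth bit is lost
print(seven_bit.decode("ascii"))            # pRIWET WSEM - still readable
```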
Recently, both the alphabetical ordering of characters in the encoding table and readability after loss of the eighth bit have lost their decisive importance: modern computers do not lose the eighth bit during transmission or processing, and alphabetical sorting is done with the encoding taken into account rather than by simple comparison of codes. (Incidentally, the CP1251 codes are not completely alphabetical either: the letter Ё is out of place.)

Because two encodings remain in common use, when working with the Internet (mail, browsing Web sites) you can sometimes see a meaningless jumble of letters instead of Russian text. For example, "Я СБЮФЕМХЕЛ" (rendered in this translation as "I AM SBYUFEMHEL"): these are just the words "с уважением" ("with respect"), encoded in CP1251 and decoded by the computer using the KOI-8 table. If, conversely, the same words were encoded in KOI-8 and decoded according to the CP1251 table, the result would be "У ХЧБЦЕОЙЕН" ("U HCHBTSEOYEN"). Sometimes a computer deciphers Russian-language text using a table not intended for Russian at all; then, instead of Russian letters, a meaningless set of symbols appears (for example, Latin letters of Eastern European languages), often called "krakozyabry".

In most cases, modern programs determine the encodings of Internet documents (emails and Web pages) on their own. But sometimes they misfire, and then you may see strange sequences of Russian letters or krakozyabry. As a rule, in such a situation it is enough to select the encoding manually in the program menu to display the real text. Information from the page http://open-office.edusite.ru/TextProcessor/p5aa1.html was used for this article.
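Mojibake of this kind is simply encoding with one table and decoding with another, and it is reversible once the real encodings are known; in Python:

```python
original = "с уважением"  # "with respect"

# Encoded as CP1251 but decoded as KOI8-R:
garbled = original.encode("cp1251").decode("koi8_r")
print(garbled)  # Я СБЮФЕМХЕЛ

# The reverse direction gives different garbage:
print(original.encode("koi8_r").decode("cp1251"))  # У ХЧБЦЕОЙЕН

# Knowing the real encodings, the damage can be undone:
assert garbled.encode("koi8_r").decode("cp1251") == original
```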
Unicode is a character-encoding standard: put simply, a table of correspondence between text characters (letters, punctuation marks and other symbols) and binary codes. The computer understands only sequences of zeros and ones, so for it to know what exactly to display on the screen, each character must be assigned its own unique number.

In the eighties, characters were encoded in one byte, that is, eight bits (each bit is a 0 or a 1), so a single table (also called an encoding or character set) could hold only 256 characters. This may not be enough even for one language. As a result, many different encodings appeared, and confusion between them often produced strange gibberish on the screen instead of readable text. A single standard was required, and Unicode became that standard. The most widely used of its encodings is UTF-8 (Unicode Transformation Format), which uses 1 to 4 bytes to represent a character.

Characters in the Unicode tables are numbered with hexadecimal numbers. For example, the Cyrillic capital letter М is designated U+041C, meaning it stands at the intersection of row 041 and column C.

The Unicode standard is international and includes characters from almost all scripts of the world, including those no longer in use: Egyptian hieroglyphs, Germanic runes, Mayan writing, cuneiform and the alphabets of ancient states. Designations of weights and measures, musical notation and mathematical concepts are also represented.
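The variable-length property of UTF-8 is easy to observe in Python:

```python
# The Cyrillic capital letter М is code point U+041C.
assert ord("М") == 0x041C

# UTF-8 uses 1 to 4 bytes per character, depending on the code point:
assert len("A".encode("utf-8")) == 1   # ASCII range
assert len("М".encode("utf-8")) == 2   # Cyrillic
assert len("€".encode("utf-8")) == 3   # euro sign, U+20AC
assert len("𝄞".encode("utf-8")) == 4   # musical G clef, U+1D11E
```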
The Unicode Consortium does not itself invent new characters; it adds to the tables those that have found real use in society. The ruble sign, for example, was actively used for six years before it was added to Unicode, and emoji pictograms (emoticons) were widely used in Japan before they were included in the encoding. Trademarks and company logos, however, are not added on principle, even ones as common as the Apple apple or the Windows flag. As of version 8.0, about 120 thousand characters have been encoded.