
Random access memory (RAM) is an array of cells, formed on a silicon chip, capable of storing data.

Memory elements form the basis of the internal functioning of any computing system, since with their help data is stored and can be read again during further processing. The central processor has direct access to data located in RAM (Random Access Memory). RAM is the computer's fast storage medium.

The RAM is tasked with providing the necessary information at the request of the central processor. This means that the data must be available for processing at any time. Memory elements are “temporary” storage devices. This is due not only to the power supply, but also to the structure of the memory modules themselves.

Each RAM element is a system of electronic switches (transistors) and a capacitor that stores information in the form of a charge. This capacitor is not ideal: its capacitance is small, and because it is formed in a semiconductor junction deep in the silicon crystal, parasitic resistances appear through which charge leaks away from the capacitor (at the same time distorting information in neighboring cells). The presence of charge on the capacitor corresponds to a logical one. The time for which information is stored reliably in a RAM cell is usually a few milliseconds; after this the information must be rewritten. This rewriting procedure is called memory regeneration (refresh).

The only way to regenerate information stored in memory is to perform read or write operations from memory. If information is stored in RAM and then left unused for a few milliseconds, it will be lost as the storage capacitors are completely discharged.

Memory regeneration occurs with each operation of reading or writing data to RAM. When executing any program, it cannot be guaranteed that all RAM cells will be accessed. Therefore, there is a special circuit that, at certain intervals (for example, every 2 ms), accesses (for reading) all lines of RAM. At these moments, the central processor is in a standby state. In one cycle, the circuit regenerates all lines of RAM.

The principle of operation of RAM is as follows. Typically, memory cells are organized into a matrix of rows and columns, and the full address of a data cell (1 bit of information) is divided into two components: a row address and a column address. The row address is transferred to the memory chip accompanied by the RAS (Row Address Strobe) signal, and the column address by the CAS (Column Address Strobe) signal.
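To make the row/column split concrete, here is a small Python sketch (the 1024 x 1024 matrix size is an illustrative assumption, not a value taken from any particular chip):

```python
# Sketch: split a flat DRAM cell address into the row and column parts
# that are latched by RAS and CAS. The 1024 x 1024 matrix is assumed.
ROWS, COLS = 1024, 1024

def split_address(addr):
    """Return (row, column) for a flat cell address."""
    assert 0 <= addr < ROWS * COLS
    row = addr // COLS   # sent first, accompanied by RAS
    col = addr % COLS    # sent second, accompanied by CAS
    return row, col

print(split_address(5000))  # (4, 904) -> row 4, column 904
```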

In the process of accessing a dynamic memory chip to write and read information, first the address code and, at the same time, the RAS signal are supplied to its address inputs, then, with a slight delay, the column address code, accompanied by the CAS signal. The access time to a RAM block is determined primarily by the reading time (capacitor discharge) and regeneration (capacitor charge). Let's take a closer look at how dynamic memory works. When accessing memory (regardless of whether it is reading or writing), the row address and the RAS signal are supplied to the memory inputs. This means that each column bus is connected to the memory cell of the selected row. Since information is stored in the form of capacitor charge, in order to read the information recorded in the cell, a device with a high input resistance is needed to limit the discharge current of the capacitor in order to avoid leakage current. Such a device is a readout amplifier connected to each bus of the dynamic memory column. Information is read from the entire row of storage elements simultaneously and placed in a register.

Working principle of RAM

As noted above, with a slight delay after the RAS signal, the column address and the CAS signal are supplied to the dynamic memory inputs. When reading according to a column address, data is fetched from the row register and supplied to the dynamic memory output. When reading information from storage cells, the reading amplifiers destroy it, so to save the information it is necessary to rewrite it: the outputs of the row register are again connected to the common buses of the memory columns in order to rewrite the information read from the row.

If a memory write cycle is performed, the WR (Write) signal is applied and information is supplied to the common column bus not from the register, but from the memory information input through a switch determined by the column address. Thus, the passage of data when written is determined by a combination of the column and row address signals and the permission to write data to memory. When writing, data from the row register is not sent to the output (Do).

Memory types

DRAM (Dynamic RAM) - dynamic random access memory gets its name from the principle of operation of its storage cells, which are made in the form of capacitors formed by elements of semiconductor microcircuits. When talking about this type of RAM, we mean a chip in a DIP package (Dual In-line Package - a package with a 2-row pinout arrangement). DRAM elements, in the form of individual chips, are usually found on older motherboards. These chips were also used to build composite memory modules such as SIP and SIMM modules.

DRAM is used in most RAM systems of modern personal computers. The main advantage of this type of memory is that its cells are packed very tightly, i.e., many bits can be packed into a small chip, which means that large-capacity memory can be built on their basis.

The memory cells in a DRAM chip are tiny capacitors that hold charges. This is exactly how the bits are encoded (by the presence or absence of charge). The problems associated with this type of memory are caused by the fact that it is dynamic, i.e. it must be constantly regenerated, otherwise the electrical charges in the memory capacitors will "drain" and data will be lost. A refresh occurs when the system's memory controller takes a tiny break and accesses all the data lines in the memory chips. Most systems have a memory controller (usually built into the motherboard chipset) that is set to an industry-standard refresh interval of 15 µs. All data lines are accessed after 128 such regeneration cycles. This means that every 1.92 ms (128 × 15 µs) all lines in memory are read to ensure data regeneration.
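The arithmetic behind these figures is easy to check; a small sketch using the numbers quoted above:

```python
# Refresh arithmetic from the text: one refresh cycle every 15 microseconds,
# 128 cycles needed to touch every row.
refresh_step_us = 15      # interval between refresh cycles, microseconds
cycles_per_sweep = 128    # refresh cycles per full sweep of all rows

sweep_ms = refresh_step_us * cycles_per_sweep / 1000
print(f"all rows refreshed every {sweep_ms} ms")  # 1.92 ms
```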

Memory regeneration, unfortunately, takes time from the processor: each regeneration cycle takes several CPU cycles in duration. In older computers, refresh cycles could consume up to 10% (or more) of CPU time, but in modern systems running at hundreds of megahertz, refresh cycles account for 1% (or less) of CPU time. Some systems allow you to change refresh settings using the CMOS setup program, but increasing the time between refresh cycles can cause some memory cells to drain charge, causing memory failures. In most cases, it is safer to stick to the recommended or default regeneration frequency.

Since regeneration overhead in modern computers is less than 1%, changing the refresh rate has little impact on performance. The most sensible option is to use the default values or the automatic memory-timing settings in the BIOS Setup program. Most modern systems do not allow the specified memory timing to be changed at all, always using automatically set parameters. With automatic configuration, the motherboard reads the timing parameters from the module's serial presence detect (SPD) ROM and sets the memory timing in accordance with that data.

DRAM devices use only one transistor and capacitor pair to store each bit, which is why their cells can be packed so densely. Dynamic RAM chips with capacities of 512 Mbit and more are now produced, which means that a single chip contains hundreds of millions of transistors! The Pentium 4, by comparison, has only 42 million transistors. Why such a difference? In a memory chip all the transistors and capacitors are laid out in series, usually at the nodes of a square lattice, as very simple, periodically repeating structures, unlike the processor, which is a more complex circuit of varied structures without such regular organization.

The transistor in each DRAM cell is used to read the state of the adjacent capacitor. If the capacitor is charged, the cell holds a 1; if there is no charge, a 0. Charges in the tiny capacitors drain away all the time, which is why the memory must be constantly regenerated. Even a momentary interruption of power or a failure in the regeneration cycles will result in loss of charge in a DRAM cell, and therefore loss of data. In a running system this leads to a blue screen, general protection faults, file corruption or a complete system crash.

Dynamic random access memory is used in personal computers because it is inexpensive and the chips can be tightly packed, meaning that high-capacity storage can occupy a small space. Unfortunately, this type of memory is not very fast; it is usually much slower than the processor. Therefore, many different types of DRAM organization have been developed to improve this characteristic.

FPM DRAM (Fast Page Mode DRAM) - memory chips that implement page mode. This type of memory appeared in later models of computers with the 80486 processor and became widespread. Processor memory access time with FPM DRAM chips is reduced by 50% compared to conventional DRAM.

EDO DRAM (Extended Data Output) - memory with extended data output, widely used in Pentium-based systems. Thanks to additional registers for holding output data, the amount of data read from memory per unit of time increases. EDO RAM modules are 10-15% faster than FPM DRAM.

SDRAM (Synchronous DRAM) - the main feature of this type of memory is that all operations are synchronized with the processor clock, i.e. memory and CPU work synchronously. The synchronous interface allows efficient use of the bus and provides a peak transfer rate of 100 Mbit per pin at 100 MHz. At 133 MHz, peak performance reaches 1064 MB/s.
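The peak figure follows directly from the clock rate and the 8-byte (64-bit) width of a DIMM; a quick check of the arithmetic:

```python
# Peak SDRAM bandwidth: one 8-byte transfer per clock cycle.
def peak_mb_per_s(clock_mhz, width_bytes=8):
    return clock_mhz * width_bytes  # MHz x bytes = MB/s

print(peak_mb_per_s(100))  # 800 MB/s at 100 MHz
print(peak_mb_per_s(133))  # 1064 MB/s at 133 MHz
```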

Synchronous DRAM (SDRAM) is the first dynamic random access memory (DRAM) technology designed to synchronize memory operation with the CPU clock on the external data bus. SDRAM is based on standard DRAM and works almost the same way, but it has several distinctive characteristics that make it more advanced:

Synchronous operation. SDRAM, unlike standard asynchronous DRAM, has a clock input, so the system clock that paces the microprocessor can also pace the SDRAM. This means that the memory controller knows the exact clock cycle on which the requested data will be ready, which frees the processor from having to wait between memory accesses.

General properties of SDRAM:

  • Synchronized by clock cycles with the CPU
  • Based on standard DRAM, but significantly faster - up to 4 times
  • Specific properties: synchronous operation, interleaving of cell banks, and the ability to work in burst (pipelined) mode

Cell banks are memory cell arrays inside an SDRAM chip that are divided into two independent banks. Since both banks can be active simultaneously, a continuous data flow can be achieved simply by switching between them. This technique is called interleaving; it reduces the overall number of memory access cycles and, as a result, increases the data transfer rate. Burst mode is a fast data transfer technique that automatically fetches a block of data (a series of sequential addresses) every time the processor requests a single address. It is based on the assumption that the next address the processor requests will follow the previous one, which is usually true (the same kind of prediction is used in cache algorithms). Burst mode can be used for both read operations (from memory) and write operations (to memory).
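As a toy illustration of why interleaving helps (the cycle counts below are invented for the example, not real SDRAM timings), alternating requests between two banks hides most of each bank's recovery time:

```python
# Toy model of bank interleaving; ACCESS and RECOVER are invented timings.
ACCESS = 2    # cycles for a bank to return data
RECOVER = 3   # cycles the same bank stays busy before the next access

def total_cycles(n_requests, n_banks):
    """Cycles to serve sequential requests spread round-robin over banks."""
    ready = [0] * n_banks                # cycle at which each bank is free again
    clock = 0
    for i in range(n_requests):
        bank = i % n_banks
        clock = max(clock, ready[bank])  # stall if that bank is still recovering
        clock += ACCESS
        ready[bank] = clock + RECOVER
    return clock

print(total_cycles(8, 1))  # one bank: every access waits out the recovery
print(total_cycles(8, 2))  # two interleaved banks: most stalls are hidden
```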

Now about the claim that SDRAM is faster memory. Even though SDRAM is based on the standard DRAM architecture, the combination of the three characteristics described above allows for a faster and more efficient data transfer process. SDRAM can already transfer data at speeds up to 100 MHz, which is almost four times faster than standard DRAM. This puts SDRAM on a par with the more expensive SRAM (static RAM) used as external cache memory.

SDRAM is based on standard DRAM and works like standard DRAM, by accessing rows and columns of data cells. But SDRAM combines its specific properties (synchronous operation, cell banks, and burst mode) to largely eliminate wait states. When the processor needs data from RAM, it can get it at the right moment. The actual processing time of the data does not change; what improves is the efficiency of fetching and transferring it.

To understand how SDRAM speeds up the process of fetching and retrieving data from memory, imagine that the central processing unit has a messenger who pushes a cart to the RAM building each time he needs to drop off or pick up information. In the RAM building, the clerk responsible for forwarding and receiving information typically takes about 60 ns to process a request, and the messenger only learns how long it will take after the request has been received. He does not know whether the clerk will be ready when he arrives, so he usually allows a little extra time in case of a mistake. He waits until the clerk is ready to receive the request, then waits the usual time required to process it, and then pauses to check that the requested data has been loaded into his cart before taking it back to the CPU. Now suppose instead that every 10 nanoseconds the clerk in the RAM building must be outside, ready to receive another request or to respond to one received earlier. This makes the process more efficient, since the messenger can arrive at exactly the right time: processing of the request begins the moment it is received, and the information is sent to the CPU as soon as it is ready.

DDR SDRAM represents a further development of SDRAM. As the name suggests (Double Data Rate), in DDR SDRAM chips the data inside a burst is transferred at double speed: it is clocked on both edges of the clock pulses. At 100 MHz, DDR SDRAM has a peak transfer rate of 200 Mbit per pin, which for 8-byte DIMM modules gives 1600 MB/s. DDR-II SDRAM chips, in which the exchange runs at four times the clock rate, are expected in the future.
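The quoted figure is the same bandwidth arithmetic as for SDRAM, but with two transfers per clock:

```python
# DDR peak bandwidth: two transfers per clock on an 8-byte (64-bit) DIMM.
def ddr_peak_mb_per_s(clock_mhz, width_bytes=8, transfers_per_clock=2):
    return clock_mhz * transfers_per_clock * width_bytes

print(ddr_peak_mb_per_s(100))  # 1600 MB/s at a 100 MHz clock
```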

RDRAM (Rambus DRAM) was developed by the American company Rambus. RDRAM has a synchronous 9-bit interface. The storage core of this memory is built on CMOS dynamic memory cells. The clock frequency is 350-400 MHz, giving a peak data transfer rate of 1600 MB/s. Compared with DDR SDRAM, it has a more compact interface and greater scalability.

NVRAM is used for long-term storage of data that should never be lost under any circumstances. The letters NV in the name stand for Non-Volatile. NVRAM elements do not require power and retain their contents for a long time.

ROM- non-volatile memory with a relatively long rewriting procedure.

Very often, in various applications, it is necessary to store information that does not change during the operation of the device: programs in microcontrollers, boot loaders (BIOS) in computers, tables of digital filter coefficients in signal processors, DDCs and DUCs, and sine and cosine tables in NCOs and DDS units. Usually this information is not all needed at the same time, so the simplest devices for storing permanent information can be built on multiplexers. Such permanent storage devices are called ROM (Read-Only Memory).

The firmware for controlling a technical device is often written into permanent memory: a TV, a cell phone, various controllers, or a computer BIOS.

BootROM is a firmware such that if it is written into a suitable ROM chip and installed in a network card, it becomes possible to load the operating system onto a computer from a remote local network node. For network cards built into the computer, BootROM can be activated through BIOS.

By type of execution:

  • ROM chip;
  • One of the internal resources of a single-chip microcomputer (microcontroller), usually FlashROM.
  • CD;
  • Card;
  • Punched tape;
  • Hard-wired "1"s and "0"s (data fixed by the wiring itself).

By types of ROM chips:

  • ROM - mask ROM, manufactured at the factory; the recorded data cannot be changed afterwards.
  • PROM - programmable ROM, written once by the user.
  • EPROM - erasable programmable (reprogrammable) ROM.
  • EEPROM - electrically erasable programmable ROM. This type of memory can be erased and refilled with data tens of thousands of times; it is used in solid-state drives. One variety of EEPROM is flash memory.

Flash Memory - non-volatile memory with expanded functionality, multiple rewrites are carried out directly in the device; used for BIOS and RAM disks. In addition to the main non-volatile memory, it has a rewritable buffer of the same size for checking and debugging the contents. Rewriting from the buffer to the drive is carried out using a special command in the presence of additional +12 V power.

VRAM - dual-port memory for video adapters; it provides access from the bus side simultaneously with reading for image regeneration. CMOS Memory (Complementary Metal Oxide Semiconductor) - memory with minimal power consumption and low performance, used with battery power to store system parameters.

Cache memory (Cache Memory) - very fast RAM used as a buffer between the processor and main RAM. It is completely transparent and invisible to software. It reduces the total number of processor clock cycles spent accessing relatively slow RAM. Cache Level 1 (Internal, Integrated) - the internal cache of some 386 and 486+ processor models. Cache Level 2 (External) - external cache installed on the motherboard. It uses static SRAM chips (the fastest and most expensive) in DIP packages installed in sockets. The external cache size ranges from 64 KB to 2 MB. In addition to the memory banks themselves, an additional memory chip (tag RAM) can be installed, which stores the current list of cached blocks.

Memory modules

SIPP and SIMM - the very first modules, with single-byte organization; used in systems up to the 486.

SIPP modules are small boards with several DRAM chips soldered onto them. SIPP is an abbreviation for Single In-line Pin Package. SIPP modules connect to the system board through contact pins: below the contact strip there are 30 small pins (Fig. 1), which are inserted into the corresponding socket on the system board. SIPP modules had cutouts that prevented them from being inserted into the sockets the wrong way.

Fig.1. SIPP memory module

The abbreviation SIMM stands for Single Inline Memory Module.

SIMM modules come in capacities of 256 KB, 1, 2, 4, 8, 16 and 32 MB. SIMM modules connect to the system board through connectors (Fig. 2). The module is inserted into the plastic socket at an angle of about 70 degrees and then snapped upright by a plastic holder, so the board stands vertically. Special cutouts on the memory module prevent it from being inserted the wrong way. For connection to the motherboard, SIMM modules have not pins but gold-plated contact pads (the so-called "pins").

Fig.2. Memory module SIMM (30pin)

Looking at such a module, you can see contact pads on both sides of the board, but the pads on the reverse side merely duplicate those on the front (they are connected through internal metallization), i.e. electrically the module has a single-sided contact arrangement.

There is a parameter that characterizes a module specifically: its width, i.e. the width of the bus over which the module is accessed, or the number of pins along which data bits are transmitted. (For example, a module with 30 pins obviously cannot provide 32-bit data exchange: that alone would require 32 pins for data, plus power, addressing and so on.) In other words, modules differ from each other primarily in their data width.

The width of the 30-pin SIMM is 8 bits (actually 9 bits, but the ninth bit is used for so-called parity data, discussed later). The 30-pin SIMM (sometimes called a short SIMM) was used in 286, 386 and 486 systems.

Consider the use of a short SIMM in a 386 system. The bus connecting the 386 processor to memory is 32 bits wide. Is it possible to use a single 30-pin SIMM as RAM in such a system? Imagine the processor communicating with memory over a bus in which data travels along 32 wires: will the system work if only 8 of those wires are used? Of course not. The system must use 32-bit memory, otherwise the processor cannot work with it. But how do you implement 32-bit memory when you only have 8-bit modules? You use several modules at the same time.

In fact, the minimum unit of system RAM is the set of memory modules that completely "covers" the processor-memory bus. In a 386 system using 30-pin SIMMs (each 8 bits wide), four modules must be installed at once for the system to work. That is why motherboards of that era always had a number of short-SIMM slots that was a multiple of four: 4 or 8. A set of connectors that completely covers the processor-memory bus is called a memory bank. In other words, memory must always be installed in whole banks, and at least one bank must be populated.
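A bank is therefore just enough modules to cover the processor's data bus; the figures mentioned in this section can be reproduced with a trivial sketch:

```python
# How many identical modules make up one memory bank for a given data bus.
def modules_per_bank(bus_bits, module_bits):
    assert bus_bits % module_bits == 0
    return bus_bits // module_bits

print(modules_per_bank(32, 8))   # 386/486 bus + 30-pin SIMMs -> 4 modules
print(modules_per_bank(32, 32))  # 486 bus + 72-pin SIMM      -> 1 module
print(modules_per_bank(64, 32))  # Pentium bus + 72-pin SIMMs -> 2 modules
```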

Fig.3.

This module also has a so-called key - a cutout on the side of the 1st pin, which serves for the correct orientation of the module.

SIMM 72-pin - 4-byte modules used on motherboards for 486 and Pentium processors.

The disadvantages of using 30-pin SIMMs in 386 and 486 systems are quite obvious: the memory bank consists of four modules. Therefore, a new type of module was developed: the 72-pin SIMM. Such a module, as the name implies, also has contacts located on one side (Single In-line), but the increased number of contact pads makes the module's bus width 32 bits (actually 36 bits, the extra bits again being for parity data). Therefore, in 486 systems, where the processor-memory bus is 32 bits wide, the memory bank is a single 72-pin SIMM. Thus, 486 systems could be equipped either with four 30-pin SIMMs or with one 72-pin SIMM (sometimes called a long SIMM).

This module has two keys: a cutout near pin 1, similar to that of the 30-pin SIMM, and a cutout in the middle, between pins 36 and 37.

DIMM-168 - 8-byte modules for Pentium and higher systems. There are two generations, differing significantly in interface. Buffered 168-pin DIMM modules (1st generation), as well as slots for them, are rare and are not even mechanically compatible (by keys) with the widespread 2nd-generation DIMMs. The most popular is the second generation with SDRAM chips. There are modifications depending on the presence of buffers or registers on the control signals: Unbuffered, Buffered and Registered.

With the release of the Pentium processor, whose processor-memory bus width increased to 64 bits, a situation again arose in which a bank is not equal to a module: on Pentium systems using 72-pin SIMMs, memory must again be installed in pairs. To solve this problem, and also to accommodate the SDRAM already mentioned, a new type of module was developed: the 168-pin DIMM (Dual Inline Memory Module). As the name suggests, this module has 168 contact pads located on both sides of the board, 84 on each side. The 168-pin DIMM is 64 bits wide, so a single DIMM covers the memory-processor bus of the Pentium, as well as of any modern processor. Thus, 168-pin DIMMs can be installed in a modern system one at a time, whereas 72-pin SIMMs must go in pairs. 30-pin SIMMs went out of use long ago, and long SIMMs are used extremely rarely today. The most commonly used module type today is the DIMM.

The 168-pin DIMM has two keys - two cutouts "inside the comb", one between pins 10 and 11 and another between pins 40 and 41 (since there are 84 pins per side, their position is clearly asymmetrical and thus identifies pin 1). Besides ensuring correct orientation of the module, these keys also carry information by their position: the first key distinguishes buffered from unbuffered modules, and the second indicates the module's supply voltage.

What is a buffered module? Why is buffering needed?

As we already know, in DRAM chips the cell in which information is stored is a capacitor. As a result, as the information capacity of memory modules grows, their electrical capacitance grows as well. Anyone familiar with the theory of electrical circuits also knows that the time constant (roughly speaking, the charging time) of a capacitor is directly proportional to its capacitance. Consequently, as module capacity increases, the modules need more and more time to respond to a signal from the controller. So if you simply keep increasing module capacity with an existing controller, sooner or later the module's "lag" will reach a value at which normal joint operation of the two devices becomes impossible.

The problem was recognized around the time 168-pin DIMMs appeared, and when developing controllers for systems with these modules the following solution was proposed: the controller communicates with the DRAM not directly but through a chip called a buffer, which itself has a low capacitance and can therefore receive the controller's signal almost instantly, freeing the system bus. Further charging of the DRAM cells then proceeds without the controller's participation. The buffer is an additional chip whose dimensions can, in principle, vary, but it is usually smaller than the memory chips themselves.

However, before 168-pin DIMMs became firmly established, another event occurred - chips with an operating voltage of 3.3V appeared and became readily available. The same theory states that the charging time of the capacitor is also proportional to the voltage, so lowering the voltage somewhat alleviated the problem.

By the time of mass development of memory controllers for DIMM-oriented systems, the industry turned out to be completely disoriented; as a result, computers from different manufacturers can use DIMMs of almost any buffering/voltage combination. As far as we can tell, unbuffered modules do not work in systems designed for buffered modules, and vice versa (in fact, installing a DIMM with "incorrect buffering" is prevented by the presence of a key). SDRAM DIMMs in a buffered design are not found, however, a design similar to a buffer has been developed for them. It is called register, and the corresponding modules are called registered.

Now a little about the marking of DIMM SDRAM modules. Everything is very simple here. DIMM SDRAM modules are marked as follows: PCxxx, where xxx is the frequency at which the module is certified to operate (it is possible that the chips that make up the module can operate at higher frequencies). Accordingly, there are only 3 DIMM SDRAM specifications:

  • PC66 - DIMM SDRAM designed to operate at a frequency not exceeding 66 MHz;
  • PC100 - DIMM SDRAM designed to operate at frequencies not exceeding 100 MHz;
  • PC133 is a DIMM SDRAM designed to operate at a frequency not exceeding 133 MHz.

Naturally, modules designed for higher frequencies can be used at lower frequencies without problems.

Also, in the PC100 specification, Intel stipulated the mandatory presence of SPD chips on memory modules (Serial Presence Detect, highlighted in the figure) - this is a non-volatile memory chip that stores the characteristics of memory chips and information about the module manufacturer. This information is necessary for the correct configuration of the memory subsystem. However, some modules from "no-name" manufacturers were sometimes not equipped with this chip, which led to malfunctions with some motherboards.

It is necessary to very clearly distinguish and not mix memory types (DRAM, FPM, EDO, SDRAM etc) and memory modules (SIMM30, SIMM72, DIMM168).

DIMM-184 - 8-byte DDR SDRAM modules for motherboards of 6th-7th generation processors.

RIMM - 2-byte RDRAM modules for motherboards of 6th-7th generation processors.

RIMMs have dimensions similar to SDRAM DIMMs but different key cutouts. RIMM modules support SPD, as used on DIMMs. Unlike SDRAM DIMMs, a Direct Rambus module can contain any whole number of RDRAM chips. One Direct Rambus channel can support a maximum of 32 DRDRAM chips. To expand memory beyond 32 devices, one or two repeater chips can be used: with one repeater the channel can support 64 devices on 6 RIMM modules, and with two, 128 devices on 12 modules. Up to three RIMM modules can be used on a motherboard.

SO DIMM and SO RIMM - small-sized variants of the modules for notebook PCs.

SODIMM (Small Outline DIMM) are special modules for laptop computers, characterized by a reduced size.

Although laptop PCs use the same memory chips as desktop PCs, the design of memory modules for laptops is different (this type of memory module is also used in communications equipment, where their size is important); SODIMM format modules are installed in all modern laptops.

  • SDRAM SODIMM

There are two types of SDRAM SODIMM modules: with 72 and with 144 pins (regular DIMM modules have 168 pins), corresponding to a data width of 32 or 64 bits. 72-pin modules are no longer used. The standard dimensions of 144-pin modules are 67.6 x 31.75 mm (2.66 x 1.25 inches); the width of the module is fixed, so the second number, the height, is usually considered the more important one.

  • DDR SODIMM (DDR2 SODIMM)

DDR SODIMM memory modules have 200 pins (versus 184 for conventional DDR DIMMs). It is noteworthy that although the number of contacts in DIMM modules has increased from 184 (for DDR) to 240 (for DDR2), the number of contacts for DDR2 SODIMM modules remains the same - 200.

The standard dimensions of the module were preserved during the transition from PC133 to DDR, which does not meet the requirements of manufacturers of mini-laptops and other compact electronic devices. Therefore, another standard was developed, more compact than SODIMM modules - MicroDIMM. Their width and height are slightly smaller than those of SODIMMs, but the number of contacts for such modules has been increased to 214.

AIMM - 66-pin 32- or 16-bit SDRAM modules designed to expand the memory of graphics adapters built into the motherboard.

RAM is the largest part of main memory. RAM is designed to store variable (current, rapidly changing) information and allows its contents to change as the processor performs calculations. This means that the processor can fetch (read mode) a command or data from RAM and, after processing, place the result (write mode) back into RAM. New data can be placed in the same locations where the original data used to be, in which case the previous data is erased. RAM retains recorded information only for a short time (until the power is turned off). The data, addresses, and instructions that the processor exchanges with memory are often called operands.

The program currently running on the computer (active) is most often located in RAM (and only sometimes in ROM).

The main component of RAM is an array of memory elements combined into a storage matrix. A memory element (ME) can store one bit of information (one of two states, 0 or 1).

Each memory element has its own address (in other words, a serial number). To access a memory element (in order to write or read information), it must be "selected" using an address code. RAM is electronic memory because it is built from microcircuits, products of microelectronics.

Memory chips can be single-bit or multi-bit.

In single-bit memory chips, an address code (often called simply the "address") selects one memory element out of the many located in the storage matrix. After selecting an element, you can write information to it or, conversely, read one bit of information from it. A special Write/Read control signal tells the chip what it should do: write or read information. These control signals come to this input from the processor. Single-bit memory chips have one input for writing information and one output for reading it.

The size m of the address code in single-bit memory chips determines the information capacity, i.e. the number of memory elements in the storage matrix. The capacity of such a chip is 2^m. For example, if a single-bit memory chip has 10 address inputs, its information capacity is N = 2^10 = 1024 bits.
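The same calculation in code form:

```python
# Information capacity of a single-bit memory chip with m address inputs.
def capacity_bits(m):
    return 2 ** m

print(capacity_bits(10))  # 1024 bits for 10 address inputs
print(capacity_bits(8))   # 256 bits for 8 address inputs (see the example below)
```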

Some memory chips have a multi-bit structure, also called a word structure. Such memory chips have several information inputs and the same number of outputs, so they allow simultaneous writing (or reading) of a multi-bit code, which is commonly called a word. One address allows information to be read from several memory elements at once. A group of memory elements from which information is read simultaneously is called a memory cell. Thus, a memory cell is several memory elements sharing a common address.

In English this memory is called Random Access Memory (RAM). The term "random access" means that information can be read from (or written to) any memory element at any time. Note that there are other memory organizations in which, before the required information can be read, previously stored operands must first be "pushed out".

There are two main types of RAM in use: static (SRAM - Static RAM) and dynamic (DRAM - Dynamic RAM).

These two types of memory differ in speed and specific density (capacity) of stored information. Memory performance is characterized by two parameters: access time and cycle time. These quantities are usually measured in nanoseconds. The smaller these values, the higher the memory performance.

Access time represents the time interval between the formation of a request to read information from memory and the moment the requested machine word (operand) arrives from memory.

Cycle duration is determined by the minimum allowable time between two successive memory accesses.

In static memory, the elements are built on flip-flops ("triggers"), circuits with two stable states. Building one flip-flop requires 4-6 transistors. Once information has been written to a static memory element, it can be stored indefinitely (as long as electrical power is supplied).

Structurally, the memory chip is organized as a rectangular matrix, with the memory elements located at the intersections of rows and columns. When a static memory chip is accessed, it is given the full address, which is split into two parts: one part selects a row of the storage matrix, the other selects a column.

The figure shows a block diagram of the K561RU2 memory chip, which has 8 address inputs: a7, a6, ..., a0. This allows 2^8 = 256 memory elements to be placed in the matrix. The address inputs are divided into two equal parts (a square matrix). The low-order part of the address, a3 a2 a1 a0, selects one of sixteen rows x0, x1, x2, ..., x15. The high-order part of the address, a7 a6 a5 a4, selects one of sixteen columns y0, y1, ..., y15.

To select a particular memory element, you activate the row and the column at whose intersection the desired element is located.

[Figure: the 16 x 16 storage matrix of the K561RU2, with address inputs a0...a7 feeding a row decoder (lines x0...x15) and a column decoder (lines y0...y15).]

For example, to select memory element 0, zeros are supplied to all address inputs of the chip; the row decoder DCR (Decoder Row) and the column decoder DCC (Decoder Column) then activate line x0 and column y0 respectively. At their intersection lies element 0, which, once selected, can be written to (or read from).

Other memory elements are selected similarly. For example, to select element 241, line x1 and column y15 must be activated. To do this, the low-order address group (a3...a0) is given the binary code 0001, and the high-order group (a7...a4) is given all ones.
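The same selection rule can be written out explicitly (here the element number is treated as column x 16 + row, which is the numbering the example above implies):

```python
# Element selection for the 16 x 16 matrix of the K561RU2 example.
# Low nibble (a3..a0) drives the row decoder, high nibble (a7..a4) the column decoder.
def select(n):
    assert 0 <= n < 256
    row = n & 0b1111          # low-order address bits -> row line x
    col = (n >> 4) & 0b1111   # high-order address bits -> column line y
    return row, col

print(select(0))    # (0, 0)  -> line x0, column y0
print(select(241))  # (1, 15) -> line x1, column y15, as in the text
```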

Static memory has high performance but a low specific density of stored data. In dynamic memory, the elements are built around semiconductor capacitors, which occupy a much smaller area than the flip-flops of static memory elements. Building a dynamic memory element requires only 1-2 transistors.

Charge regeneration should occur quite often. This is confirmed by the following reasoning. Since it is necessary to obtain a high specific density of information storage, the capacitance of the capacitor cannot be large (in practice, the capacitance of storage capacitors is about 0.1 pF). The discharge time constant is determined as the product of the capacitance of the capacitor and the resistance of the closed transistor. This product is of the order of magnitude

τ = RC = 10^10 Ω × 0.1×10^-12 F = 10^-3 s.

Thus, the discharge time constant is one millisecond and, therefore, charge regeneration should occur approximately a thousand times per second.
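The estimate in code form, using the same rounded values for R and C:

```python
# Discharge time constant of a DRAM storage cell with the rounded values above:
# R ~ 1e10 ohm for the closed transistor, C ~ 0.1 pF for the storage capacitor.
R = 1e10       # ohms
C = 0.1e-12    # farads
tau = R * C    # seconds

print(tau)                                              # 0.001 s, i.e. about 1 ms
print(f"refresh about {1 / tau:.0f} times per second")  # ~1000 times per second
```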

The need to frequently recharge storage capacitors in the drive matrix leads to a decrease in the performance of dynamic memory. However, due to the small size of the capacitor and the small number of additional elements, the specific storage density of dynamic memory is higher than that of static memory.

The capacity of dynamic memory chips reaches tens of megabits per package. The possibility of placing such a large number of memory elements on one chip raises another design problem: a large number of address inputs would be needed. To reduce the severity of this problem, multiplexing is used.

Multiplexing is a time-sharing technique that makes it possible to transmit different information over the same electrical circuits to different receivers (consumers). In this way designers halve the number of address inputs on memory chips. The address is divided into two equal parts, which enter the chip one after the other: first the low-order part, then the high-order part. The first part selects the desired row in the storage matrix, and the second activates the corresponding column.
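The saving in address pins is easy to see with a short sketch (the chip capacities are chosen arbitrarily for the illustration):

```python
# Address pins needed with and without row/column multiplexing.
import math

def address_pins(capacity_bits, multiplexed):
    m = math.ceil(math.log2(capacity_bits))      # width of the full address
    return math.ceil(m / 2) if multiplexed else m

for cap in (2**16, 2**20, 2**24):
    print(cap, address_pins(cap, False), address_pins(cap, True))
# e.g. a 16-Mbit chip needs 24 address lines, but only 12 multiplexed pins
```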

In order for the memory chip to “know” which part of the address is being entered at a given time, the entry of each address group is accompanied by a corresponding control signal.

Thus, synchronously with the input of the low-order part of the address, the RAS (Row Address Strobe) signal is sent - the strobe that accompanies the row address. Almost simultaneously with the input of the high-order part of the address, the CAS (Column Address Strobe) signal is sent to the memory chip - the column address strobe.

After the selection of any memory element is completed, some time is required for the chip to return to its original state. This delay is caused by the need to recharge the chip's internal circuits. The delay is significant and amounts to up to 90% of the cycle time.

This undesirable phenomenon is circumvented by various constructive tricks. For example, when writing several consecutive operands, they are placed on the same row of the matrix, but in different columns. Time savings are achieved by the fact that there is no need to wait for the completion of transient processes when changing row addresses.

Another way to improve performance is to split the memory into blocks (banks), from which the processor reads data alternately. Thus, while data is being read from one memory area, the second one gets time to complete transient processes.

Various modifications of static and dynamic memory have been developed.

FPM DRAM (Fast Page Mode DRAM) - dynamic memory with fast page access. Page-mode memory differs from ordinary dynamic memory in that, after one row of the matrix has been selected, the RAS row-select signal is held while the column addresses are changed repeatedly (using the CAS signal). In this case no time is wasted waiting for transient processes to finish when the row address changes. In other words, the row address remains constant for some time while the column addresses change. A page here is the set of memory elements located in one row of the matrix.

EDO (Extended Data Out) - these chips are characterized by an increased data hold time at the output. In fact, they are ordinary FPM DRAM with registers (data latches) installed at the output. Registers are digital devices built on flip-flops that can store several bits of information (a word) at once. During a page exchange, such chips hold the contents of the last selected memory cell at their outputs while the address of the next cell is already being supplied to their inputs. This speeds up the reading of sequential data arrays by approximately 15% compared to FPM.

SDRAM (Synchronous DRAM - synchronous dynamic memory) - memory with synchronous access, operating faster than conventional asynchronous memory. The basis of this type of memory is the traditional DRAM circuit. However, SDRAM differs in that it uses a clock generator to synchronize all the signals used in the memory chip. In addition to the synchronous access method, SDRAM uses an internal division of the memory array into two independent banks, which makes it possible to combine the time of sampling from one bank with setting an address in the other bank.

Ministry of Education and Science of the Nizhny Novgorod Region

State budgetary educational institution

secondary vocational education

"Bor Provincial College"

Specialty 230701 Applied informatics (by industry)

Essay

On the topic: Structure of RAM.

Discipline: Operating systems and environments.

Completed:

student gr. IT-41

Rodov A.E.

Checked:

Markov A.V.

Urban district of Bor

Introduction

Random access memory (RAM) is a volatile part of a computer's memory system, in which executable machine code (programs) is stored while the computer is running, together with the input, output and intermediate data processed by the processor.

1. Structure of RAM

RAM consists of cells, each of which can contain a unit of information - a machine word. Each cell has two characteristics: address and content. Through the address register of the microprocessor, you can access any memory cell.

2. Segmental memory model

Once upon a time, at the dawn of computer technology, RAM was very small and 2 bytes (the so-called "word") were used to address it. This approach made it possible to address 64 KB of memory, and the addressing was linear: a single number indicated the address. Later, as technology improved, manufacturers realized it was possible to support larger amounts of memory, but this required a larger address. For compatibility with already written software, it was decided to proceed as follows: addressing became two-component (segment and offset), each component being 16 bits; old programs, which used a single 16-bit component and knew nothing about segments, continued to work.
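In the classic real-mode x86 scheme the two 16-bit components are combined by shifting the segment left by four bits (the shift-by-four rule is standard x86 behaviour rather than something stated above); a minimal sketch:

```python
# Real-mode x86 address calculation: linear = segment * 16 + offset.
def linear_address(segment, offset):
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

print(hex(linear_address(0x1000, 0x0010)))  # 0x10010
print(hex(linear_address(0xFFFF, 0xFFFF)))  # 0x10ffef - just over 1 MB
```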


4. DRAM – Dynamic Random Access Memory

DRAM is a very old type of RAM chip, which in its original form has not been used for a long time. In other words, DRAM is dynamic memory with random access. The minimum unit of information when storing or transmitting data in a computer is the bit. Each bit can have two states: on (yes, 1) or off (no, 0). Any amount of information ultimately consists of bits that are on or off, so to store or transmit any amount of data it is necessary to store or transmit every bit, regardless of its state.

To store bits of information in RAM there are cells. The cells consist of capacitors and transistors. Here is an approximate and simplified diagram of a DRAM cell:

Each cell can only store one bit. If the cell capacitor is charged, this means that the bit is on; if it is discharged, it is off. If you need to store one byte of data, you will need 8 cells (1 byte = 8 bits). The cells are located in matrices and each of them has its own address, consisting of a row number and a column number.

Now let's look at how reading occurs. First, the row address is applied to the address inputs, accompanied by the RAS (Row Address Strobe) signal. After this, all data from that row is written to the buffer. Then the column address is applied, accompanied by the CAS (Column Address Strobe) signal, and the bit with the corresponding address is selected and supplied to the output. During reading, however, the data in the cells of the read row is destroyed and must be rewritten from the buffer.

Now the recording. The WR (Write) signal is applied and information is supplied to the column bus not from the register, but from the memory information input through a switch determined by the column address. Thus, the passage of data when written is determined by a combination of the column and row address signals and the permission to write data to memory. When writing, data from the row register is not output.

It should also be taken into account that several cell matrices are arranged in parallel.

This means that not one bit will be read at a time, but several. If 8 matrices are located in parallel, then one byte will be read at once. This is called bit depth. The number of lines along which data will be transmitted from (or to) parallel matrices is determined by the width of the input/output bus of the microcircuit.
When talking about the operation of DRAM, one point must be taken into account. The whole point is that capacitors cannot store charge indefinitely and it eventually “drains.” Therefore, capacitors need to be recharged. The recharging operation is called Refresh or regeneration. This operation occurs approximately every 2 ms and sometimes takes up to 10% (or even more) of the processor’s working time.

The most important characteristic of DRAM is its performance or, more simply, cycle duration + delay time + access time, where cycle duration is the time spent on the data transfer itself, delay time is the initial setting of the row and column address, and access time is the time to find the cell itself. This value is measured in nanoseconds (billionths of a second). Modern memory chips have access times below 10 ns.

RAM is controlled by a controller located in the motherboard chipset, or more precisely in that part of it called North Bridge.

And now, having understood how RAM works, let's figure out why it is needed at all. After the processor, RAM can be considered the fastest device, so the main data exchange takes place between these two devices. All the information on a personal computer is stored on the hard drive. When you turn the computer on, drivers, special programs and elements of the operating system are copied from the hard drive into RAM. Then the applications you launch are loaded there as well; closing these programs erases them from RAM. Data recorded in RAM is transferred to the CPU (Central Processing Unit), where it is processed and written back. And so on all the time: the processor is told to take bits at such-and-such addresses, process them somehow and return them to their place or write them to a new one, and it does exactly that.

All this is fine as long as there are enough RAM cells. And if not? Then the swap file comes into play. This file is located on the hard drive, and everything that does not fit into the RAM cells is written there. Since the hard drive is much slower than RAM, the paging file greatly slows down the system. In addition, it shortens the life of the hard drive itself.

Increasing the amount of memory does not lead to an increase in its performance. Changing the memory size will not affect its operation in any way. But if we consider the operation of the system, then it’s a different matter. If you have enough RAM, increasing the volume will not lead to an increase in system speed. If there are not enough RAM cells, then increasing their number (in other words, adding a new one or replacing an old one with a new one with a larger memory capacity) will speed up the system.

We’ll talk about hardware again, namely the computer’s RAM. We will divide this article into two parts. In the first, that is, in this article, I will talk about what RAM is, its purpose and other useful information, and in the second article I will describe how to choose RAM, what criteria to follow, and so on.

Now let's move on to a specific question, namely, what is RAM and why is it needed.

Purpose of RAM

Each of us has a computer, and users are often faced with the question of improving and upgrading this very PC. Everyone has the right to experiment with their electronic device, within reason, of course. Some people work their magic on the processor, but we will look at a cheaper option - RAM, namely increasing its volume.

Firstly, the option of choosing RAM is the simplest, since you do not need to have any special knowledge for this, and installing a memory module takes place in an instant. Moreover, currently this technical part is quite cheap.

Now we will move on to the definition of RAM, otherwise known as RAM.

RAM (random access memory) is a temporary data store that running software works with. Physically it is a set of chips or modules connected to the motherboard.

This memory usually acts as a buffer between the drives and the processor, it temporarily stores files and data, and also stores running applications.

By the way, do not confuse RAM with the hard disk or with ROM (read-only memory) - these are different types of memory.

In structure, RAM consists of cells, each storing a unit of data of a certain size (1 or 4 bits). Each cell has its own address; the cells are organized into horizontal rows and vertical columns.

The cells described above are capacitors that store an electrical charge. There are also special sense amplifiers that convert these analog charge levels into digital signals, which form the data.

When a row address is transmitted to the chip, it is accompanied by a signal called RAS (Row Address Strobe); the column address is accompanied by the CAS (Column Address Strobe) signal.

We've sorted out the complex definitions, now let's move on to the work of RAM.

The operation of RAM is unconditionally linked to the operation of the processor and other external devices of the computer, since it receives data from all these devices. First of all, data from the hard drive goes into RAM, and then is processed by the processor; this structure can be seen in the figure below:

Information exchange between RAM and the processor itself can occur either directly or with the participation of cache memory.

Cache memory is also temporary data storage, implemented as areas of fast local memory. Using it significantly reduces the time needed to deliver data to the processor registers, because external storage is very slow compared to the processor.

But what actually controls the RAM? RAM is controlled by a controller located in the motherboard chipset, in the part called the "North Bridge", which connects the processor (CPU) to the graphics controller and to RAM. You can see such a diagram below.

I would also like to say one important thing. If data is written to the RAM in any cell, the content that was before the recording will be immediately erased.

An important point is that application programs must run under a particular operating system; otherwise the OS will not be able to allocate the required amount of RAM to the program. There have been cases where old programs written for an old OS could not be run on a new operating system.

You should know that Windows 7, which has 64 bits, supports 192 GB of RAM, but 32-bit Windows 7 only supports 4 GB.
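The 4 GB ceiling for the 32-bit version is simply the size of a 32-bit address space (the 192 GB figure for 64-bit Windows 7 is an edition limit rather than an addressing one); a quick check:

```python
# Memory addressable with a 32-bit address.
bytes_32bit = 2 ** 32
print(bytes_32bit / 2 ** 30, "GiB")  # 4.0 -> the 4 GB limit of 32-bit Windows
```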

Why do you need RAM?

So, we now know that so-called cache memory takes part in the data exchange process. It is controlled by a controller which analyzes the program, predicts what data the processor will most likely need, and loads it into cache memory from RAM in advance; data modified by the processor is then, if necessary, written back to RAM.

To begin with, we note that all your information is stored on the hard drive, then, when you turn on the PC, various drivers, OS elements, and special programs are written from this same hard drive to the RAM. At the end, the programs that we will run are recorded, and when we close them, they will be erased from RAM.

Information recorded in RAM is transferred to the processor, processed by it and written back, and so on every time. But it may happen that the memory cells run out, what should you do in this case?

In this case the so-called swap (paging) file comes into play. This file is located on the hard drive; information that does not fit into RAM is written there. That is a big plus. The downside is that the hard drive is much slower than RAM, so the system may run slower. The life of the hard drive itself is also shortened.

What does RAM consist of?

Now we can look at what the RAM module itself consists of.

Typically, all RAM sticks (modules) consist of the same elements. There are two types of modules: single-sided and double-sided. Double-sided modules are said to be faster, but it can happen that a double-sided stick does not work at full capacity because the chips on one of its sides are not used: both the motherboard and the processor must support a given type of memory.

Note: if you purchase, for example, two RAM modules, it is better to buy modules of the same type.

At the moment, there are several types of memory: DDR, DDR2, DDR3. Also, a new type of memory has been developed - DDR4, which is not yet particularly used. Today, DDR3 is the most popular and used memory type.

A laptop uses almost the same memory, but the module is slightly smaller. It bears the name SO-DIMM (DDR, DDR2, DDR3).

At this point, I think it’s worth finishing, we learned what RAM is and its purposes, various characteristics and types. You may have any comments on this issue, feel free to ask them below. Any suggestions and criticism are welcome.

RAM can also be made to work as a drive, that is, you can store data on it and install programs there. This technology is called a RAM disk; if you are interested, you can read about it separately.