
9.02.2009

Signals in Linux - Generating Signals

Besides signals that are generated as a result of a hardware trap or interrupt, your program can explicitly send signals to itself or to another process.

The kill system call can be used to send any signal to any process group or process.
#include <sys/types.h>
#include <signal.h>

int kill(pid_t pid, int sig);

For more information, check out: man 2 kill

There are restrictions that prevent you from using kill to send signals to any random process. These are intended to prevent antisocial behavior such as arbitrarily killing off processes belonging to another user. In typical use, kill is used to pass signals between parent, child, and sibling processes, and in these situations you normally do have permission to send signals. The only common exception is when you run a setuid program in a child process; if the program changes its real UID as well as its effective UID, you may not have permission to send a signal. The su program does this.
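
For illustration, here is a minimal sketch (not from the original post) of the common parent/child case: the parent forks a child and sends it SIGTERM with kill. Error handling is kept deliberately thin.

#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {           /* child: just wait to be signaled */
        pause();
        return 0;
    }
    sleep(1);                 /* crude: give the child time to reach pause() */
    if (kill(pid, SIGTERM) == -1)
        perror("kill");
    waitpid(pid, NULL, 0);    /* reap the terminated child */
    printf("child %d terminated\n", (int)pid);
    return 0;
}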

A process or thread can send a signal to itself with the raise function. The raise function takes just one parameter, a signal number.

In a single-threaded program it is equivalent to kill(getpid(), sig). In a multithreaded program it is equivalent to pthread_kill(pthread_self(), sig). If the signal causes a handler to be called, raise will only return after the signal handler has returned.
#include <signal.h>

int raise(int sig);

For more information, check out: man 3 raise
Here is a small example that installs a handler for SIGINT and raises the signal to itself:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* for sleep() */

static volatile sig_atomic_t doneflag = 10;

/* Handler: clears doneflag so the main loop terminates.
   (printf is not async-signal-safe; it is used here only for demonstration.) */
static void setdoneflag(int signo) {
    printf("\nIn SignalHandler - setdoneflag\n");
    doneflag = 0;
}

int main(void) {
    signal(SIGINT, setdoneflag);

    while (doneflag--) {
        printf("In while loop - %d\n", doneflag);
        if (doneflag == 5)
            raise(SIGINT);   /* use the signal name, not the raw number 2 */
        else
            sleep(1);
    }

    printf("Program terminating ...\n");
    return 0;
}

8.26.2009

Implementation of Singly Linked List

This article is part of the article series "Datastructures".

Generally a Linked List means "Singly Linked List". It is a chain of records known as Nodes. Each node has at least two members, one of which points to the next Node in the list and the other holds the data.

Figure 1: Singly Linked List
Basically Singly Linked Lists are uni-directional: each Node can only point to the next Node in the list, not to the previous one. We use the structure below for a Node in our example.
 struct Node
 {
   int Data;
   struct Node *Next;
 }; 
Variable Data holds the data in the Node (it can be a pointer variable pointing to dynamically allocated memory), while Next holds the address of the next Node in the list.

Figure 2: Node in a Singly Linked List
Head is a pointer variable of type struct Node * which acts as the head of the list. Initially we set 'Head' to NULL, which means the list is empty.
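
To make this concrete, here is a minimal sketch (not from the original article) that builds a list by inserting at the head; InsertAtHead is a hypothetical helper name.

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int Data;
    struct Node *Next;
};

struct Node *Head = NULL;          /* empty list */

/* Insert a new node at the front of the list. */
void InsertAtHead(int value) {
    struct Node *node = malloc(sizeof(struct Node));
    if (node == NULL) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }
    node->Data = value;
    node->Next = Head;             /* old head becomes the second node */
    Head = node;
}

int main(void) {
    InsertAtHead(3);
    InsertAtHead(2);
    InsertAtHead(1);
    for (struct Node *p = Head; p != NULL; p = p->Next)
        printf("%d -> ", p->Data);
    printf("NULL\n");              /* prints: 1 -> 2 -> 3 -> NULL */
    return 0;
}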

8.25.2009

Signals in Linux - Standard Signals

Every signal has a unique signal name, an abbreviation that begins with SIG (SIGINT for interrupt signal, for example). Each signal name is a macro which stands for a positive integer - the signal number for that kind of signal. Your programs should never make assumptions about the numeric code for a particular kind of signal, but rather refer to them always by the names defined. This is because the number for a given kind of signal can vary from system to system, but the meanings of the names are standardized and fairly uniform.

The signal names are defined in signal.h (/usr/include/bits/signum.h), which must be included by any C program that uses signals.

Several signal numbers are architecture-dependent, as indicated in the "Value" column. (Where three values are given, the first one is usually valid for alpha and sparc, the middle one for ix86, ia64, ppc, s390, arm and sh, and the last one for mips. A - denotes that a signal is absent on the corresponding architecture.)

[Table: the POSIX.1-1990 standard signals]

The signals SIGKILL and SIGSTOP cannot be caught, blocked, or ignored.

Next, the signals not in the POSIX.1-1990 standard but described in SUSv2 and POSIX.1-2001:

[Table: signals described in SUSv2 and POSIX.1-2001]
Up to and including Linux 2.2, the default behavior for SIGSYS, SIGXCPU, SIGXFSZ, and (on architectures other than SPARC and MIPS) SIGBUS was to terminate the process (without a core dump). Linux 2.4 conforms to the POSIX.1-2001 requirements for these signals, terminating the process with a core dump.

Next, various other signals:

[Table: various other signals]
For detailed information about the side effects of and reasons causing these signals, check out the libc manual.

8.23.2009

Signals in Linux - Basics

What is a Signal ?

A signal is a software interrupt delivered to notify a process or thread of a particular event. The operating system uses signals to report exceptional situations to an executing program. Some signals report errors such as references to invalid memory addresses; others report asynchronous events, such as disconnection of a phone line.

The lifetime of a signal is the interval between its generation and its delivery. A signal that has been generated but not yet delivered is said to be pending. There may be considerable time between signal generation and signal delivery. The process must be running on a processor at the time of signal delivery.
Many computer science researchers compare signals with hardware interrupts, which occur when a hardware subsystem, such as a disk I/O (input/output) interface, generates an interrupt to a processor when the I/O completes. This event in turn causes the processor to enter an interrupt handler, so subsequent processing can be done in the operating system based on the source and cause of the interrupt. When a signal is sent to a process or thread, a signal handler may be entered (depending on the current disposition of the signal), which is similar to the system entering an interrupt handler as the result of receiving an interrupt.

What causes a Signal ?

Let us see some of the events that can cause (or generate, or raise) a signal:
  • A program error such as dividing by zero or issuing an address outside the valid range.
  • A user request to interrupt or terminate the program. Most environments are set up to let a user suspend the program by typing Ctrl-z, or terminate it with Ctrl-c. Whatever key sequence is used, the operating system sends the proper signal to interrupt the process.
  • The termination of a child process.
  • Expiration of a timer or alarm.
  • A call to the kill or raise function by the same process.
  • A call to kill from another process. Signals are a limited but useful form of interprocess communication.
  • An attempt to perform an I/O operation that cannot be done. Examples are reading from a pipe that has no writer, and reading or writing to a terminal in certain situations.
Each of these kinds of events (excepting explicit calls to kill and raise) generates its own particular kind of signal.

Signals may be generated synchronously or asynchronously. A synchronous signal pertains to a specific action in the program, and is delivered (unless blocked) during that action. Most errors generate signals synchronously, and so do explicit requests by a process to generate a signal for that same process. On some machines, certain kinds of hardware errors (usually floating-point exceptions) are not reported completely synchronously, but may arrive a few instructions later.

Asynchronous signals are generated by events outside the control of the process that receives them. These signals arrive at unpredictable times during execution. External events generate signals asynchronously, and so do explicit requests that apply to some other process. One obvious example would be the sending of a signal to a process from another process or thread via a kill system call. Asynchronous signals are also aptly referred to as interrupts.

A given type of signal is either typically synchronous or typically asynchronous. For example, signals for errors are typically synchronous because errors generate signals synchronously. But any type of signal can be generated synchronously or asynchronously with an explicit request.

How Signals Are Delivered ?

When a signal is generated, it becomes pending. Normally it remains pending for just a short period of time and then is delivered to the process that was signaled. However, if that kind of signal is currently blocked, it may remain pending indefinitely—until signals of that kind are unblocked. Once unblocked, it will be delivered immediately.
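
To see pending and blocked signals in action, here is a minimal sketch (my addition, not from the original post) using sigprocmask to block SIGINT for a few seconds:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    sigset_t block_set;
    sigemptyset(&block_set);
    sigaddset(&block_set, SIGINT);

    /* Block SIGINT: a Ctrl-C during this window only makes it pending. */
    sigprocmask(SIG_BLOCK, &block_set, NULL);
    printf("SIGINT blocked for 5 seconds - try Ctrl-C now\n");
    sleep(5);

    /* Unblock: a pending SIGINT is delivered immediately at this point. */
    sigprocmask(SIG_UNBLOCK, &block_set, NULL);

    /* Not reached if SIGINT was pending, since its default action terminates. */
    printf("No SIGINT arrived while blocked\n");
    return 0;
}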

When the signal is delivered, whether right away or after a long delay, the specified action for that signal is taken. For certain signals, such as SIGKILL and SIGSTOP, the action is fixed, but for most signals, the program has a choice. Possible default dispositions are
  • Term - Default action is to terminate the process.
  • Ign - Default action is to ignore the signal.
  • Core - Default action is to terminate the process and dump core.
  • Stop - Default action is to stop the process.
  • Cont - Default action is to continue the process if it is currently stopped.
The program can also specify its own way of handling the signals (signal handlers). We sometimes say that a handler catches the signal. While the handler is running, that particular signal is normally blocked.
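
As an illustration of catching a signal (a minimal sketch, not from the original post), here is a handler installed with sigaction, the POSIX-recommended interface:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo) {
    got_signal = 1;              /* record the event; do real work outside the handler */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);    /* block no extra signals while the handler runs */
    sa.sa_flags = 0;

    if (sigaction(SIGINT, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }

    while (!got_signal)
        pause();                 /* sleep until a signal is delivered */

    printf("Caught SIGINT; exiting.\n");
    return 0;
}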

References:

1. Libc Manual

7.29.2009

Mysterious C program

I never thought that one could write such a mysterious 'C' program, one that compiles without any errors and produces such a strange output.

#include <stdio.h>

main(t,_,a)
char *a;
{return!0<t?t<3?main(-79,-13,a+main(-87,1-_,
main(-86, 0, a+1 )+a)):1,t<_?main(t+1, _, a ):3,main ( -94, -27+t, a
)&&t == 2 ?_<13 ?main ( 2, _+1, "%s %d %d\n" ):9:16:t<0?t<-72?main(_,
t,"@n'+,#'/*{}w+/w#cdnr/+,{}r/*de}+,/*{*+,/w{%+,/w#q#n+,/#{l,+,/n{n+\
,/+#n+,/#;#q#n+,/+k#;*+,/'r :'d*'3,}{w+K w'K:'+}e#';dq#'l q#'+d'K#!/\
+k#;q#'r}eKK#}w'r}eKK{nl]'/#;#q#n'){)#}w'){){nl]'/+#n';d}rw' i;# ){n\
l]!/n{n#'; r{#w'r nc{nl]'/#{l,+'K {rw' iK{;[{nl]'/w#q#\
n'wk nw' iwk{KK{nl]!/w{%'l##w#' i; :{nl]'/*{q#'ld;r'}{nlwb!/*de}'c \
;;{nl'-{}rw]'/+,}##'*}#nc,',#nw]'/+kd'+e}+;\
#'rdq#w! nr'/ ') }+}{rl#'{n' ')# }'+}##(!!/")
:t<-50?_==*a ?putchar(a[31]):main(-65,_,a+1):main((*a == '/')+t,_,a\
+1 ):0<t?main ( 2, 2 , "%s"):*a=='/'||main(0,main(-61,*a, "!ek;dc \
i@bK'(q)-[w]*%n+r3#l,{}:\nuwloca-O;m .vpbks,fxntdCeghiry"),a+1);}
Any Clue of the output ???

Output for the above code is

On the first day of Christmas my true love gave to me
a partridge in a pear tree.

On the second day of Christmas my true love gave to me
two turtle doves
and a partridge in a pear tree.

On the third day of Christmas my true love gave to me
three french hens, two turtle doves
and a partridge in a pear tree.

On the fourth day of Christmas my true love gave to me
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.

On the fifth day of Christmas my true love gave to me
five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.

On the sixth day of Christmas my true love gave to me
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.

On the seventh day of Christmas my true love gave to me
seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.

On the eighth day of Christmas my true love gave to me
eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.

On the ninth day of Christmas my true love gave to me
nine ladies dancing, eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.

On the tenth day of Christmas my true love gave to me
ten lords a-leaping,
nine ladies dancing, eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.

On the eleventh day of Christmas my true love gave to me
eleven pipers piping, ten lords a-leaping,
nine ladies dancing, eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.

On the twelfth day of Christmas my true love gave to me
twelve drummers drumming, eleven pipers piping, ten lords a-leaping,
nine ladies dancing, eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
Can anyone explain, step by step, how this program produces the above output?

4.29.2009

Beware of NVIDIA Graphic cards in Laptops

NVIDIA is famously known for its high-end graphics cards, which make everyone want to own a machine with an NVIDIA graphics card. But beware of NVIDIA graphics cards, as some suffer from an OVERHEATING issue because of weak die/packaging material. It seems the NVIDIA G84 & G86 graphics chipsets had this problem.
According to our sources, the failures are caused by a solder bump that connects the I/O termination of the silicon chip to the pad on the substrate. In Nvidia’s GPUs, this solder bump is created using high-lead. A thermal mismatch between the chip and the substrate has substantially grown in recent chip generations, apparently leading to fatigue cracking. Add into the equation a growing chip size (double the chip dimension, quadruple the stress on the bump) as well as generally hotter chips and you may have the perfect storm to take high lead beyond its limits. Apparently, problems arise at what Nvidia claims to be "extreme temperatures" and what we hear may be temperatures not too much above 70 degrees Celsius.

What supports the theory that a high-lead solder bump in fact is at fault is the fact that Nvidia ordered an immediate switch to use eutectic solders instead of high-lead versions in the last week of July.
source
Major laptop manufacturers like DELL and HP have acknowledged that there are some issues with laptops shipped with NVIDIA graphics cards.

According to NVIDIA, these affected GPUs are experiencing higher-than-expected failure rates causing video problems, because of a weak die/packaging material set which may fail with higher GPU temperature fluctuations.

If your NVIDIA GPU fails, you may see intermittent symptoms during early stages of failure that include:

* Multiple images
* Random characters on the screen
* Lines on the screen
* No video
* Black Screen

DELL agreed that some of its laptop models, namely the Inspiron 1420, Latitude D630, Latitude D630c, Dell Precision M2300, Vostro Notebook 1310, Vostro Notebook 1400, Vostro Notebook 1510, Vostro Notebook 1710, XPS M1330 and XPS M1530, are facing the issue, and released a BIOS update to minimize the effect of the GPU failure problem; new systems are being shipped with the updated BIOS revisions.
DELL is offering an additional 12-month limited warranty enhancement specific to this GPU failure issue. For all customers worldwide, DELL plans to add 12 months of coverage for this issue to the existing limited warranty up to 60 months from the date of purchase for the given list of systems.

DELL's BIOS fix simply runs the laptop fan much more than usual; as a result, the 'fix' drains the battery and shortens battery life.

HP also identified a hardware issue with certain HP Pavilion dv2000/dv6000/dv9000 and Compaq Presario V3000/V6000 series notebook PCs, and has also released a new BIOS for these notebook PCs.
HP says: "This service enhancement program is available in North America for 24 months after the start of your original standard limited warranty for issues listed below; otherwise your current standard limited warranty applies. Customers who already have a 24 month or longer warranty period will be covered under their existing standard HP Limited Warranty."
Apple is also facing the same problem with its MacBook Pros.
Apple says, NVIDIA assured Apple that Mac computers with these graphics processors were not affected. However, after an Apple-led investigation, Apple has determined that some MacBook Pro computers with the NVIDIA GeForce 8600M GT graphics processor may be affected. If the NVIDIA graphics processor in your MacBook Pro has failed, or fails within two years of the original date of purchase, a repair will be done free of charge, even if your MacBook Pro is out of warranty.
Customers are really angry, as the OEMs are not providing a permanent solution to the GPU problem and are not covering some of the laptop models that face the same issue. Some customers have even sued NVIDIA to get their laptops with faulty graphics chips repaired.

We can see that DELL is still shipping laptops with the defective NVIDIA graphics cards.

3.22.2009

Cache Memory - Part2

Interaction Policies with Main Memory
Basically READ operations dominate processor cache accesses, since all instruction accesses are READs and most instructions do not WRITE to memory. When the address of the block to be READ is available, the tag is read, and if it is a HIT, the data is read from the cache.

In case of a miss the READ policies are:
  • Read Through - Reading a block directly from main memory.
  • No Read Through - Reading a block from main memory into the cache and then from the cache to the CPU, so the cache is updated as well.

Basically a miss is comparatively slow because it requires the data to be transferred from main memory to the CPU, which incurs a delay since main memory is much slower than cache memory, and it also incurs the overhead of recording the new data in the cache before it is delivered to the processor. To take advantage of Locality of Reference, the CPU copies data into the cache whenever it accesses an address not present in the cache. Since it is likely the system will access that same location shortly, the system will save wait states by having that data in the cache. Thus cache memory handles the temporal aspects of memory access, but not the spatial aspects.

Caching alone won't speed up program execution if we constantly access consecutive, previously untouched memory locations (the spatial side of Locality of Reference). To solve this problem, most caching systems read several consecutive bytes from memory when a cache miss occurs. 80x86 CPUs, for example, read between 16 and 64 bytes at a shot (depending upon the CPU) upon a cache miss. If you read 16 bytes, why read them in a block rather than as you need them? As it turns out, most memory chips available today have special modes that let you quickly access several consecutive memory locations on the chip. The cache exploits this capability to reduce the average number of wait states needed to access memory.
Figure: Cache READ operation

It is not the same with Cache WRITE operation. Modifying a block cannot begin until the tag is checked to see if the address is a hit. Also the processor specifies the size of the write, usually between 1 and 8 bytes; only that portion of the block can be changed. In contrast, reads can access more bytes than necessary without a problem.

The Cache WRITE policies on write hit often distinguish cache designs:
  • Write Through - the modified data is written back to both the block in the cache memory and in the main memory.

    Advantage:
    1. READ miss never results in writes to main memory.
    2. Easy to implement
    3. Main Memory always has the most current copy of the data (consistent)

    Disadvantage:
    1. WRITE operation is slower as we have to update both Main Memory and Cache Memory.
    2. Every write needs a main memory access, and as a result uses more memory bandwidth
Figure: Write Through in cache
  • Write Back - the modified data is first written only to the block in the cache memory. The modified cache block is written to main memory only when it is replaced. In order to reduce the frequency of writing back blocks on replacement, a dirty bit (a status bit) is commonly used to indicate whether the block is dirty (modified while in the cache) or clean (not modified). If it is clean the block is not written on a miss.

    Advantage:
    1. WRITE’s occur at the speed of the cache memory.
    2. Multiple WRITE's within a block require only one WRITE to main memory, and as a result use less memory bandwidth

    Disadvantage:
    1. Harder to implement
    2. Main Memory is not always consistent with the cache; reads that result in replacement may cause writes of dirty blocks to main memory.
In case of a cache WRITE miss we have two options.
  • Write Allocate - the memory block is first loaded into cache memory from main memory on a write miss, followed by the write-hit action.
  • No Write Allocate - the block is directly modified in the main memory and not loaded into the cache memory.
Although either write-miss policy could be used with write through or write back, write-back caches generally use write allocate (hoping that subsequent writes to that block will be captured by the cache) and write-through caches often use no-write allocate (since subsequent writes to that block will still have to go to memory).
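
To make the write-back bookkeeping concrete, here is a minimal C sketch (my illustration, not from the original article) of one cache line with a dirty bit; the sizes and names are assumptions for the example.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE 32

/* One cache line with the status bits described above. */
struct CacheLine {
    bool valid;
    bool dirty;                  /* set on write; consulted only at eviction */
    unsigned tag;
    unsigned char data[LINE_SIZE];
};

unsigned char main_memory[1 << 20];      /* stand-in for main memory */

/* Write-back on a write hit: modify only the cache line and mark it dirty. */
void write_back_store(struct CacheLine *line, unsigned offset, unsigned char value) {
    line->data[offset] = value;
    line->dirty = true;
}

/* On replacement, a dirty line must be flushed before the slot is reused. */
void evict(struct CacheLine *line, unsigned block_addr) {
    if (line->valid && line->dirty)
        memcpy(&main_memory[block_addr], line->data, LINE_SIZE);
    line->valid = false;
    line->dirty = false;
}

int main(void) {
    struct CacheLine line = {0};
    line.valid = true;
    write_back_store(&line, 0, 0xAB);    /* main memory is now stale */
    evict(&line, 0);                     /* the block finally reaches main memory */
    printf("main_memory[0] = 0x%X\n", main_memory[0]);
    return 0;
}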

The data in main memory being cached may be changed by other entities, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the CPU updates the data in the cache, copies of data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as cache coherence protocols.

3.18.2009

Principle of Locality

When a program executes on a computer, memory references are not made uniformly across the address space; most of them are concentrated on a small number of locations. This is where Locality of Reference matters.

Locality of Reference, also known as the Principle of Locality, is the phenomenon of the same value or related storage locations being frequently accessed. Locality occurs in time (temporal locality) and in space (spatial locality).
  • Temporal Locality refers to the reuse of specific data and/or resources within relatively small time durations.
  • Spatial Locality refers to the use of data elements within relatively close storage locations. Sequential locality, a special case of spatial locality, occurs when data elements are arranged and accessed linearly, e.g., traversing the elements in a one-dimensional array.
Put simply: when exhibiting spatial locality, a program accesses consecutive memory locations, and when exhibiting temporal locality it repeatedly accesses the same memory location within a short time period. Both forms of locality occur in the following Pascal code segment:
  for i := 0 to 10 do
    A[i] := 0;

In the above Pascal code, the variable 'i' is referenced several times in the for loop: 'i' is compared against 10 to see if the loop is complete, and it is incremented by one at the end of each iteration. This shows temporal locality of reference in action, since the CPU accesses 'i' at different points within a short time period.

This program also exhibits spatial locality of reference. The loop zeros out the elements of array A by writing a zero to the first location in A, then to the second location, and so on. Assuming Pascal stores the elements of A in consecutive memory locations, each loop iteration accesses adjacent memory locations.

Cache Memory - Set Associative Mapped Cache

Set Associative mapping scheme combines the simplicity of Direct mapping with the flexibility of Fully Associative mapping. It is more practical than Fully Associative mapping because the associative portion is limited to just a few slots that make up a set.

In this mapping mechanism, the cache memory is divided into 'v' sets, each consisting of 'n' cache lines. A block from Main memory is first mapped onto a specific cache set, and then it can be placed anywhere within that set. This scheme strikes a practical balance between implementation cost and performance. The set is usually chosen by

Cache set number = (Main memory block number) MOD (Number of sets in the cache memory)

If there are 'n' cache lines in a set, the cache placement is called n-way set associative, i.e. with two blocks or cache lines per set it is a 2-way set associative cache mapping, and with four blocks or cache lines per set it is a 4-way set associative cache mapping.

Let us assume we have a Main Memory of size 4GB (2^32 bytes), with each byte directly addressable by a 32-bit address. We will divide Main memory into blocks of 32 bytes (2^5 bytes) each. Thus there are 128M (i.e. 2^32/2^5 = 2^27) blocks in Main memory.

We have a Cache memory of 512KB (i.e. 2^19 bytes), divided into blocks of 32 bytes (2^5 bytes) each. Thus there are 16K (i.e. 2^19/2^5 = 2^14) blocks, also known as Cache slots or Cache lines, in cache memory. It is clear from the above numbers that there are more Main memory blocks than Cache slots.

NOTE: The Main memory is not physically partitioned in the given way, but this is the view of Main memory that the cache sees.

NOTE: We are dividing both Main Memory and cache memory into blocks of same size i.e. 32 bytes.

Let us try 2-way set associative cache mapping, i.e. 2 cache lines per set. We will divide the 16K cache lines into sets of 2, and hence there are 8K (2^14/2 = 2^13) sets in the Cache memory.

Cache Size = (Number of Sets) * (Size of each set) * (Cache line size)

Using the above formula we can also find the number of sets in the Cache memory, i.e.

2^19 = (Number of Sets) * 2 * 2^5

Number of Sets = 2^19 / (2 * 2^5) = 2^13

Figure: Set Associative Mapped Cache

When an address is mapped to a set, the direct mapping scheme is used, and then associative mapping is used within that set.

The format for an address has 13 bits in the set field, which identifies the set in which the addressed word will be found if it is in the cache. There are five bits for the word field as before, and a 14-bit tag field; together these make up the 32 bits of the address, as shown below:
Figure: Set Associative Mapped Cache - address format

As an example of how the set associative cache views a Main memory address, consider again the address A035F014 (hexadecimal). The leftmost 14 bits form the tag field, followed by 13 bits for the set field, followed by five bits for the word field, as shown below:
Figure: cache memory - Set Associative mapped
In the example below we have chosen block 14 from Main memory and compared how it is placed under the different mapping schemes. In a Direct Mapped cache it can be placed only in frame 6, since 14 mod 8 = 6. In a Set associative cache it can be placed anywhere in set 2.
Figure: cache memory - Set Associative mapped

Check out one more solved problem below.

Figure: cache memory - Set Associative mapped
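
Here is a small C sketch (my addition) that decomposes a 32-bit address into the tag, set, and word fields of this 2-way set-associative example:

#include <stdio.h>

/* Field widths from the example above: 32-byte blocks (5 word bits),
   8K sets (13 set bits), and a 14-bit tag. */
#define WORD_BITS 5
#define SET_BITS  13

int main(void) {
    unsigned int addr = 0xA035F014;      /* the example address A035F014 (hex) */

    unsigned int word = addr & ((1u << WORD_BITS) - 1);
    unsigned int set  = (addr >> WORD_BITS) & ((1u << SET_BITS) - 1);
    unsigned int tag  = addr >> (WORD_BITS + SET_BITS);

    printf("tag = 0x%X, set = 0x%X, word = 0x%X\n", tag, set, word);
    return 0;
}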
References


1. Computer Architecture Tutorial - By Gurpur M. Prabhu.

2. Computer Architecture And Organization: An Integrated Approach - By Murdocca & Vincent Heuring

Cache Memory - Fully Associative Mapped Cache

If a Main memory block can be placed in any of the Cache slots, then the cache is said to be fully associative mapped.

Let us assume we have a Main Memory of size 4GB (2^32 bytes), with each byte directly addressable by a 32-bit address. We will divide Main memory into blocks of 32 bytes (2^5 bytes) each. Thus there are 128M (i.e. 2^32/2^5 = 2^27) blocks in Main memory.

We have a Cache memory of 512KB (i.e. 2^19 bytes), divided into blocks of 32 bytes (2^5 bytes) each. Thus there are 16K (i.e. 2^19/2^5 = 2^14) blocks, also known as Cache slots or Cache lines, in cache memory. It is clear from the above numbers that there are more Main memory blocks than Cache slots.

NOTE: The Main memory is not physically partitioned in the given way, but this is the view of Main memory that the cache sees.

NOTE: We are dividing both Main Memory and cache memory into blocks of same size i.e. 32 bytes.

In fully associative mapping, any one of the 128M (i.e. 2^27) Main memory blocks can be mapped into any single Cache slot. To keep track of which one of the 2^27 possible blocks is in each slot, a 27-bit tag field is added to each slot, holding an identifier in the range from 0 to 2^27 - 1. The tag field is the most significant 27 bits of the 32-bit memory address presented to the cache.
Figure: Fully Associative Mapped Cache
In an associative mapped cache, each Main memory block can be mapped to any slot. The mapping from main memory blocks to cache slots is performed by partitioning an address into fields for the tag and the word (also known as the “byte” field) as shown below:
Figure: Fully Associative Mapped Cache - address format
When a reference is made to a Main memory address, the cache hardware intercepts the reference and searches the cache tag memory to see if the requested block is in the cache. For each slot, if the valid bit is 1, then the tag field of the referenced address is compared with the tag field of the slot. All of the tags are searched in parallel, using an associative memory. If any tag in the cache tag memory matches the tag field of the memory reference, then the word is taken from the position in the slot specified by the word field. If the referenced word is not found in the cache, then the main memory block that contains the word is brought into the cache and the referenced word is then taken from the cache. The tag, valid, and dirty fields are updated, and the program resumes execution.

Associative mapped cache has the advantage of placing any main memory block into any cache line. This means that regardless of how irregular the data and program references are, if a slot is available for the block, it can be stored in the cache. However, this flexibility comes at the cost of considerable hardware overhead for cache bookkeeping.

Although this mapping scheme is powerful enough to satisfy a wide range of memory access situations, there are two implementation problems that limit performance.
  • The process of deciding which slot should be freed when a new block is brought into the cache can be complex. This process requires a significant amount of hardware and introduces delays in memory accesses.
  • When the cache is searched, the tag field of the referenced address must be compared with all 2^14 tag fields in the cache.
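
In software, the parallel tag search can only be modeled as a loop over every slot; the following minimal sketch (my illustration, with assumed names) shows a fully associative lookup:

#include <stdbool.h>
#include <stdio.h>

#define NUM_SLOTS 16384              /* 2^14 slots, as in the example above */

struct Slot {
    bool valid;
    unsigned tag;                    /* 27-bit tag */
};

struct Slot cache[NUM_SLOTS];

/* Hardware compares all tags in parallel; a loop stands in for that here. */
bool lookup(unsigned addr) {
    unsigned tag = addr >> 5;        /* the top 27 bits of the 32-bit address */
    for (int i = 0; i < NUM_SLOTS; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return true;             /* hit */
    return false;                    /* miss */
}

int main(void) {
    unsigned addr = 0xA035F014;
    cache[0].valid = true;
    cache[0].tag = addr >> 5;        /* preload one block so the lookup hits */
    printf("%s\n", lookup(addr) ? "hit" : "miss");
    return 0;
}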

Cache Memory - Direct Mapped Cache

If each block from main memory has only one place it can appear in the cache, the cache is said to be Direct Mapped. In order to determine to which Cache line a main memory block is mapped, we can use the formula shown below

Cache Line Number = (Main memory Block number) MOD (Number of Cache lines)

Let us assume we have a Main Memory of size 4GB (2^32 bytes), with each byte directly addressable by a 32-bit address. We will divide Main memory into blocks of 32 bytes (2^5 bytes) each. Thus there are 128M (i.e. 2^32/2^5 = 2^27) blocks in Main memory.

We have a Cache memory of 512KB (i.e. 2^19 bytes), divided into blocks of 32 bytes (2^5 bytes) each. Thus there are 16K (i.e. 2^19/2^5 = 2^14) blocks, also known as Cache slots or Cache lines, in cache memory. It is clear from the above numbers that there are more Main memory blocks than Cache slots.

NOTE: The Main memory is not physically partitioned in the given way, but this is the view of Main memory that the cache sees.

NOTE: We are dividing both Main Memory and cache memory into blocks of same size i.e. 32 bytes.

A set of 8K (i.e. 2^27/2^14 = 2^13) Main memory blocks is mapped onto each single Cache slot. In order to keep track of which of the 2^13 possible Main memory blocks is in a given Cache slot, a 13-bit tag field is added to each Cache slot, holding an identifier in the range from 0 to 2^13 - 1.

All the tags are stored in a special tag memory where they can be searched in parallel. Whenever a new block is stored in the cache, its tag is stored in the corresponding tag memory location.
Figure: Direct Mapped Cache
When a program is first loaded into Main memory, the Cache is cleared, and so while a program is executing, a valid bit is needed to indicate whether or not the slot holds a block that belongs to the program being executed. There is also a dirty bit that keeps track of whether or not a block has been modified while it is in the cache. A slot that is modified must be written back to the main memory before the slot is reused for another block. When a program is initially loaded into memory, the valid bits are all set to 0. The first instruction that is executed in the program will therefore cause a miss, since none of the program is in the cache at this point. The block that causes the miss is located in the main memory and is loaded into the cache.

This scheme is called "direct mapping" because each cache slot corresponds to an explicit set of main memory blocks. For a direct mapped cache, each main memory block can be mapped to only one slot, but each slot can receive more than one block.

The mapping from main memory blocks to cache slots is performed by partitioning a main memory address into fields for the tag, the slot, and the word, as shown below:
Figure: Direct Mapped Cache - address format
The 32-bit main memory address is partitioned into a 13-bit tag field, followed by a 14-bit slot field, followed by a 5-bit word field. When a reference is made to a main memory address, the slot field identifies in which of the 2^14 cache slots the block will be found if it is in the cache.

If the valid bit is 1, then the tag field of the referenced address is compared with the tag field of the cache slot. If the tag fields are the same, then the word is taken from the position in the slot specified by the word field. If the valid bit is 1 but the tag fields are not the same, then the slot is written back to main memory if the dirty bit is set, and the corresponding main memory block is then read into the slot. For a program that has just started execution, the valid bit will be 0, and so the block is simply written to the slot. The valid bit for the block is then set to 1, and the program resumes execution.
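
A direct-mapped hit check needs only one comparison, since the slot field picks exactly one slot. Here is a minimal sketch (my illustration, using the field widths above):

#include <stdbool.h>
#include <stdio.h>

#define WORD_BITS 5                  /* 32-byte blocks */
#define SLOT_BITS 14                 /* 2^14 cache slots */
#define NUM_SLOTS (1 << SLOT_BITS)

struct Slot {
    bool valid, dirty;
    unsigned tag;                    /* 13-bit tag */
};

struct Slot cache[NUM_SLOTS];

/* The slot field selects one slot; the stored tag decides hit or miss. */
bool is_hit(unsigned addr) {
    unsigned slot = (addr >> WORD_BITS) & (NUM_SLOTS - 1);
    unsigned tag  = addr >> (WORD_BITS + SLOT_BITS);
    return cache[slot].valid && cache[slot].tag == tag;
}

int main(void) {
    unsigned addr = 0xA035F014;
    unsigned slot = (addr >> WORD_BITS) & (NUM_SLOTS - 1);
    cache[slot].valid = true;
    cache[slot].tag = addr >> (WORD_BITS + SLOT_BITS);
    printf("%s\n", is_hit(addr) ? "hit" : "miss");   /* prints: hit */
    return 0;
}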

Check out one more solved problem below


References


1. Computer Architecture Tutorial - By Gurpur M. Prabhu.

Cache Memory - Part1

Cache memory is a small (in size) and very fast (zero wait state) memory which sits between the CPU and main memory. Unlike normal memory, the bytes appearing within a cache do not have fixed addresses. Instead, cache memory can reassign the address of a data object. This allows the system to keep recently accessed values in the cache.


Figure: Cache Memory

Using the Principle of Locality to improve performance while keeping the memory system affordable, we can pose 4 questions about any level of the memory hierarchy, and we will answer them for one level of the hierarchy, namely the cache in our case.
  • Block Placement - Where should a block be placed in the cache?
  • Block Identification - How to confirm if a block is in the cache or not?
  • Block Replacement - Which block frame in the cache should be replaced upon a miss?
  • Interaction Policies with Main Memory - What happens when reads and writes are done in the cache?
Block Placement
A number of hardware schemes have been developed for translating main memory addresses to cache memory addresses. The user does not need to know much about the address translation circuitry. This has the advantage that cache memory enhancements can be introduced into a computer without a corresponding need to modify application software.

Basically the number of cache lines is much smaller than the number of main memory blocks. As a result, an algorithm is needed for mapping main memory blocks into cache lines, and a means is needed for determining which main memory block currently occupies a cache line.

The choice of cache mapping scheme affects cost and performance, and there is no single best method that is appropriate for all situations. There are three block placement methods, namely Direct Mapped, Fully Associative, and Set Associative mapping, each covered in its own post above.

Block Identification
In general a cache has two important parts: the cache data line and the cache tags. At a more granular level, a cache line can be shown as below:
  • Valid Bit : is set to 1 when valid data is stored in the cache line.
  • Dirty Bit : is set to 1 when the data has been changed and not yet updated in main memory.
  • Tag : this field tells which address is in that line.
  • Data : the data fetched from main memory.
Since a cache is typically smaller than the entire address space, there is a possibility that any particular requested data is not present in the cache. Therefore there must be some mechanism to determine whether any requested data is present in the cache or not. The tags fulfill this purpose.

Cache has an address tag on each block frame that gives the block address. Therefore tag entry of every cache block is checked to see if it matches the block address from the CPU. As a rule, all possible tags are searched in parallel because speed is critical.

There must be a way to know that a cache block does/doesn’t have valid information. The most common procedure is to add a valid bit to the tag to say whether or not this entry contains a valid address. If the bit is not set, there cannot be a match on this address. Accordingly an address, generated by CPU (or main memory address) is divided as shown below:


Courtesy: The various parts into which a main memory address is divided.
The first division is between the block address and the block offset. The block frame address can be further divided into the tag field and the index field. The block-offset field selects the desired data from the block, the index field selects the set, and the tag field is compared against the stored tag for a hit. Although the comparison could be made on more of the address than the tag, there is no need, because of the following:

If the total cache size is kept the same, increasing associativity increases the number of blocks per set, thereby decreasing the size of the index and increasing the size of the tag.

Block Replacement
When a miss occurs, the cache controller must select a block to be replaced with the desired data. A benefit of direct-mapped placement is that hardware decisions are simplified - in fact, so simple that there is no choice: Only one block frame is checked for a hit, and only that block can be replaced. With fully associative or set-associative placement, there are many blocks to choose from on a miss. There are three primary strategies employed for selecting which block to replace:

Random - To spread allocation uniformly, candidate blocks are randomly selected. Some systems generate pseudorandom block numbers to get reproducible behavior, which is particularly useful when debugging hardware.

Advantage : simple to implement in hardware
Disadvantage : ignores Principle of Locality

Least-recently used (LRU) - To reduce the chance of throwing out information that will be needed soon, accesses to blocks are recorded. Relying on the past to predict the future, the block replaced is the one that has been unused for the longest time. LRU relies on a corollary of locality: If recently used blocks are likely to be used again, then a good candidate for disposal is the least-recently used block.

Advantage : takes locality into account
Disadvantage : as the number of blocks to keep track of increases, LRU becomes more expensive (harder to implement, slower and often just approximated).

First In First Out (FIFO) - Because LRU can be complicated to calculate, this approximates LRU by determining the oldest block rather than the LRU.
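
As a rough sketch of how LRU can be tracked (my illustration; real hardware usually only approximates this), each way in a set records a timestamp and the victim is the way with the oldest one:

#include <stdio.h>

#define NUM_WAYS 4

struct Way {
    unsigned tag;
    unsigned long last_used;         /* updated on every access */
};

struct Way set[NUM_WAYS];
unsigned long now = 0;

void touch(int way) {
    set[way].last_used = ++now;      /* record the access time */
}

/* LRU victim: the way with the smallest (oldest) timestamp. */
int lru_victim(void) {
    int victim = 0;
    for (int i = 1; i < NUM_WAYS; i++)
        if (set[i].last_used < set[victim].last_used)
            victim = i;
    return victim;
}

int main(void) {
    for (int i = 0; i < NUM_WAYS; i++)
        touch(i);                    /* access ways 0,1,2,3 in order */
    touch(0);                        /* reuse way 0 */
    printf("victim = way %d\n", lru_victim());   /* prints: victim = way 1 */
    return 0;
}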

Check out part2.

References
1. Computer Architecture - A Quantitative Approach, Third Edition.

2.17.2009

Blogger: Adding syntax highlighter to Blogger

As we know, it is really hard to post any source code on Blogger, as there is no syntax highlighting option by default (a big deficiency).

After doing some web searching, I first came across a JavaScript tool named syntaxhighlighter, and then I came across Heisencoder's post on how to add the syntaxhighlighter option to the Blogger template.

NOTE: We are about to tweak the HTML code of the Blogger template. In order to know how to edit the HTML code of the Blogger template, check this post.

NOTE: As a safety precaution, click the "Download Full Template" link to download the full HTML code of your present blog's template.

Select the "Expand Widget Templates" option to see the full HTML code in the editor. Now we have to get our hands dirty, as we are about to add some code to our Blogger template.

1. Go to http://syntaxhighlighter.googlecode.com/svn/trunk/Styles/SyntaxHighlighter.css, select all, copy the whole code, and paste it at the end of the CSS section of your Blogger HTML template (i.e., before ]]--></b:skin>).

2. Before the </head> tag, paste the following code:


<!-- Add-in CSS for syntax highlighting -->
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shCore.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushCpp.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushCSharp.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushCss.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushDelphi.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushJava.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushJScript.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushPhp.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushPython.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushRuby.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushSql.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushVb.js' type='text/javascript'></script>
<script src='http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushXml.js' type='text/javascript'></script>

NOTE: Simply remove the lines for languages you will never use (for example, Delphi) - it will reduce the loading time of the blog.

3. Before the </body> tag, insert the following:
<!-- Add-in Script for syntax highlighting -->
<script language='javascript'>
dp.SyntaxHighlighter.BloggerMode();
dp.SyntaxHighlighter.HighlightAll('code');
</script>

NOTE: The tweaking of the Blogger HTML template code is complete. Before you save the template code, click the "Preview" button to check that the code isn't crashing and works fine.

4. When writing a post that contains source code, click on the "Edit Html" tab and place the source code between pre tags as shown below
<pre name="code" class="cpp">
...Your html-escaped code goes here...
</pre>

In the above code substitute "cpp" with whatever language you're using. Choices: cpp, c, c++, c#, c-sharp, csharp, css, delphi, pascal, java, js, jscript, javascript, php, py, python, rb, ruby, rails, ror, sql, vb, vb.net, xml, html, xhtml, xslt. Full list can be accessed at Supported languages.

NOTE: Instead of remembering this snippet every time, we can simply add the HTML code to the post template so that it appears whenever we create a new post. Click on the "Settings" tab, then the "Formatting" sub-tab, and paste the HTML code into the "Post Template" box. As a result, whenever we create a new post, it is displayed when we click "Edit Html".

We have to perform HTML escaping, which can be done on sites like Centricle and Accessify.
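
For instance, a small C snippet would be posted like this, with every < and > replaced by &lt; and &gt; (an illustrative example, not from the original post):

<pre name="code" class="cpp">
#include &lt;stdio.h&gt;

int main(void) {
    printf("1 &lt; 2\n");
    return 0;
}
</pre>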

Reference

[1] Heisencoder - Link

2.16.2009

Gcov - analyzing code produced with GCC

How can a programmer know exactly which parts of his code are frequently executed and which parts are never traversed? That's where CODE COVERAGE comes to the rescue ...

Code Coverage is a measure used in software testing to describe the degree to which the source code of a program has been tested. It is a form of testing that inspects the code directly and checks the active and non-active parts of the source-code.

Basically there are a number of code coverage criteria, the main ones being:
  • Function coverage - Has each function in the program been executed?

  • Statement coverage - Has each line of the source code been executed?

  • Decision coverage (also known as Branch coverage) - Has each control structure (such as an if statement) evaluated both to true and false?

  • Condition coverage - Has each boolean sub-expression evaluated both to true and false (this does not necessarily imply decision coverage)?

  • Modified Condition/Decision Coverage (MC/DC) - Has every condition in a decision taken on all possible outcomes at least once? Has each condition been shown to affect the decision outcome independently?

  • Path coverage - Has every possible route through a given part of the code been executed?

  • Entry/exit coverage - Has every possible call and return of the function been executed?
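
To try gcov itself, here is a tiny example program (my addition) with the compile-and-report commands in the comment; -fprofile-arcs and -ftest-coverage are the GCC instrumentation flags:

/* example.c - a small program to exercise gcov.
   Build with instrumentation, run it, then generate the report:
       gcc -fprofile-arcs -ftest-coverage example.c -o example
       ./example
       gcov example.c        # writes example.c.gcov with per-line execution counts
*/
#include <stdio.h>

int main(void) {
    for (int i = 0; i < 10; i++) {
        if (i % 2 == 0)
            printf("%d is even\n", i);
        else
            printf("%d is odd\n", i);   /* both branches execute, so full branch coverage */
    }
    return 0;
}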

2.10.2009

NPTEL - Web Video courses from IIT's and IISc

NPTEL - National Programme on Technology Enhanced Learning, a project funded by the Ministry of Human Resource Development (MHRD) of India.

The main objective of NPTEL program is "To enhance the quality of engineering education in the country by developing curriculum based video and web courses".

Seven IITs and IISc Bangalore together created web courses on different engineering subjects and made them available for all other students around the world.


In the first phase of the project, supplementary content for 129 web courses in engineering/science and humanities has been developed. Each course contains materials that can be covered in depth in 40 or more lecture hours. In addition, 110 courses have been developed in video format, each comprising approximately 40 or more one-hour lectures. In the next phase, other premier institutions are also likely to participate in content creation.

The courses can be accessed at http://nptel.iitm.ac.in/

The video lectures of various courses can be directly accessed from Youtube at http://youtube.com/iit

2.04.2009

Blogger: Adding Applet code into the blogger post

Since we can't directly upload the applet code to Google and then include it in a Blogger post, check out the way we have to add the applet code to Blogger.

We can't simply add the applet code to a post in Blogger. For example, if we add the following applet code <applet code=ThreeD.class width=100 height=100></applet> to the post, we see a box with an 'x' mark in the left corner, as shown below.



In order to solve the problem, we have to modify the applet code so that Blogger can track down where the applet is actually located.


The solution to the problem comes with the "codebase" attribute ...

What is codebase?
Codebase tells the browser where the applet files are located, but if all files (including the HTML file, .class files, and images) are all together in one folder on your own server or local system, you should not specify CODEBASE. Likewise, if you are going to use the applet off-line, do not use a CODEBASE.

If you are using something like Blogger, then yes, you will definitely need to use a codebase address (since .class files can't be stored directly on the .blogspot server).

Something like this

<applet code=ThreeD.class width=100 height=100 codebase=http://www.yoursite.com/files/ ></applet>

I have provided the exact link location where the applet code is stored.

1.24.2009

Tin Tin EBook Collection - Part3

There are download links below each and every title image. If you have any problem seeing them, please leave a comment.

Check out TinTin Ebook Collection - PART1 for the books from 1 to 10.

Check out TinTin Ebook Collection - PART2 for the books from 11 to 20.

21. The Castafiore Emerald


22. Flight 714


23. Tintin and the Picaros


24. Tintin and Alph-Art


Tintin and the Lake of Sharks


Tintin in Thailand


Tintin - The Complete Companion


Tintin the freelance reporter

Tintin and the mysterious visitor

1.21.2009

Tin Tin EBook Collection - Part2

There are download links below each and every title image. If you have any problem seeing them, please leave a comment.

Check out TinTin ebook collection part1 for the books from 1 to 10.

11. The Secret of the Unicorn



12. Red Rackham's Treasure



13. The Seven Crystal Balls



14. Prisoners of the Sun



15. Land of Black Gold



16. Destination Moon



17. Explorers on the Moon



18. The Calculus Affair



19. The Red Sea Sharks



20. Tintin in Tibet



Check out TinTin ebook collection part3 for the remaining editions.