Every signal has a unique signal name, an abbreviation that begins with SIG (SIGINT for interrupt signal, for example). Each signal name is a macro which stands for a positive integer - the signal number for that kind of signal. Your programs should never make assumptions about the numeric code for a particular kind of signal, but rather refer to them always by the names defined. This is because the number for a given kind of signal can vary from system to system, but the meanings of the names are standardized and fairly uniform.
The signal names are defined in signal.h (on Linux the actual numeric values live in /usr/include/bits/signum.h), which must be included by any C program that uses signals.
Several signal numbers are architecture-dependent, as indicated in the "Value" column. (Where three values are given, the first one is usually valid for alpha and sparc, the middle one for ix86, ia64, ppc, s390, arm and sh, and the last one for mips. A - denotes that a signal is absent on the corresponding architecture.)
The signals SIGKILL and SIGSTOP cannot be caught, blocked, or ignored.
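As a minimal sketch of both points, referring to signals only by name and observing that SIGKILL cannot be caught, consider the following (the handler here does nothing useful and is purely illustrative):
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static void handler(int signum)
{
    (void)signum;                  /* a real handler would at most set a flag */
}

int main(void)
{
    /* Installing a handler for SIGINT, referred to by name, works. */
    if (signal(SIGINT, handler) == SIG_ERR)
        perror("signal(SIGINT)");

    /* SIGKILL cannot be caught: this call fails (EINVAL on Linux). */
    if (signal(SIGKILL, handler) == SIG_ERR)
        printf("signal(SIGKILL) failed: %s\n", strerror(errno));

    return 0;
}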
Next come the signals that are not in the POSIX.1-1990 standard but are described in SUSv2 and POSIX.1-2001.
Up to and including Linux 2.2, the default behavior for SIGSYS, SIGXCPU, SIGXFSZ, and (on architectures other than SPARC and MIPS) SIGBUS was to terminate the process (without a core dump). Linux 2.4 conforms to the POSIX.1-2001 requirements for these signals, terminating the process with a core dump.
Next come various other signals.
For detailed information about the side effects of these signals and the conditions that cause them, check out the libc manual.
8.23.2009
Signals in Linux - Basics
What is a Signal ?
A signal is a software interrupt delivered to notify a process or thread of a particular event. The operating system uses signals to report exceptional situations to an executing program. Some signals report errors such as references to invalid memory addresses; others report asynchronous events, such as disconnection of a phone line.
The lifetime of a signal is the interval between its generation and its delivery. A signal that has been generated but not yet delivered is said to be pending. There may be considerable time between signal generation and signal delivery. The process must be running on a processor at the time of signal delivery.
Many computer science researchers compare signals with hardware interrupts, which occur when a hardware subsystem, such as a disk I/O (input/output) interface, generates an interrupt to a processor when the I/O completes. This event in turn causes the processor to enter an interrupt handler, so subsequent processing can be done in the operating system based on the source and cause of the interrupt. When a signal is sent to a process or thread, a signal handler may be entered (depending on the current disposition of the signal), which is similar to the system entering an interrupt handler as the result of receiving an interrupt.
What causes a Signal ?
Let us see some of the events that can cause (or generate, or raise) a signal:
- A program error such as dividing by zero or issuing an address outside the valid range.
- A user request to interrupt or terminate the program. Most environments are set up to let a user suspend the program by typing Ctrl-z, or terminate it with Ctrl-c. Whatever key sequence is used, the operating system sends the proper signal to interrupt the process.
- The termination of a child process.
- Expiration of a timer or alarm.
- A call to the kill or raise functions by the same process (the sketch after this list illustrates this, along with timer expiry).
- A call to kill from another process. Signals are a limited but useful form of interprocess communication.
- An attempt to perform an I/O operation that cannot be done. Examples are reading from a pipe that has no writer, and reading or writing to a terminal in certain situations.
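As a rough sketch of two of these causes, timer expiry and an explicit raise by the process itself, consider the following (the handler and the messages are only illustrative):
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_alarm = 0;

static void on_alarm(int signum)
{
    (void)signum;
    got_alarm = 1;             /* just record that the signal arrived */
}

int main(void)
{
    signal(SIGALRM, on_alarm);

    alarm(1);                  /* ask the kernel for a SIGALRM in about 1 second */
    pause();                   /* sleep until a signal is delivered */
    if (got_alarm)
        printf("SIGALRM delivered after the timer expired\n");

    raise(SIGTERM);            /* like kill(getpid(), SIGTERM); default action terminates us */
    printf("never reached: SIGTERM's default action is to terminate\n");
    return 0;
}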
Signals may be generated synchronously or asynchronously. A synchronous signal pertains to a specific action in the program, and is delivered (unless blocked) during that action. Most errors generate signals synchronously, and so do explicit requests by a process to generate a signal for that same process. On some machines, certain kinds of hardware errors (usually floating-point exceptions) are not reported completely synchronously, but may arrive a few instructions later.
Asynchronous signals are generated by events outside the control of the process that receives them. These signals arrive at unpredictable times during execution. External events generate signals asynchronously, and so do explicit requests that apply to some other process. One obvious example would be the sending of a signal to a process from another process or thread via a kill system call. Asynchronous signals are also aptly referred to as interrupts.
A given type of signal is either typically synchronous or typically asynchronous. For example, signals for errors are typically synchronous because errors generate signals synchronously. But any type of signal can be generated synchronously or asynchronously with an explicit request.
How Signals Are Delivered ?
When a signal is generated, it becomes pending. Normally it remains pending for just a short period of time and then is delivered to the process that was signaled. However, if that kind of signal is currently blocked, it may remain pending indefinitely—until signals of that kind are unblocked. Once unblocked, it will be delivered immediately.
When the signal is delivered, whether right away or after a long delay, the specified action for that signal is taken. For certain signals, such as SIGKILL and SIGSTOP, the action is fixed, but for most signals, the program has a choice. The possible default dispositions are:
- Term - Default action is to terminate the process.
- Ign - Default action is to ignore the signal.
- Core - Default action is to terminate the process and dump core.
- Stop - Default action is to stop the process.
- Cont - Default action is to continue the process if it is currently stopped.
The program can also specify its own way of handling signals (signal handlers). We sometimes say that a handler catches the signal. While the handler is running, that particular signal is normally blocked.
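The blocking/pending behaviour described above, and the act of catching a signal with a handler, can be sketched roughly as follows (a minimal sketch; the handler body and the printed messages are purely illustrative):
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t caught = 0;

static void on_sigint(int signum)
{
    (void)signum;
    caught = 1;                        /* do as little as possible in a handler */
}

int main(void)
{
    struct sigaction sa;
    sigset_t block_set, old_set;

    /* Catch SIGINT instead of taking its default (Term) action. */
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    /* Block SIGINT: if it is generated now, it merely stays pending. */
    sigemptyset(&block_set);
    sigaddset(&block_set, SIGINT);
    sigprocmask(SIG_BLOCK, &block_set, &old_set);

    raise(SIGINT);                     /* generated, but blocked, so it stays pending */
    printf("SIGINT generated while blocked: caught = %d\n", (int)caught);

    /* Unblock: the pending SIGINT is delivered and the handler runs. */
    sigprocmask(SIG_SETMASK, &old_set, NULL);
    printf("after unblocking: caught = %d\n", (int)caught);

    return 0;
}
On a typical Linux system the first printf reports caught = 0 (the signal is still pending) and the second reports caught = 1, since unblocking causes the pending SIGINT to be delivered immediately.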
References:
1. Libc Manual
7.29.2009
Mysterious C program
I never thought one could write such a mysterious 'C' program, one that compiles without any errors and produces such a strange output.
#include <stdio.h>
main(t,_,a)
char *a;
{return!0<t?t<3?main(-79,-13,a+main(-87,1-_,
main(-86, 0, a+1 )+a)):1,t<_?main(t+1, _, a ):3,main ( -94, -27+t, a
)&&t == 2 ?_<13 ?main ( 2, _+1, "%s %d %d\n" ):9:16:t<0?t<-72?main(_,
t,"@n'+,#'/*{}w+/w#cdnr/+,{}r/*de}+,/*{*+,/w{%+,/w#q#n+,/#{l,+,/n{n+\
,/+#n+,/#;#q#n+,/+k#;*+,/'r :'d*'3,}{w+K w'K:'+}e#';dq#'l q#'+d'K#!/\
+k#;q#'r}eKK#}w'r}eKK{nl]'/#;#q#n'){)#}w'){){nl]'/+#n';d}rw' i;# ){n\
l]!/n{n#'; r{#w'r nc{nl]'/#{l,+'K {rw' iK{;[{nl]'/w#q#\
n'wk nw' iwk{KK{nl]!/w{%'l##w#' i; :{nl]'/*{q#'ld;r'}{nlwb!/*de}'c \
;;{nl'-{}rw]'/+,}##'*}#nc,',#nw]'/+kd'+e}+;\
#'rdq#w! nr'/ ') }+}{rl#'{n' ')# }'+}##(!!/")
:t<-50?_==*a ?putchar(a[31]):main(-65,_,a+1):main((*a == '/')+t,_,a\
+1 ):0<t?main ( 2, 2 , "%s"):*a=='/'||main(0,main(-61,*a, "!ek;dc \
i@bK'(q)-[w]*%n+r3#l,{}:\nuwloca-O;m .vpbks,fxntdCeghiry"),a+1);}
Output for the above code is given below.
Can anyone explain, step by step, how this code produces the output below?
On the first day of Christmas my true love gave to me
a partridge in a pear tree.
On the second day of Christmas my true love gave to me
two turtle doves
and a partridge in a pear tree.
On the third day of Christmas my true love gave to me
three french hens, two turtle doves
and a partridge in a pear tree.
On the fourth day of Christmas my true love gave to me
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
On the fifth day of Christmas my true love gave to me
five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
On the sixth day of Christmas my true love gave to me
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
On the seventh day of Christmas my true love gave to me
seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
On the eighth day of Christmas my true love gave to me
eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
On the ninth day of Christmas my true love gave to me
nine ladies dancing, eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
On the tenth day of Christmas my true love gave to me
ten lords a-leaping,
nine ladies dancing, eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
On the eleventh day of Christmas my true love gave to me
eleven pipers piping, ten lords a-leaping,
nine ladies dancing, eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
On the twelfth day of Christmas my true love gave to me
twelve drummers drumming, eleven pipers piping, ten lords a-leaping,
nine ladies dancing, eight maids a-milking, seven swans a-swimming,
six geese a-laying, five gold rings;
four calling birds, three french hens, two turtle doves
and a partridge in a pear tree.
4.29.2009
Beware of NVIDIA Graphic cards in Laptops
NVIDIA is famous for its high-end graphics cards, which makes many people want a machine with an NVIDIA graphics card. But beware of these NVIDIA graphics chips: they suffer from an overheating issue caused by weak die/packaging material. The NVIDIA G84 and G86 graphics chipsets appear to have this problem.
According to our sources, the failures are caused by the solder bumps that connect the I/O terminations of the silicon chip to the pads on the substrate. In NVIDIA's GPUs, these solder bumps are made with a high-lead alloy. The thermal mismatch between the chip and the substrate has grown substantially in recent chip generations, apparently leading to fatigue cracking. Add to that a growing chip size (double the chip dimension, quadruple the stress on the bump) as well as generally hotter chips, and you may have the perfect storm that takes high-lead bumps beyond their limits. Apparently, problems arise at what NVIDIA calls "extreme temperatures", which we hear may be temperatures not much above 70 degrees Celsius. Major laptop manufacturers like DELL and HP have accepted that there are issues with laptops shipped with these NVIDIA graphics chips.
Supporting the theory that the high-lead solder bumps are indeed at fault is the fact that NVIDIA ordered an immediate switch to eutectic solder, instead of the high-lead version, in the last week of July (source).
According to NVIDIA, the affected GPUs are experiencing higher-than-expected failure rates that cause video problems, which it attributes to a weak die/packaging material set that may fail under large GPU temperature fluctuations.
If your NVIDIA GPU fails, you may see intermittent symptoms during the early stages of failure, including:
* Multiple images
* Random characters on the screen
* Lines on the screen
* No video
* Black Screen
DELL has agreed that some of its laptop models, namely the Inspiron 1420, Latitude D630, Latitude D630c, Dell Precision M2300, Vostro Notebook 1310, Vostro Notebook 1400, Vostro Notebook 1510, Vostro Notebook 1710, XPS M1330, and XPS M1530, are affected by the issue, and has released a BIOS update to minimize the effect of the GPU failure problem; new systems are being shipped with the updated BIOS revisions.
DELL is offering an additional 12-month limited warranty enhancement specific to this GPU failure issue. For all customers worldwide, DELL plans to add 12 months of coverage for this issue to the existing limited warranty, up to 60 months from the date of purchase, for the systems listed above.
DELL's BIOS fix simply runs the laptop fan far more aggressively than usual; as a result, the battery drains faster and battery life suffers.
HP also identified a hardware issue with certain HP Pavilion dv2000/dv6000/dv9000 and Compaq Presario V3000/V6000 series notebook PCs, and has released a new BIOS for these notebook PCs.
HP says"This service enhancement program is available in North America for 24 months after the start of your original standard limited warranty for issues listed below; otherwise your current standard limited warranty applies. Customers who already have a 24 month or longer warranty period will be covered under their existing standard HP Limited Warranty."Apple is also facing the same problem with the macbook pro's.
Apple says that NVIDIA assured it that Mac computers with these graphics processors were not affected. However, after an Apple-led investigation, Apple determined that some MacBook Pro computers with the NVIDIA GeForce 8600M GT graphics processor may be affected; if the NVIDIA graphics processor in your MacBook Pro has failed, or fails within two years of the original date of purchase, a repair will be done free of charge, even if your MacBook Pro is out of warranty. Customers are really angry that the OEMs are not providing a permanent solution to the GPU problem and are not covering some of the laptop models facing the same issue. Some customers have even sued NVIDIA to get their laptops with faulty graphics chips repaired.
Even now, DELL can be seen shipping laptops with the defective NVIDIA graphics chips.
3.22.2009
Cache Memory - Part2
Interaction Policies with Main Memory
READ operations dominate processor cache accesses, since all instruction fetches are READs and most instructions do not WRITE to memory. When the address of the block to be READ is available, the tag is read, and if it is a HIT the data is read from the cache.
In case of a miss, the READ policies are:
- Read Through - the block is read directly from main memory to the CPU.
- No Read Through - the block is first read from main memory into the cache and then from the cache to the CPU, so the cache is updated as well.
A miss is comparatively slow because it requires the data to be transferred from main memory to the CPU, which incurs a delay (main memory is much slower than cache memory) as well as the overhead of recording the new data in the cache before it is delivered to the processor. To take advantage of locality of reference, the CPU copies data into the cache whenever it accesses an address not present in the cache. Since it is likely the system will access that same location again shortly, the system saves wait states by having that data in the cache. Thus cache memory handles the temporal aspect of memory access, but not the spatial aspect.
Caching memory locations by themselves won't speed up program execution if we constantly access consecutive memory locations (spatial locality of reference). To solve this problem, most caching systems read several consecutive bytes from memory when a cache miss occurs. 80x86 CPUs, for example, read between 16 and 64 bytes at a shot (depending upon the CPU) on a cache miss. Why read them in a block rather than as you need them? As it turns out, most memory chips available today have special modes which let you quickly access several consecutive memory locations on the chip. The cache exploits this capability to reduce the average number of wait states needed to access memory.
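A rough way to observe this effect is to time a sequential walk over a large array against a cache-line-sized strided walk over the same elements. This is only a sketch: the array size and the stride of 16 ints (64 bytes) are arbitrary choices, and the actual timings depend entirely on the CPU and cache at hand.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (16 * 1024 * 1024)    /* 16 Mi ints = 64 MiB, much larger than the cache */

static void time_walk(const char *label, const int *a, size_t step)
{
    struct timespec t0, t1;
    long long sum = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t start = 0; start < step; start++)   /* every element is read exactly once */
        for (size_t i = start; i < N; i += step)
            sum += a[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%-10s sum=%lld time=%.3f s\n", label, sum, secs);
}

int main(void)
{
    int *a = malloc((size_t)N * sizeof *a);
    if (!a)
        return 1;
    for (size_t i = 0; i < N; i++)
        a[i] = (int)(i & 0xff);

    /* Same total work; only the access pattern differs. */
    time_walk("sequential", a, 1);    /* consecutive addresses, full use of each cache line */
    time_walk("strided", a, 16);      /* 64-byte jumps, roughly one int used per line fetched */

    free(a);
    return 0;
}
On most machines the strided walk is noticeably slower even though it reads exactly the same data, because each cache line fetched contributes only one useful element per pass.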
It is not the same with cache WRITE operations. Modifying a block cannot begin until the tag is checked to see if the address is a hit. Also, the processor specifies the size of the write, usually between 1 and 8 bytes; only that portion of the block can be changed. In contrast, reads can access more bytes than necessary without a problem.
The Cache WRITE policies on write hit often distinguish cache designs:
- Write Through - the modified data is written to both the block in the cache memory and the block in main memory.
Advantage:
1. READ miss never results in writes to main memory.
2. Easy to implement
3. Main Memory always has the most current copy of the data (consistent)
Disadvantage:
1. WRITE operation is slower as we have to update both Main Memory and Cache Memory.
2. Every write needs a main memory access and as a result uses more memory bandwidth.
- Write Back - the modified data is first written only to the block in the cache memory. The modified cache block is written to main memory only when it is replaced. In order to reduce the frequency of writing back blocks on replacement, a dirty bit (a status bit) is commonly used to indicate whether the block is dirty (modified while in the cache) or clean (not modified). If it is clean, the block is not written back on a miss.
Advantage:
1. WRITEs occur at the speed of the cache memory.
2. Multiple WRITEs within a block require only one WRITE to main memory, and as a result use less memory bandwidth (see the sketch after the write-miss policies below).
Disadvantage:
1. Harder to implement
2. Main memory is not always consistent with the cache; reads that result in replacement may cause writes of dirty blocks to main memory.
The cache WRITE policies on a write miss are:
- Write Allocate - the memory block is first loaded into cache memory from main memory on a write miss, followed by the write-hit action.
- No Write Allocate - the block is directly modified in the main memory and not loaded into the cache memory.
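To see how much main-memory traffic write back can save when a program writes to the same block repeatedly, here is a toy model that simply counts the writes reaching main memory. The single-block "cache", the block numbers, and the access pattern are all made up for illustration.
#include <stdio.h>

struct cache {
    int tag;            /* which block is currently cached (-1 = empty) */
    int dirty;          /* write back only: block modified since it was loaded */
    long mem_writes;    /* writes that actually reach main memory */
};

static void write_through(struct cache *c, int block)
{
    c->tag = block;     /* assume the block is also placed in the cache */
    c->mem_writes++;    /* every write also goes to main memory */
}

static void write_back(struct cache *c, int block)
{
    if (c->tag != block) {          /* miss: replace the currently cached block */
        if (c->dirty)
            c->mem_writes++;        /* flush the dirty block first */
        c->tag = block;
        c->dirty = 0;
    }
    c->dirty = 1;                   /* modify only the cached copy */
}

int main(void)
{
    struct cache wt = { -1, 0, 0 }, wb = { -1, 0, 0 };

    /* 1000 writes to block 7, then one write to block 8. */
    for (int i = 0; i < 1000; i++) {
        write_through(&wt, 7);
        write_back(&wb, 7);
    }
    write_through(&wt, 8);
    write_back(&wb, 8);

    printf("write through: %ld memory writes\n", wt.mem_writes);  /* 1001 */
    printf("write back:    %ld memory writes\n", wb.mem_writes);  /* 1: the flush of block 7 */
    return 0;
}
In this toy run, write through performs 1001 memory writes while write back performs a single one (the flush of dirty block 7 when block 8 replaces it), which is the bandwidth advantage listed above; the price is that main memory stays stale until the dirty block is written back.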
The data in main memory being cached may be changed by other entities, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the CPU updates the data in the cache, copies of data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as cache coherence protocols.