BATTERIES (continued)
Progress in this area has been so fast that several new and sometimes conflicting considerations need to be outlined. These are mainly how best to store cells (charged or discharged), the dangers of over-charging and over-discharging, and the effect of ambient temperature.
As well as the progress in the secondary (rechargeable) types there has been some progress in the primary types too. I am now firmly of the opinion that Lithium cells (AA or AAA) are best for cameras and remote controllers. We aren't just talking economy here: reliability and dependability are a far greater issue for these items. This brings up longevity, and for my money Lithium cells win there too. They can be purchased at reduced prices on-line. Try eBay.
Friday, 2 January 2015
Monday, 28 December 2009
MORE on MICROPHONES
This entry supplements a previous post on microphones. This time we are going to look at the implications of so-called quality or goodness. To do that we need to recap a few "givens." Although aural acuity varies between individuals just as our vision does, it is a matter of fact that the fairer sex can generally hear rather higher frequencies than the men. The usual range for a youngish person is said to be 20 to 20,000Hz at the outside, with the males being some 10 to 20% lower at the high end. It is also true that we can all follow a slowly rising tone up much further than we can detect the same frequency when it is presented abruptly. As we get older our hearing tails off, particularly in the higher frequency range.
Unfortunately there are some side issues. "Attack" or onset may require the presence of frequencies that lie beyond our perceived range. Another issue concerns "distortion." Most people would agree that this occurs when the process imparts new elements to the original sound. Unfortunately, it is nonetheless true that some such distortions are thought of as pleasing! Therein lies a very busy little bundle. Surely by that definition even amplification is a "distortion"? Yes - and so it might be argued. The words used to describe processed sounds are nearly all borrowed from existing language. In the main the English language as interpreted by the Americans!
This gives rise to such wonders as "colouration" (coloration if you must).
"Loudness" - "Level" - "Fader" - "Presence" - "Phasing" - "Chorusing" -
"Trim" - "Treble" - "Middle" - "Bass" - "Echo" - "Flanging" - "Reverb"- "Noise"- "Reflections" - etc. etc.
And here are a couple of French ones - "Ambience" - "Envelope"
Then there are the processes:- "Equalisation" - "Parametric EQ" - "Notch Filter" "Graphic EQ" - "Harmonizer-ing" - "Limiting" - "Compression" - Signal to Noise & "SINAD" - "Over-spill" - "Mono" - "Stereo/Binaural" - "Noise-cancelling" - {there'll be more!}.
We haven't got into the musical terms yet either if only because not all sound recording is of music.
Traditional thinking would have it as a pre-requisite that a microphone should be able to deal with all the frequencies in the range of our hearing without any undue distortion. However, this does not take into account the peculiarities of the human ear. There is a non-linearity in our hearing that changes with perceived volume or intensity. Then there is this very peculiar selective and directional ability that we have to somehow select the things we most want to hear. The best example is that of conversation across a noisy room. Some people can listen to, and concentrate upon, just a single instrument from a whole orchestra. Once you make a recording some of these clever options are denied to us.
This innate ability that we humans have to be selective with what we hear is very interesting. The more so since I have noticed that with advancing years my ability to do this is severely curtailed. Given that we might call the unwanted sounds "Noise", then we seem able to deploy some very sophisticated noise cancelling techniques. We do it without much thought. Those who have sought to understand that mechanism more thoroughly are soon forced to recognise just how very clever the faculty actually is. For example, our ears are directional collectors, but they are displaced to each side of our head and that allows us to tilt and otherwise align our heads for maximum pick-up of the desired sound. It emerges that we can actually get some unwanted sounds to cancel each other, and the reason for this is to do with time delays in the arrival of the sound at each ear. In some circumstances that delay causes the two sound "waves" to oppose each other and the energy is dissipated in mutual cancellation. Our brain can do the rest! By the precise swivel of the head we can change the group of frequencies that we "tune in to" from a high group to a low group. Some of the mechanisms are not well understood. {See older posts "EAR 'ERE" at foot of page}.
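The cancellation effect described above can be sketched in a few lines. This is a simplified illustration only - a single pure tone plus a delayed copy of itself, with round figures for ear spacing and the speed of sound - and not a model of real hearing:

```python
import math

def combined_amplitude(freq_hz, delay_s):
    """Peak amplitude of a tone added to a delayed copy of itself.

    When the delay equals half the tone's period the two copies arrive
    in anti-phase and cancel; a full-period delay reinforces instead.
    """
    phase = 2 * math.pi * freq_hz * delay_s
    # The sum of sin(wt) and sin(wt - phase) has peak 2*|cos(phase/2)|
    return 2 * abs(math.cos(phase / 2))

# A ~0.6 ms inter-ear delay: ears ~20 cm apart, sound at ~343 m/s
delay = 0.20 / 343
for f in (400, 800, 1700):
    print(f, round(combined_amplitude(f, delay), 3))
```

Swivelling the head changes the effective delay, and with it which frequencies cancel and which reinforce - which is the "tuning in" described above.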
The point is that once such sounds are recorded and played out to us through a single sound source such as a speaker, we can no longer do that. The microphone will react equally to ALL the sounds unless........
NOISE CANCELLING & CARDIOID MICROPHONES
To try and get around such problems, engineers have developed all manner of devices. Microphones with directional pick-up. Microphones that do that by noise cancelling - just as we do. Shielded microphones. Limited sensitivity microphones. Then follows the sound processing devices. At first developed to improve the selection, then to remove unwanted noise, and finally to change or enhance. The latter class are collectively known as "EFFECTS."
The larger size of condenser microphone reaches back to the earliest days of sound recording and in particular, broadcasting. Certainly back to the 1950's if not earlier. They were said to give an excellent, much sought-after result. There is no doubt that they were a triumph in their time, but as technical knowledge has moved on, so has human taste & opinion.
The first condenser microphones had to make use of valve(s) to amplify the tiny signal coming from the transducer. Thus the need for a power supply, which became standardised at 48V. The triode valves usually chosen do have a downside in that they produce some "noise" which is heard as a hiss by those with acute hearing sensitivity. The studios sought out ways to reduce or mask this tendency with such clever devices as noise-gates and filters; the filtering side of this process is usually referred to as "EQ" or Equalisation. An interesting by-product of the valve is said to be the "warmth" it adds to the sound. It is here that we depart from pure science, as the precise reason for that remains elusive and open to conjecture. For my own part I would put it down to an inbuilt "compression", of which more later.
Certain anomalies now present themselves. While a certain microphone, amplifier, indeed studio, might make your particular guitar, violin, or voice sound the way you wanted it to, another might not! Therein lies a discussion that might yet go on forever. It leads us to such phrases of convenience as "beauty is in the eye of the beholder" and "it's right if you think it's right."
For the would-be sound engineer the availability of so many choices can be a veritable nightmare. Philosophically it might lead to another phrase that I have grown to like: "The good enough sound." This leads us to the study of perceived improvements. One very important aspect of this is known as "SIGNAL to NOISE" - that is, the thing we wanted to accentuate or make prominent, versus everything we did not. In measuring what we have achieved in this respect our old unit of sound intensity, the decibel (dB), makes a re-entry. Because it is difficult to entirely separate the wanted signal from the noise, we often talk in terms of SINAD - the SIgnal-plus-Noise-And-Distortion to noise-plus-distortion ratio - expressed in dB. Whoops - have we lost you? Think slowly here. We can rarely, if ever, have perfection. We will have to settle for there being some noise, some unwanted sounds. It then becomes a question of masking the unwanted with extra volume or using other rather snazzy devices such as a noise gate. Is it worth a moment's thought? I think so.
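The dB arithmetic here is straightforward. A minimal sketch, using invented power figures purely for illustration:

```python
import math

def ratio_db(p_wanted, p_unwanted):
    """A power ratio expressed in decibels."""
    return 10 * math.log10(p_wanted / p_unwanted)

# Hypothetical power measurements in arbitrary units
signal, noise, distortion = 100.0, 1.0, 0.5

snr   = ratio_db(signal, noise)                                # classic S/N
sinad = ratio_db(signal + noise + distortion, noise + distortion)

print(round(snr, 1), round(sinad, 1))
```

Note that SINAD always comes out a little lower than the plain S/N figure for the same signal, because the distortion products are counted against us.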
ADDENDUM - It can be argued that the various studio processes used to condition the condenser microphone (or any other device inc other types of microphone) will change, and may even reduce, their overall performance.
THE NOISE GATE
This started life in the studios when it was realised that even a low-level hiss (from processing circuits, wiring etc.) can be very annoying in the dead quiet of an acoustic (anechoic) chamber. The idea is that the signal is clamped or shut off to absolute quiet until it has risen to a small but predetermined level called the threshold. Signals above this are allowed through, noise and all, their greater volume tending to mask its presence.
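A bare-bones sketch of the idea. Real gates ramp the signal in and out smoothly with attack and release times rather than switching hard, which this toy version ignores:

```python
def noise_gate(samples, threshold):
    """Clamp any sample below the threshold to absolute silence;
    pass everything at or above it through untouched."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# A little low-level hiss followed by the wanted signal
hiss_then_music = [0.01, -0.02, 0.015, 0.4, -0.6, 0.5]
print(noise_gate(hiss_then_music, threshold=0.05))
```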
DIGITISATION
We spoke in another earlier post about the digitisation process. Technological advances have moved us in this direction because these techniques get us over several other difficult hurdles. For example the digital process can make faithful copies of sound tracks & sequences without any degeneration. We can do complex edits and "condition" the sound in ways that were once impossible. {e.g. change tempo but not pitch and vice-versa}. We can also accomplish with software, many of the analogue processes that only very expensive hardware can achieve.
However, there is a slight downside to it in that the best quality of sound needs many samples and levels which leads to large files. While technology has made huge strides in digital storage, we are still challenged to reduce the file-size and to accept the resulting shortfall in sound quality. Many are they who will argue that the process is so magical that they can't tell any difference!
To a purist however, this quality of sound is a non-starter. The tricks used to save space come at a price in terms of real sound quality. This is further compounded by the inferior capabilities of small speakers and some ear-pieces.
One needs to look at the weakest link in the chain to decide just what the other links should be like. Look at ALL the outlets that the source sound will traverse and apply a similar rule. Sometimes, of course, one just has to do as one is told by "the piper who calls the tune" or the depth of our pockets!
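The arithmetic behind those large files is simple enough. A quick sketch for raw, uncompressed PCM at the usual CD-type settings (check the figures for your own format):

```python
def pcm_bytes(seconds, sample_rate=44100, bits=16, channels=2):
    """Size of raw (uncompressed) PCM audio in bytes:
    samples per second x bytes per sample x channels x duration."""
    return seconds * sample_rate * (bits // 8) * channels

# One minute of stereo 16-bit 44.1 kHz audio, before any compression
print(pcm_bytes(60) / 1_000_000, "MB")
```

It is exactly this roughly-ten-megabytes-a-minute figure that drove the development of the space-saving "tricks", and the quality arguments that follow them.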
COMPRESSION
Let's look at Limiting first. Right from the earliest times it became clear that audio equipment was not tolerant of overload. Just look at the grooves of an old plastic record. If the stylus or cutter that is making the master recording swings too far laterally, it will slice into an adjacent groove or channel. Yet we need to consider what happens during very quiet passages when the signal is barely able to modulate the cutter. What we hear then is the hiss of the groove mixed with the quiet signal that we really wanted. If we solve the first problem by saying "there is a LIMIT beyond which we will not let this cutter move" - we get over the first snag. However, the ideal solution would be if we could turn the volume of quiet passages UP and turn DOWN the louder sequences.
Thus is born the idea behind COMPRESSION. That this is now done electronically is of little importance. If these ideas are carried to extremes, the dynamic range is foreshortened and gives rise to an obviously processed sound that is "Punchier." Some folk like that EFFECT. We need to use compression even inside our own ears and we certainly need to guard against overloads when using tape or even during digitisation. In some respects it's not very different from the shock absorber on a car wheel.
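A toy version of the idea, working on single sample values. A real compressor acts on the signal's envelope and has attack and release times, which this sketch ignores; the threshold and ratio below are invented figures:

```python
def compress(sample, threshold=0.5, ratio=4.0):
    """Above the threshold, let the level grow only 1/ratio as fast.
    A very large ratio turns this into a hard limiter."""
    level = abs(sample)
    if level <= threshold:
        return sample
    squeezed = threshold + (level - threshold) / ratio
    return squeezed if sample >= 0 else -squeezed

# Quiet samples pass untouched; loud ones are reined in
print([round(compress(s), 3) for s in (0.2, 0.5, 0.9, -1.0)])
```

Turn the overall gain up afterwards and the quiet passages come up with it while the peaks stay safely below the limit - which is exactly the "quiet UP, loud DOWN" ideal described above.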
NOISE CANCELLING MICROPHONES
Please refer to my previous BLOG on microphones for a description of how a noise cancelling microphone works. We need to remember that audible sound reaches us through slight pressure variations in the air. We are sensitive to a very large range indeed yet we must guard against overloads because our ears are very fragile.
LOW V HIGH IMPEDANCE
The advantage of a low internal AC resistance (called impedance or "Z") is that incidental noise coming through the shielding (cable etc.) is shunted away. The disadvantage is a much lower signal level in need of more amplification. Go to the button at the end of the page to see another of my previous BLOG pieces, "EAR 'ERE", for more.
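As a rough illustration of that shunting effect: treat the interference as a tiny induced current, and the noise voltage it produces is simply that current times the impedance it flows through. The figures below are invented for illustration:

```python
def induced_noise_volts(noise_current_amps, impedance_ohms):
    """Noise voltage developed when an induced interference current
    flows through the source impedance (Ohm's law: V = I x Z).
    The lower the impedance, the more the noise is shunted away."""
    return noise_current_amps * impedance_ohms

i_noise = 1e-9  # a hypothetical 1 nA of induced interference
for z in (200, 10_000):  # low-Z mic versus high-Z mic
    print(z, induced_noise_volts(i_noise, z))
```

The same interference gives fifty times the noise voltage across the high-impedance source - hence the preference for low-Z microphones on long cable runs.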
Monday, 14 September 2009
Automatic Universal Battery Chargers
Whilst almost anything is possible - and more will be - some information is required in advance by the charger. This starts with the voltage and to some extent the capacity. There are quite a few chargers that can cope with varying capacities and to a lesser extent voltages. They do this by measuring the temperature of the cell(s) under charge. Indeed many battery packs have an inbuilt temperature sensor.
When Ni-Cad & Ni-M-Hy type cells are fully charged and/or cannot chemically convert the energy anymore, their temperature rises quite substantially. This can be used to "assume" that a fully charged state has now arrived.
Another, though less reliable way is to monitor the rise in cell voltage. There's a snag here! If one or more cells in a pack become shorted (as can happen with dendrite growth in Ni-Cads), the expected final terminal voltage can never be achieved. This is what makes the temperature measuring method much safer. Another issue in its favour is that even cells that have lost their original capacity will be properly detected for a fully charged condition.
Here you see why it is so important that the cells in a battery should be evenly matched. We need them all to be charged/discharged at the same rate & time.
POSTSCRIPT - Another method of detecting when a Ni-Cad or Ni-M-Hy battery is fully charged has come to my notice. It is called the "minus Delta U" method, which works thus: when the above battery types are charged with a constant current their voltage rises continuously to a maximum, which then falls slightly if the charge is maintained. This fall can be used to terminate the charge.
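The "minus Delta U" test can be sketched as a simple check on a log of voltage readings. The 5mV drop threshold below is an illustrative figure, not a manufacturer's value:

```python
def minus_delta_v_full(voltage_log, drop_mv=5):
    """True once the latest reading has fallen at least `drop_mv`
    millivolts below the peak seen so far - the "minus Delta U" sign
    that a NiCd/NiMH cell on constant current is fully charged."""
    peak = max(voltage_log)
    return (peak - voltage_log[-1]) * 1000 >= drop_mv

# Volts, one reading per charging interval: rise, peak, then sag
readings = [1.38, 1.41, 1.44, 1.45, 1.449, 1.444]
print(minus_delta_v_full(readings))
```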
UPDATE - whether covered elsewhere here or not, there's another point to make concerning pulse battery charging. This was a common method on older cars equipped with a dynamo that delivered DC at a voltage that varied with RPM. A "Regulator" was fitted to ensure that charging currents did not exceed a sensible level for too long. The period of each pulse was controlled by a voltage- and current-operated solenoid which interrupted the current flow, often diverting it via a resistor. Thus two levels of current were applied in pulses whose duration depended on the battery's state of charge. This practice gave something else that later alternator circuits did not, concerning the formation of sulfate on the plates. Sulfation happens when the battery sits in a wholly or partially discharged state, and since the deposit insulates the area of the plates to which it has adhered, it effectively lowers the battery capacity. It now seems that the older pulsed charging systems could somehow dissolve any such formation more effectively than a constant current/voltage system, and this has brought us to the electronic pulse charger. In a car it is the job of the ECU; external to the car, specialised electronically controlled pulsed lead-acid battery charging and conditioning brings those advantages back.
Nor does the story end there. Other battery types, in particular Ni-Cad & Ni-M-Hy types, can be charged up much quicker with the pulse method. It seems that the chemical conversion is more efficient when charged in pulses. Of those chargers I have seen, the pulses are about 1A with a duty of 25 to 50% - in other words something like 1 to 2 secs in 4.
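The average current from such a pulsed charger is just the pulse current scaled by its duty cycle:

```python
def average_charge_current(pulse_amps, on_seconds, period_seconds):
    """Mean current delivered by a pulsed charger:
    pulse amplitude scaled by the fraction of time it is on."""
    return pulse_amps * on_seconds / period_seconds

# The figures quoted above: ~1 A pulses, on for 1 to 2 s out of every 4 s
print(average_charge_current(1.0, 1, 4), average_charge_current(1.0, 2, 4))
```

So a 1A pulse train at 25 to 50% duty delivers an average of a quarter to half an amp, yet the brief full-amp bursts are what seem to help the chemistry along.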
Thursday, 3 September 2009
Battery Charging
Oh dear - the need to tackle this in a bit more detail arises! Please read the previous section first. There are several ways to charge. We'll get rid of two straight away: "Trickle" & "Fast".
These are each the antithesis of the other. A "trickle" refers to the current which can be continually applied without any damage. Such damage occurs through overheating or, in the extreme, chemical "gassing".
A "Fast" charge always does damage and is the trade-off for a more immediate restoration to use.
The two most common methods of charging are referred to as the "Constant current" and "Constant voltage" methods. If we study the constant voltage method first, we will see why the other becomes necessary.
I think we might stay with the idea of a "battery" here - that is, a collection of cells that together have a terminal PD (that's the voltage when under a medium discharge load) of say 12Volts. When a lead-acid car battery is in a fully charged state the cells will be slightly over their nominal 2V each. In fact 2.3V is the usual value. There need to be six cells and so the fully charged voltage will be 13.8V {Let's say 14V}. The unloaded Voltage is known as the EMF. {ElectroMotive Force}.
It's worth saying that a 12V battery made from Ni-Cad, {or indeed Ni-M-Hy}, cells would need 10 cells. {to equal 12V}. When fully charged these would actually achieve an EMF of 14 to 16Volts.
In either case the voltage from the battery charger needs to be at least as large in order to overcome that standing voltage and thereby impart a charge.
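The cell arithmetic above in a couple of lines:

```python
def pack_voltage(cells, volts_per_cell):
    """Terminal voltage of a battery: cells in series simply add."""
    return cells * volts_per_cell

lead_acid_charged = pack_voltage(6, 2.3)   # six lead-acid cells at 2.3 V each
nicad_nominal     = pack_voltage(10, 1.2)  # ten Ni-Cad cells at 1.2 V nominal
print(round(lead_acid_charged, 1), round(nicad_nominal, 1))
```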
Can you see that (you must make yourself see this) the charging voltage must be at least 14V? The applied voltage must be able to overcome the standing "surface" charge. We also need to consider some current "limiting", especially if the battery is in a very low state of charge.
Once the battery has "caught up" and is equal to the applied Voltage the charging will cease. It is therefore very safe to leave unattended as overcharging cannot occur. However the rate of charge will diminish over time as the battery voltage rises and becomes ever more equal. The total process slows down (to a stop) and takes longer than it really needs to. {Exponential decay}.
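That slowing-to-a-stop can be sketched with a crude exponential model. All the figures below are illustrative, not measurements:

```python
import math

def charge_current(t_seconds, v_charger=14.0, v_start=11.0,
                   r_ohms=0.5, tau_seconds=3600.0):
    """Very rough constant-voltage charging model: the current starts
    at (Vcharger - Vbattery)/R and decays exponentially as the battery
    voltage catches up with the charger's."""
    i_initial = (v_charger - v_start) / r_ohms
    return i_initial * math.exp(-t_seconds / tau_seconds)

for hours in (0, 1, 3, 6):
    print(hours, round(charge_current(hours * 3600), 3))
```

The current falls away steeply at first and then crawls - the "exponential decay" noted above, and the reason the last part of a constant-voltage charge takes so disproportionately long.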
This, then, is the reason for the alternative where the applied voltage is much higher and the charge rate is much more constant over time. This method carries with it the dangers of over-charging mentioned earlier in text.
In practice most good chargers will employ a mixture of both methods for what is wanted here is Maximum speed without loss of capacity or any damage. So we will charge at a medium rate until the cell voltages all rise and then "fold" the current back. Can we do any better?
I'm afraid to tell you we can! The cell voltage will rise to a peak when it is fully charged and this works fine if all the cells are in the same condition. In practice they are often not so equal as the chemicals within them age at different rates.
There's another problem. We might not know how large (how much capacity) a given battery actually has. If this is so, we can't use the charging time as a guide. The answer is to measure the temperature. There will be a rise in temperature when the chemical conversion is done. If we sense that, then we might be on the way to some pretty smart battery charging.
What more do you really need to know?
Trickle and fast charging are each the antithesis of the other. A "trickle" refers to the current which can be continually applied without any damage. Such damage occurs through overheating or chemical "gassing" in the extreme.
A "Fast" charge always does damage and is the trade off for a more immediate restoration to use.
The two most common methods of charging are referred to as the "Constant current" and "Constant voltage" methods. If we study the constant voltage method first, we will see why the other becomes necessary.
I think we might stay with the idea of a "battery" here - that is a collection of cells that together have a terminal PD (that's the voltage when under a medium discharge load) of say 12 volts. When a lead-acid car battery is in a fully charged state the cells will be slightly over their nominal 2V each. In fact 2.3V is the usual value. There are six cells and so the fully charged voltage will be 13.8V {let's say 14V}. The unloaded voltage is known as the EMF {ElectroMotive Force}.
It's worth saying that a 12V battery made from Ni-Cad {or indeed Ni-M-Hy} cells would need 10 cells to equal 12V. When fully charged these would actually achieve an EMF of 14 to 16 volts.
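Just to pin the arithmetic down, here is a tiny sketch (my own, purely illustrative - no charger datasheet behind it) of the cells-per-pack sums above:

```python
# Illustrative only: cell count and fully charged EMF for a nominal pack
# voltage, using the per-cell figures quoted in the text.
def pack_voltages(nominal_per_cell, charged_per_cell, target_volts):
    cells = round(target_volts / nominal_per_cell)   # whole cells only
    return cells, round(cells * charged_per_cell, 1)

print(pack_voltages(2.0, 2.3, 12.0))   # lead-acid: 6 cells, 13.8V charged
print(pack_voltages(1.2, 1.4, 12.0))   # Ni-Cad: 10 cells, 14.0V charged
```

The 1.4V charged figure for Ni-Cad is just the bottom of the 14 to 16V range divided by the ten cells.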
In either case the voltage from the battery charger needs to be at least as large in order to overcome that standing voltage and thereby impart a charge.
Can you see that, (you must make yourself see this), the charging voltage must be at least equal to 14V. The applied voltage must be able to overcome the standing "surface" charge. We also need to consider some current "limiting" especially if the battery is in a very low state of charge.
Once the battery has "caught up" and is equal to the applied voltage, the charging will cease. It is therefore very safe to leave unattended as overcharging cannot occur. However the rate of charge will diminish over time as the battery voltage rises ever closer to the applied voltage. The total process slows down (to a stop) and takes longer than it really needs to. {Exponential decay}.
This, then, is the reason for the alternative where the applied voltage is much higher and the charge rate is much more constant over time. This method carries with it the dangers of over-charging mentioned earlier in text.
In practice most good chargers will employ a mixture of both methods for what is wanted here is Maximum speed without loss of capacity or any damage. So we will charge at a medium rate until the cell voltages all rise and then "fold" the current back. Can we do any better?
I'm afraid to tell you we can! The cell voltage will rise to a peak when it is fully charged and this works fine if all the cells are in the same condition. In practice they are often not so equal as the chemicals within them age at different rates.
There's another problem. We might not know how large (how much capacity) a given battery actually has. If this is so, we can't use the charging time as a guide. The answer is to measure the temperature. There will be a rise in temperature when the chemical conversion is done. If we sense that, then we might be on the way to some pretty smart battery charging.
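None of this comes from any real charger's firmware, but the two-stage idea - fold the current back as the voltage comes up, and stop outright on a temperature rise - can be sketched with an invented cell model:

```python
# A toy model of the charge-termination logic described above. The cell
# "physics" here is invented purely to make the control logic visible.
def charge(v_cell, v_target, temp, temp_cutoff, i_max, steps=2000):
    history = []
    for _ in range(steps):
        if temp >= temp_cutoff:                       # conversion done -> heat -> stop
            break
        i = i_max * (v_target - v_cell) / v_target    # fold the current back
        if i < 0.01 * i_max:                          # effectively full anyway
            break
        v_cell += 0.01 * i                            # crude: charging raises the EMF
        if v_cell > 0.98 * v_target:                  # a near-full cell starts to warm
            temp += 0.5
        history.append((round(v_cell, 3), round(temp, 1)))
    return history

log = charge(v_cell=11.5, v_target=14.0, temp=20.0, temp_cutoff=30.0, i_max=4.0)
# the run ends on the temperature rise, just short of the 14V target
```

The point is only that temperature, not voltage alone, provides the final "battery is done" signal.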
What more do you really need to know?
Thursday, 9 July 2009
BATTERY TECHNOLOGY
I still get some interest in this subject, what with all the digital cameras and remote controls, clocks and even battery driven tools. Battery use is endless and can be expensive too. It pays to know the best way to cope. There has been tremendous progress in both primary & secondary battery technology over the last few years. I once wrote a little treatise for my sons which I will reproduce here. Actually, it came about because of all their battery driven toys. Since then we have seen Alkaline, Mercuric-Oxide, Zinc-Air, Silver-Oxide, Nickel-Metal-Hydride, Lithium, Li-Ion and today I spotted a Nickel-Zinc type for digital cameras in Boots. Anyway here's how it was a few years ago with one or two updates: -
RE-CHARGEABLE BATTERIES
(A collection of information)
Resume
First of all a recap.
The "battery" is made up of a group of "cells".
There are two classes of such cells: -
PRIMARY, those which cannot be re-charged because the chemical action is not reversible.
SECONDARY, those that can be re-charged because it is.
Efficiency relates the energy that it takes to produce a charged battery to the energy that can then be recovered. (Capacity, by contrast, is measured in Amp-hours - Ah or mAh.)
ELECTRO-MOTIVE FORCE (EMF). This refers to the UN-LOADED terminal voltage, or the actual potential (Voltage) that the cell delivers before there is any load whatsoever. In practice a very light load as would be drawn by a high resistance volt-meter reads it OK. EMF falls as the current drawn is increased and is in proportion to the internal resistance of the cell. This can change over time or with the state of charge, hence the drop off of voltage as cells "flatten." The internal resistance can, in some chemistries, be used as an indication of the state of charge.
POTENTIAL DIFFERENCE (PD) refers to the terminal output voltage under load - when a fairly substantial current is being drawn, say at a 1 Hr rate. This means, for example, that for a 50Ah battery the load would be 50 Amps; and for a 1600mAh battery, 1600mA.
SURFACE CHARGE There is an apparent increase in EMF when a cell is freshly and fully recharged. It reduces quickly after standing or at first discharge. PD (Potential Difference) is the EMF or Voltage that is present when the cell is being made to work (discharge). This is the parameter that counts in practical use. Fortunately, most cells have a fairly constant and predictable terminal voltage which is maintained over their discharge cycle.
CAPACITY refers to the total charge that a cell can deliver, in Ah (Ampere-hours). This unit can be too big for some little cells and so we then use mAh (milli, for one thousandth part). i.e. 1000mA = 1A.
A specific discharge rate is implied usually over 10 hours for large capacity cells, and over 1 hour for small cells. This is sometimes referred to as the "C" or "C1" / "1C" rate. For example, a 1.2 Ah cell will supply 1.2A for 1 hour.
The CAPACITY of a cell will vary with the discharge rate, reducing as the process is speeded up. The HOUR RATE means the current that can be drawn over that period that would just render the cell FLAT - or discharged. "Flat" is sometimes quoted as a PD that is about 15% under the nominal voltage. This gives us 1.05V for Ni-Cad & 1.75V for Lead-Acid.
WATT-HOURS
If you prefer to think in the more familiar Watts measurement for power, multiply the terminal voltage by the current in amps. A 12V car battery of say 50 Ah capacity, can deliver 12 x 50 = 600Watt-hours. It's not much really. In practice, at that rate of discharge, it might well be rather less.
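The two definitions above are just multiplication, but it does no harm to check the quoted figures:

```python
# Plain arithmetic on the figures quoted above.
def one_hour_rate_amps(capacity_ah):
    # At the "1C" (one hour) rate a cell delivers its whole capacity in one hour.
    return capacity_ah

def watt_hours(volts, capacity_ah):
    return volts * capacity_ah

print(one_hour_rate_amps(1.2))   # the 1.2Ah cell: 1.2A for one hour
print(watt_hours(12, 50))        # the 12V, 50Ah car battery: 600 watt-hours
```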
RECHARGING
Optimum re-charging is quoted in relation to the ten hour rate, but with an added EFFICIENCY factor in per cent (%). If the EFFICIENCY is quoted at 40% over the TEN HOUR RATE, then we would proceed with the charging for that extra period of time: 10 hrs + 40% = 14 hrs.
Now if the CAPACITY is say 1Ah at the 10 hour rate (that's 100mA over 10hrs), we charge at that rate over 14 hrs (assume 40% extra for in-efficiency), to achieve full charge. This is the most usual example.
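That worked example, as a small sketch (the 40% figure is the one assumed in the text, not a universal constant):

```python
# Charge at the ten hour rate, extended by the quoted inefficiency percentage.
def charge_plan(capacity_mah, inefficiency_pct=40, base_hours=10):
    current_ma = capacity_mah / base_hours
    hours = base_hours + base_hours * inefficiency_pct / 100
    return current_ma, hours

print(charge_plan(1000))   # the 1Ah cell: (100.0, 14.0) -> 100mA for 14 hours
```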
Fast charge / discharge results in reduced efficiency because the chemical action cannot keep up with the demand and therefore energy is wasted, usually as heat or gas. Carried to extremes this will cause damage. This is similar to what happens when you go on charging for too long and is one reason why it is better to start the re - charge from when the cell is flat. However, there are dangers to having some cell types in a flat condition for long. Most cell types must never be reverse charged. This can happen when cells are connected in a serial stack (or BATTERY), as one or more cells become fully discharged before some of the others.
IT IS VERY IMPORTANT TO UNDERSTAND THIS LAST POINT!
EXAMPLE
A circuit of cells connected in series (+ve to -ve, to increase the voltage) is to supply a bulb (we call this the LOAD). At first all the cells are in a more or less fully charged condition. As time goes by one (or more) of the cells will be the first to be discharged to zero volts at its output terminals. At this point it begins to absorb energy in reverse polarity from the other cells in the circuit that are not yet flat and receives the damaging reverse charge. It is difficult or impossible to rescue cells that are damaged in this way.
The 2Volts / cell LEAD - ACID (as used in car) batteries SULPHATE when areas of their lead plates are no longer in a chemically charged state. {Re-charge every 3 months or keep trickle-charged}.
LEAD-ACID CELLS SHOULD THEREFORE BE STORED IN A CHARGED CONDITION. There are two types of lead involved and it is the negative plate that suffers if they are not fully charged. The white lead-sulphate covers the surface and prevents it being in contact with the electrolyte, which is dilute SULPHURIC ACID - H2SO4. Sulphate is very hard to remove once it has formed, although CALCIUM in the acid can help. For this reason it is far better to make sure that LEAD-ACID batteries are always kept in a charged condition or are re-charged at very frequent intervals. Sulphation leads to a loss of CAPACITY (Ah), as not so much area of material is available for conversion - to absorb the charge. That in turn invites inadvertent over-charging which, although these cells are tolerant of a small "trickle" over-charge, tends to force material from the plates; it falls to the bottom, is lost for chemical conversion and can cause shorts. Perhaps more serious is that when a fully charged lead-acid cell is still receiving charge current, it produces highly explosive HYDROGEN gas!!
Some heavy duty, slow discharge units produced say, for telephone exchange back - up, can last for 30 years !
Nickel-Cadmium (Ni-Cad) cells have some different problems. In many ways they are superior to other SECONDARY re-chargeable cells, and they are certainly less hazardous if only because they contain no acids. They are however, POISONOUS to all life! Their PD is only 1.2V per cell and for a given Ah capacity they will need to be larger in size and heavier than other types.
Some reports say that they are tolerant of persistent overcharging, others say they are not! It may depend upon the construction and re-sealable venting. In actual practice, these batteries do NOT thrive on being continually charged - or over-charged at high current rates. However, they seem happy to be trickle charged. Capacity is adversely affected by high temperatures during charging.
Ni-Cad cells tend to keep their output PD right up to the end of their full discharge, rolling off very suddenly as they go entirely flat. Little warning is given by the terminal voltage and the only real means of knowing the state of charge is to monitor the discharge rate and period. Because these cells should be stored in a discharged state, they are easier to maintain and keep. Their chief draw-back is that they grow DENDRITE hairs of conductive metal when left in a charged state, which short out the material within and prevent re-charging. One possible solution is to "blow" the hairs away (melt them) with a very high instantaneous current pulse of correct polarity, limited duration and magnitude. For cells up to say, 4Ah, use a charged-up electrolytic capacitor (charged to say 12V from a car battery) via a series resistor of about 1 to 10 ohms (to restrict the surge current).
When it is charged, "splash" the capacitor across the cell terminals briefly, then try a few seconds of high charge within normal limits for the cell. As a guide use say 50% of capacity. Once the cell can support its normal 1.2 volts PD for a light current draw, charge normally, but at the 14 hour rate (one tenth of total capacity for 14 hrs). Remember that this treatment formula applies to individual cells and NOT battery packs. If individual cells cannot be isolated a higher voltage will be required, say double (24V for a 12 volt battery), with the risk of damage to perfectly good cells in the pack.
NI-CAD cells also suffer from a phenomenon called 'THE MEMORY EFFECT'. If a cell is boost charged before it has become exhausted, it behaves as though it has a reduced capacity, subsequently discharging to the level it was at when the boost started and then behaving 'FLAT'. This may be related to the dendrite growth that was mentioned earlier. However, official procedure is to make sure the cell is fully discharged before the re-charge begins. This is reasonably easy with a single CELL as a modern 'intelligent' charger can effect a discharge until the cell voltage first reduces and thereby commence the full charge cycle. Because of the reverse charge dangers that can occur in a BATTERY of several cells, this procedure is not without some problems when say, camcorder batteries are to be re-charged. In any case these 'intelligent' chargers have to be told what the full voltage should be, and the Ah capacity. They are completely duped by a battery with a faulty cell in its stack!
STORE Ni-Cad BATTERIES/CELLS IN A FULLY DISCHARGED STATE!
Nickel-Metal-Hydride
The Ni-M-H type of cell has become popular & much cheaper over recent years. These are very similar to the older Ni-Cad, having 1.2V cells, but without the terrors of the memory effect, or indeed long term inactivity. Charge retention on the shelf is improved and occasional top-up boosting does not cause a problem. Nor does a small continuous trickle charge. It would appear that these types don't grow dendrite hairs. CAN BE STORED IN A CHARGED STATE WITH A MODERATE SHELF LIFE. For long term storage, store discharged.
B.J.Greene. Apr99
LI-ION TYPE CELLS ! {26May2005}
These Lithium-ion batteries are best STORED IN A FULLY CHARGED CONDITION. They are also fussy about the temperature, not liking to be charged in any extremes below freezing or above 40°C. Such operation or even storage will reduce capacity and life.
There is no memory effect and the power to weight ratio is very favourable.
Lithium-ion batteries (sometimes abbreviated Li-ion batteries) are a type of rechargeable battery in which a lithium ion moves between the anode and cathode. The lithium ion moves from the anode to the cathode during discharge and in reverse, from the cathode to the anode, when charging.
Lithium ion batteries are common in consumer electronics. They are one of the most popular types of battery for portable electronics, with one of the best energy-to-weight ratios, no memory effect, and a slow loss of charge when not in use. In addition to uses for consumer electronics, lithium-ion batteries are growing in popularity for defense, automotive, and aerospace applications due to their high energy density. However certain kinds of mistreatment may cause Li-ion batteries to explode.
The three primary functional components of a lithium ion battery are the anode, cathode, and electrolyte, for which a variety of materials may be used. Commercially, the most popular material for the anode is graphite. The cathode is generally one of three materials: a layered oxide, such as lithium cobalt oxide, one based on a polyanion, such as lithium iron phosphate, or a spinel, such as lithium manganese oxide, although materials such as TiS2 (titanium disulfide) were originally used.[3] Depending on the choice of material for the anode, cathode, and electrolyte the voltage, capacity, life, and safety of a lithium ion battery can change dramatically. Lithium ion batteries are not to be confused with lithium batteries, the key difference being that lithium batteries are primary batteries containing metallic lithium while lithium-ion batteries are secondary batteries containing an intercalation anode material.
Friday, 8 May 2009
DIGITAL & ANALOGUE
At first the complex sound-wave contours were represented by the shape & extent of the record groove. It just needed to be amplified and applied to an air pump transducer (Speaker). Then the talking films used light patches at the side of the film. This was followed by magnetic tape where the magnetisation depth still represents the sound-wave. These schemes are "ANALOGOUS" to the original wave shape.
Digitization is very different and quite hard to get a handle on. The same sound-wave is repeatedly measured for its instantaneous level and the value is recorded as a binary number or code. This has to be done very quickly and a favourite standard rate is 44.1kHz - comfortably more than twice the highest audible frequency, to ensure that nothing of importance is missed.
The range of levels that the coding system can represent depends on the total number of digits in the number. If it was a decimal system we know that two digits can represent 99 levels, three 999, and so on. {Actually 100 & 1000 since zero is a level too}.
In the binary system each "bit" (for binary digit) can represent either a "0" or a "1" and it turns out that the numbers can get unreasonably long. 1000 levels needs 10 bits. (1024 levels actually). This doesn't fit well with the architecture of most computers and long numbers take lots of time to handle. It is common to place a limit on the maximum number of "quantities" that can be represented.
This "quantization" process can be "shifted" to represent more levels without the need of more bits. That process is not unlike the "SHIFT" key used on a typewriter to switch from upper to lower case letters - WITHOUT the need of more & more keys.
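A minimal sketch of the sample-and-quantize idea (the 440Hz test tone and the mapping onto codes 0 to 1023 are my own choices for illustration, not any particular standard's):

```python
import math

def quantize(samples, bits):
    levels = 2 ** bits                 # 10 bits -> 1024 distinct levels
    # map the -1.0..+1.0 waveform onto the integer codes 0..levels-1
    return [min(levels - 1, int((s + 1.0) / 2.0 * levels)) for s in samples]

rate = 44100                           # the favourite standard sampling rate
wave = [math.sin(2 * math.pi * 440 * t / rate) for t in range(8)]
codes = quantize(wave, bits=10)        # eight samples, each a 10-bit number
```

Once the wave exists only as those integers, copying, cutting and patching become exact operations on numbers rather than approximate operations on a physical groove or tape.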
Advantages of the digital method:
Analogue signals can never be faithfully copied. There is always some degradation from the original. Digitized quantities can be copied exactly, and much more quickly too.
It is useful to be able to manipulate sound tracks. In a computer the binary coded signal can be pitch-changed or speed-changed independently.
Cutting & patching are possible in the software.
Effects (echo, reverb, harmony, tremolo, vibrato) can all be added to the signal without destroying the original sound track.
The necessary physical storage space is less than with other media.
The cost of the optical media (CD/DVD) is very low.
DISADVANTAGES: -
The argument that digital sound sources are "better" is not usually true. This is because practicalities dictate that space-saving devices are used (mp3 files). The sound quality can seem almost too clean as it is possible to remove all "noise" signals completely. It can be argued that the complex compression techniques that are used actually change the sound in subtle ways that are not always to the listener's liking.
The jury is still out on digital longevity. The old bakelite & celluloid records will take some beating in that respect, with some still playing after 100 years. It isn't thought that CD/DVD life will be anything like that long. The compact nature of the track data militates against a long term life. Flash memories have only about a 10 yr life. The magnetic stores might fare a little better in this respect.
Compatibility has always been an issue. Phonographs, gramophones, record players, reel to reel and cassette tape machines have all had a turn. With digital, both the media and the software format might be overtaken. While common software formats may be maintained & even regenerated fairly easily, it is the hardware that poses the biggest problem. Where would you now go, for example, to get played one of the original 8" floppy disks from the 1970s? The 5.25" format is also gone. Even a 3.5" FDD is getting hard to find now. What will the future bring?
Digitization is very different and quite hard to get a handle on. The same sound-wave is repeatedly measured for its instantaneous strength, and each value is recorded as a binary number or code. This has to be done very quickly; the favourite standard rate is 44.1 kHz, i.e. 44,100 measurements per second. Sampling theory requires the rate to be more than twice the highest frequency of interest, so 44.1 kHz comfortably covers the roughly 20 kHz limit of human hearing and ensures that nothing of importance is missed.
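The measuring process above can be sketched in a few lines of Python. This is only an illustration; the 1 kHz test tone and the 10 ms window are hypothetical choices, not from the text:

```python
import math

SAMPLE_RATE = 44_100   # the CD standard: measurements per second
TONE_HZ = 1_000        # a 1 kHz test tone (hypothetical choice)

# Measure the wave's instantaneous strength at regular intervals.
samples = [math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE // 100)]  # 10 ms worth of samples

# Nyquist: a 44.1 kHz rate can represent frequencies up to 22.05 kHz,
# just beyond the ~20 kHz limit of human hearing.
print(len(samples))      # 441 measurements in 10 ms
print(SAMPLE_RATE / 2)   # 22050.0 — the highest representable frequency
```

Each entry in `samples` would then be rounded to a binary code, which is the quantization step discussed next.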

The range of levels that the coding system can represent depends on the total number of digits in the number. In a decimal system two digits can represent 100 levels, three digits 1,000 levels, and so on (zero counts as a level too).
In the binary system each "bit" (for binary digit) can represent either a "0" or a "1", and the numbers can get unreasonably long: 1,000 levels needs 10 bits (1,024 levels, strictly). Long numbers do not always fit the architecture of a computer neatly and take more time to handle, so it is common to place a limit on the maximum number of levels that can be represented.
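The bits-to-levels arithmetic above can be checked with a couple of lines of Python (a sketch; the 16-bit figure is the audio-CD standard, not something stated in the text):

```python
# Each extra bit doubles the number of levels a sample can take.
def levels(bits: int) -> int:
    return 2 ** bits

print(levels(10))   # 1024 — the "1,000 levels" figure in the text
print(levels(16))   # 65536 — the sample depth used on audio CDs
```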

This "quantization" process can be "shifted" to represent more useful levels without the need of more bits. The process is not unlike the SHIFT key on a typewriter, which switches between lower- and upper-case letters WITHOUT the need of more and more keys.
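One established way of getting more useful levels out of the same number of bits is non-linear (companded) quantization, which may be close to what the "shift" remark is getting at. A sketch in Python using the μ-law curve from telephony; the interpretation and the parameter values are assumptions, not from the text:

```python
import math

MU = 255  # the μ-law parameter used in telephone systems (G.711)

def compand(x: float) -> float:
    """Compress a sample in [-1, 1] so quiet signals get finer steps."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def quantize(x: float, bits: int = 8) -> int:
    """Map a companded sample onto one of 2**bits integer codes."""
    n_levels = 2 ** bits
    return round((compand(x) + 1) / 2 * (n_levels - 1))

# Two quiet samples land many codes apart, even though a plain
# linear 8-bit scale would barely separate them.
print(quantize(0.01), quantize(0.02))
```

The effect is that the same 8 bits spend most of their codes on the quiet sounds our ears are most sensitive to, which is why telephony settled on this scheme.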
Advantages of the digital method:
Analogue signals can never be faithfully copied. There is always some degradation from the original. Digitized quantities can be copied exactly, and much more quickly too.
It is useful to be able to manipulate sound tracks. In a computer the binary-coded signal can be pitch-changed or speed-changed independently.
Cutting & patching are possible in the software.
Effects (echo, reverb, harmony, tremolo, vibrato) can all be added to the signal without destroying the original sound track.
The necessary physical storage space is less than with other media.
The cost of the optical media (CD/DVD) is very low.
DISADVANTAGES: -
The argument that digital sound sources are "better" is not usually true. This is because practicalities dictate that space-saving compression formats (mp3 files) are used. The sound quality can seem almost too clean, since it is possible to remove all "noise" signals completely. It can be argued that the complex compression techniques actually change the sound in subtle ways that are not always to the listener's liking.
The jury is still out on digital longevity. The old bakelite & celluloid records will take some beating in that respect, with some still playing after 100 years. It isn't thought that CD/DVD life will be anything like that long; the compact nature of the track data militates against long-term survival. Flash memories are often quoted at only around a 10-year retention life. The magnetic stores might fare a little better in this respect.
Compatibility has always been an issue. Phonographs, gramophones, record players, reel-to-reel and cassette tape machines have all had a turn. With digital, both the media and the software format might be overtaken. While common software formats may be maintained and even regenerated fairly easily, it is the hardware that poses the biggest problem. Where would you now go, for example, to get one of the original 8" floppy disks from the 1980s played? The 5.25" format is also gone, and even a 3.5" FDD is getting hard to find now. What will the future bring?
Wednesday, 15 April 2009
Thermionic Valves
Millions of thermionic valves were made and used long before the phenomenon of the "Cat's Whisker" gave rise to the transistor and the integrated-circuit industry that followed.
A link (below) will take you to a short video that graphically shows how much work is involved in the making of a simple triode valve. Most of the process is very traditional, although the use of some modern technology is also evident. This is a most delightful "silent" French film with absorbing background music.
The glassblowing tradition was what brought the Dutch company Philips to electronics.
http://dailymotion.virgilio.it/video/x3wrzo_fabrication-dune-lampe-triode_tech