Monday 28 December 2009

MORE on MICROPHONES

This entry supplements a previous post on microphones. This time we are going to look at the implications of so-called quality or goodness. To do that we need to recap a few "givens." Although aural acuity varies between individuals just as our vision does, it is a matter of fact that the fairer sex can generally hear much higher frequencies than the men. The usual range for a youngish person is said to be 30 to 20,000Hz at the outside, with the males being some 10 to 20% lower at the high end. It is also true that we can all follow a slowly rising tone up much further than we can hear the same frequency when it is simply presented. As we get older our hearing tails off, particularly in the higher frequency range.

Unfortunately there are some side issues. "Attack" or onset may require the presence of frequencies that lie beyond our perceived range. Another issue concerns "distortion." Most people would agree that distortion occurs when a process imparts new elements to the original sound. It is nonetheless true that some such distortions are thought of as pleasing! Therein lies a very busy little bundle. Surely by that definition even amplification is a "distortion"? Yes, and so it might be argued. The words used to describe processed sounds are nearly all borrowed from existing language - in the main the English language as interpreted by the Americans!

This gives rise to such wonders as "colouration" (coloration if you must).
"Loudness" - "Level" - "Fader" - "Presence" - "Phasing" - "Chorusing" -
"Trim" - "Treble" - "Middle" - "Bass" - "Echo" - "Flanging" - "Reverb"- "Noise"- "Reflections" - etc. etc.
And here's a couple of French ones - "Ambience" - "Envelope"
Then there are the processes:- "Equalisation" - "Parametric EQ" - "Notch Filter" "Graphic EQ" - "Harmonizer-ing" - "Limiting" - "Compression" - Signal to Noise & "SINAD" - "Over-spill" - "Mono" - "Stereo/Binaural" - "Noise-cancelling" - {there'll be more!}.
We haven't got into the musical terms yet, if only because not all sound recording is of music.

Traditional thinking would have it as a prerequisite that a microphone should be able to deal with all the frequencies in the range of our hearing without any undue distortion. However, this does not take into account the peculiarities of the human ear. There is a non-linearity in our hearing that changes with perceived volume or intensity. Then there is the very peculiar selective, directional ability we have to somehow pick out the things we most want to hear. The best example is that of conversation across a noisy room. Some people can listen to, and concentrate upon, just a single instrument from a whole orchestra. Once a recording is made, some of these clever options are denied to us.

This innate ability that we humans have to be selective with what we hear is very interesting - the more so since I have noticed that with advancing years my ability to do this is severely curtailed. Given that we might call the unwanted sounds "Noise," we seem able to deploy some very sophisticated noise-cancelling techniques. We do it without much thought. Those who have sought to understand the mechanism more thoroughly are soon forced to recognise just how very clever the faculty actually is. For example, our ears are directional collectors, but they are displaced to each side of our head, and that allows us to tilt and otherwise align our heads for maximum pick-up of the desired sound. It emerges that we can actually get some unwanted sounds to cancel each other, and the reason for this is to do with time delays in the arrival of the sound at each ear. In some circumstances that delay causes the two sound "waves" to oppose each other, and the energy is dissipated in mutual cancellation. Our brain can do the rest! By the precise swivel of the head we can change the group of frequencies that we "tune in to" from a high group to a low group. Some of the mechanisms are not well understood. {See older posts "EAR 'ERE" at foot of page}.

The point is that once such sounds are recorded and played out to us through a single sound source such as a speaker, we can no longer do that. The microphone will react equally to ALL the sounds unless........

NOISE CANCELLING & CARDIOID MICROPHONES 
To try and get around such problems, engineers have developed all manner of devices. Microphones with directional pick-up. Microphones that do it by noise cancelling - just as we do. Shielded microphones. Limited-sensitivity microphones. Then follow the sound-processing devices: at first developed to improve the selection, then to remove unwanted noise, and finally to change or enhance. The latter class are collectively known as "EFFECTS."
The larger size of condenser microphone reaches back to the earliest days of sound recording and, in particular, broadcasting - certainly back to the 1950s if not earlier. They were said to give an excellent, sought-after result. There is no doubt that they were a triumph in their time, but as technical knowledge has moved on, so has human taste & opinion.

The first condenser microphones had to make use of valve(s) to amplify the tiny signal coming from the transducer - thus the need for a power supply, which became standardised at 48V. The triode valves usually chosen do have a downside in that they produce some "noise," heard as a hiss by those with acute hearing. The studios sought out ways to reduce or mask this tendency with such clever devices as noise-gates and filters. This whole process is usually referred to as "EQ" or Equalisation. An interesting by-product of the valve is said to be the "warmth" that it adds to the sound. It is here that we depart from pure science, as the precise reason for that remains elusive and open to conjecture. For my own part I would put it down to an inbuilt "compression," of which more later.

Certain anomalies now present themselves. While a certain microphone, amplifier, indeed studio, might make your particular guitar, violin, or voice sound the way you wanted it to, another might not! Therein lies a discussion that might yet go on forever. It leads us to such phrases of convenience as "beauty is in the eye of the beholder" and "it's right if you think it's right."

For the would-be sound engineer the availability of so many choices can be a veritable nightmare. Philosophically it might lead to another phrase that I have grown to like: "The good enough sound." This leads us to the study of perceived improvements. One very important aspect of this is known as "SIGNAL to NOISE" - that which we wanted to accentuate or make prominent, versus everything we did not. In measuring what we have achieved in this respect, our old unit of sound intensity, the decibel (dB), makes a re-entry. Because it is difficult to entirely separate the wanted signal from the noise, we often talk in terms of SINAD: the ratio of SIgnal-plus-Noise-And-Distortion to the noise and distortion alone, expressed in dB. Whoops - have we lost you? Think slowly here. We can rarely, if ever, have perfection. We will have to settle for there being some noise, some unwanted sounds. It then becomes a question of masking the unwanted with extra volume or using other rather snazzy devices such as a noise gate. Is it worth a moment's thought? I think so.
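The arithmetic behind these ratios is simple enough. Here is a minimal sketch of my own (not from any particular piece of kit) of a signal-to-noise figure and a SINAD figure, both expressed in dB:

```python
import math

def snr_db(p_signal, p_noise):
    """Signal-to-noise ratio of two powers, expressed in decibels."""
    return 10 * math.log10(p_signal / p_noise)

def sinad_db(p_signal, p_noise_dist):
    """SINAD: signal-plus-noise-and-distortion over the noise and
    distortion alone. We can never measure the signal by itself,
    so the numerator inevitably carries the noise along with it."""
    return 10 * math.log10((p_signal + p_noise_dist) / p_noise_dist)

# A wanted signal 100 times more powerful than the residual noise:
snr = snr_db(100.0, 1.0)        # 20 dB
sinad = sinad_db(100.0, 1.0)    # a shade over 20 dB, noise included
```

Note how close the two figures are once the signal is well clear of the noise floor; it is only with poor ratios that SINAD and plain SNR diverge much.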
ADDENDUM - It can be argued that the various studio processes used to condition the condenser microphone (or any other device, including other types of microphone) will change, and may even reduce, their overall performance.

THE NOISE GATE
This started life in the studios when it was realised that even a low-level hiss (from processing circuits, wiring etc.) can be very annoying in the dead area of an acoustic (anechoic) chamber. The idea is that the signal is clamped, or shut off to absolute quiet, until it has risen to a small but predetermined level called the threshold. Signals over this are allowed through, along with the noise, whose presence they then tend to mask.
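As a sketch of the principle (assuming nothing more than simple floating-point samples), a noise gate is almost a one-liner:

```python
def noise_gate(samples, threshold):
    """Clamp any sample whose magnitude sits below the threshold to
    absolute silence; anything louder passes through, noise and all,
    where the wanted signal tends to mask it."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

hiss   = [0.01, -0.02, 0.015]        # low-level residual hiss
wanted = [0.5, -0.6, 0.55]           # the material we care about
gated = noise_gate(hiss + wanted, threshold=0.05)
# the hiss portion is forced to pure silence; the signal is untouched
```

A real gate also ramps in and out (attack and release times) to avoid clicks; this sketch ignores that refinement.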

DIGITISATION
We spoke in an earlier post about the digitisation process. Technological advances have moved us in this direction because these techniques get us over several difficult hurdles. For example, the digital process can make faithful copies of sound tracks & sequences without any degeneration. We can do complex edits and "condition" the sound in ways that were once impossible {e.g. change tempo but not pitch, and vice-versa}. We can also accomplish in software many of the analogue processes that only very expensive hardware can achieve.

However, there is a slight downside to it in that the best quality of sound needs many samples and levels which leads to large files. While technology has made huge strides in digital storage, we are still challenged to reduce the file-size and to accept the resulting shortfall in sound quality. Many are they who will argue that the process is so magical that they can't tell any difference!

To a purist however, this quality of sound is a non-starter. The tricks used to save space come at a price in terms of real sound quality. This is further compounded by the inferior capabilities of small speakers and some ear-pieces.

One needs to look at the weakest link in the chain to decide just what the other links should be like. Look at ALL the outlets that the source sound will traverse and apply a similar rule. Sometimes, of course, one just has to do as one is told by "the piper who calls the tune" or the depth of our pockets!

COMPRESSION
Let's look at Limiting first. Right from the earliest times it soon became clear that audio equipment was not tolerant to overload. Just look at the grooves of an old plastic record. If the stylus or cutter that is making the master recording swings too far laterally, it will slice into an adjacent groove or channel. Yet we need to consider what happens during very quiet passages when the signal is barely able to modulate the cutter. What we hear then is the hiss of the groove mixed with the quiet signal that we really wanted. If we solve the first problem by saying "there is a LIMIT beyond which we will not let this cutter move" - we get over the first snag. However, the ideal solution would be if we could turn the volume of quiet passages UP and turn DOWN the louder sequences.
Thus is born the idea behind COMPRESSION. That this is now done electronically is of little importance. If these ideas are carried to extremes, the dynamic range is foreshortened and gives rise to an obviously processed sound that is "Punchier." Some folk like that EFFECT. We need to use compression even inside our own ears and we certainly need to guard against overloads when using tape or even during digitisation. In some respects it's not very different from the shock absorber on a car wheel.
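Those two ideas - the hard LIMIT and the gentler COMPRESSION - can be sketched numerically. This is a toy per-sample model of my own, with made-up threshold and ratio figures:

```python
def limit(sample, ceiling):
    """Hard limiter: the 'cutter' is simply never allowed past the ceiling."""
    return max(-ceiling, min(ceiling, sample))

def compress(sample, threshold, ratio):
    """Above the threshold the excess is divided down by `ratio`;
    quiet material below the threshold passes through untouched."""
    mag = abs(sample)
    if mag <= threshold:
        return sample
    out = threshold + (mag - threshold) / ratio
    return out if sample >= 0 else -out

clipped = limit(1.5, 1.0)            # swing stopped dead at 1.0
quiet = compress(0.2, 0.5, 4.0)      # below threshold: left alone
loud = compress(0.9, 0.5, 4.0)       # excess divided by 4: range foreshortened
```

Real compressors work on the signal envelope with attack and release times rather than raw samples, but the foreshortening of dynamic range is the same idea.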

NOISE CANCELLING MICROPHONES
Please refer to my previous BLOG on microphones for a description of how a noise cancelling microphone works. We need to remember that audible sound reaches us through slight pressure variations in the air. We are sensitive to a very large range indeed yet we must guard against overloads because our ears are very fragile.

LOW V HIGH IMPEDANCE
The advantage of a low internal AC resistance (called impedance or "Z") is that incidental noise coming through the shielding (cable etc.) is shunted away. The disadvantage is a much lower signal level in need of more amplification. Go to the button at the end of the page to see another of my previous BLOG pieces, 'ERE' EAR, for more.

Monday 14 September 2009

Automatic Universal Battery Chargers

Whilst almost anything is possible - and more will be - some information is required in advance by the charger. This starts with the voltage and to some extent the capacity. There are quite a few chargers that can cope with varying capacities and to a lesser extent voltages. They do this by measuring the temperature of the cell(s) under charge. Indeed many battery packs have an inbuilt temperature sensor.
When Ni-Cad & Ni-M-Hy type cells are fully charged and/or cannot chemically convert the energy any more, their temperature rises quite substantially. This can be used to "assume" that a fully charged state has now arrived.
Another, though less reliable way is to monitor the rise in cell voltage. There's a snag here! If one or more cells in a pack become shorted (as can happen with dendrite growth in Ni-Cads), the expected final terminal voltage can never be achieved. This is what makes the temperature measuring method much safer. Another issue in its favour is that even cells that have lost their original capacity will be properly detected for a fully charged condition.
Here you see why it is so important that the cells in a battery should be evenly matched. We need them all to be charged/discharged at the same rate & time.
POSTSCRIPT - Another method of detecting when a Ni-Cad or Ni-M-Hy battery is fully charged has come to my notice. It is called the "minus Delta U" calculation, which works thus: when the above battery types are charged with a constant current their voltage rises continuously to a maximum, then falls slightly if the charge is maintained. This fall can be used to terminate the charge.
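The idea is easy to express in code. A sketch of my own, with an illustrative 5mV figure for the detectable sag:

```python
def minus_delta_u(voltage_log, sag_mv=5.0):
    """Return True once the latest reading has sagged `sag_mv`
    millivolts below the peak seen so far - the "minus Delta U"
    end-of-charge signature of Ni-Cad / Ni-M-Hy cells being fed
    with a constant current."""
    peak = max(voltage_log)
    return (peak - voltage_log[-1]) * 1000.0 >= sag_mv

still_rising = [1.35, 1.40, 1.44, 1.45]
over_the_top = still_rising + [1.449, 1.443]   # 7mV below the 1.45V peak
```

A real charger would also debounce the readings, since a single noisy sample can mimic the sag.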
UPDATE - whether covered elsewhere here or not, there's another point to make concerning pulse battery charging. This was a common method on older cars equipped with a dynamo that delivered DC at a voltage varying with RPM. A "Regulator" was fitted to ensure that charging currents did not exceed a sensible level for too long. The period of each pulse was controlled by a voltage- and current-operated solenoid which interrupted the current flow, often diverting it via a resistor. Thus two levels of current were applied in pulses whose duration depended on the battery's state of charge. This practice gave something else that later alternator circuits did not, concerning the formation of sulphate on the plates. Sulphate forms when the battery is in a wholly or partially discharged state, and since it effectively insulates the area of the plates to which it has adhered, it lowers the battery capacity. It now seems that the older pulsed charging systems could dissolve any such formation more effectively than a constant current/voltage system, and this has brought us to the electronic pulse charger. In a car it is the job of the ECU. External to the car, specialised electronically controlled pulsed lead-acid battery charging and conditioning brings those advantages back.
Nor does the story end there. Other battery types, in particular Ni-Cad & Ni-M-Hy, can be charged up much quicker with the pulse method. It seems that the chemical conversion is more efficient when charged in pulses. Of those chargers I have seen, the pulses are about 1A with a duty of 25 to 50% - in other words something like 1 to 2 secs in 4.

  

Thursday 3 September 2009

Battery Charging

Oh dear - the need to tackle this in a bit more detail arises! Please read the previous section first. There are several ways to charge. We'll get rid of two straight away: "Trickle" & "Fast".

These are each the antithesis of the other. A "trickle" refers to a current which can be continually applied without any damage. Such damage occurs through overheating or, in the extreme, chemical "gassing".

A "Fast" charge always does damage and is the trade off for a more immediate restoration to use.

The two most common methods of charging are referred to as the "Constant current" and "Constant voltage" methods. If we study the constant voltage method first, we will see why the other becomes necessary.

I think we might stay with the idea of a "battery" here - that is, a collection of cells that together have a terminal PD (that's the voltage when under a medium discharge load) of say 12 Volts. When a lead-acid car battery is in a fully charged state the cells will be slightly over their nominal 2V each - in fact 2.3V is the usual value. There need to be six cells, and so the fully charged voltage will be 13.8V {Let's say 14V}. The unloaded voltage is known as the EMF {ElectroMotive Force}.

It's worth saying that a 12V battery made from Ni-Cad, {or indeed Ni-M-Hy}, cells would need 10 cells. {to equal 12V}. When fully charged these would actually achieve an EMF of 14 to 16Volts.

In either case the voltage from the battery charger needs to be at least as large in order to overcome that standing voltage and thereby impart a charge.

Can you see that (you must make yourself see this) the charging voltage must be at least equal to 14V? The applied voltage must be able to overcome the standing "surface" charge. We also need to consider some current "limiting," especially if the battery is in a very low state of charge.
Once the battery has "caught up" and its voltage equals the applied voltage, the charging will cease. It is therefore very safe to leave unattended, as overcharging cannot occur. However, the rate of charge will diminish over time as the battery voltage rises and the two become ever more equal. The whole process slows down (to a stop) and takes longer than it really needs to. {Exponential decay}.
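The tail-off is easy to picture if we treat the charger-battery loop as nothing but the voltage headroom across the internal and wiring resistance. A crude model of my own, with invented figures:

```python
def cv_charge_current(v_supply, v_battery, r_ohms):
    """Current a constant-voltage charger can force into the battery:
    just the remaining voltage headroom divided by the loop resistance,
    shrinking to nothing as the two voltages equalise."""
    return (v_supply - v_battery) / r_ohms

# The battery "catching up" with a 14.4V supply through 0.5 ohms:
currents = [cv_charge_current(14.4, v, 0.5) for v in (11.5, 13.0, 14.0, 14.4)]
# roughly 5.8A, then 2.8A, then 0.8A, then nothing at all
```

A real battery is not a simple resistor, of course, but the ever-shrinking headroom is exactly why the last part of a constant-voltage charge takes so long.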

This, then, is the reason for the alternative, where the applied voltage is much higher and the charge rate is much more constant over time. This method carries with it the dangers of over-charging mentioned earlier in the text.

In practice most good chargers will employ a mixture of both methods for what is wanted here is Maximum speed without loss of capacity or any damage. So we will charge at a medium rate until the cell voltages all rise and then "fold" the current back. Can we do any better?

I'm afraid to tell you we can! The cell voltage will rise to a peak when it is fully charged, and this works fine if all the cells are in the same condition. In practice they are often not so equal, as the chemicals within them age at different rates.

There's another problem. We might not know how large (how much capacity) a given battery actually has. If this is so, we can't use the charging time as a guide. The answer is to measure the temperature. There will be a rise in temperature when the chemical conversion is done. If we sense that, then we might be on the way to some pretty smart battery charging.
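A charger built on that idea needs only to watch for a sustained climb in the temperature readings. A minimal sketch of my own - the window size and rise figure are purely illustrative:

```python
def charge_complete(temps_c, window=3, min_rise=1.0):
    """End-of-charge test: True when each of the last `window` readings
    has climbed by at least `min_rise` degC over the one before - the
    sign that the cell is now dumping the energy as heat rather than
    converting it chemically."""
    if len(temps_c) < window:
        return False
    recent = temps_c[-window:]
    return all(b - a >= min_rise for a, b in zip(recent, recent[1:]))

normal  = [25.0, 25.2, 25.1, 25.3]      # gentle drift: still charging
heating = [25.0, 25.3, 27.0, 29.5]      # sharp sustained rise: stop now
```

Insisting on a sustained rise, rather than a single hot reading, is what makes the method robust against a warm room or a noisy sensor.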

What more do you really need to know?

Thursday 9 July 2009

BATTERY TECHNOLOGY

I still get some interest in this subject, what with all the digital cameras and remote controls, clocks and even battery driven tools. Battery use is endless and can be expensive too. It pays to know the best way to cope. There has been tremendous progress in both primary & secondary battery technology over the last few years. I once wrote a little treatise for my sons which I will reproduce here. Actually, it came about because of all their battery driven toys. Since then we have seen alkaline, Mercuric-Oxide, Zinc-Air, Silver Oxide, Nickel MetalHydride, Lithium, Li-Ion and today I spotted a Nickel-Zinc type for digital cameras in Boots. Anyway here's how it was a few years ago with one or two updates: -

RE-CHARGEABLE BATTERIES
(A collection of information)
Resume
First of all a recap.
The "battery" is made up of a group of "cells".
There are two classes of such cells: -

PRIMARY, those which cannot be re-charged because the chemical action is not reversible.

SECONDARY, those that can be re-charged because it is.
Efficiency relates the energy that it takes to produce a charged battery to the energy that can then be recovered. Capacity is measured in Amp-hours (Ah) or mAh.

ELECTRO-MOTIVE FORCE (EMF). This refers to the UN-LOADED terminal voltage, or the actual potential (Voltage) that the cell delivers before there is any load whatsoever. In practice the very light load drawn by a high-resistance volt-meter reads it well enough. The terminal voltage falls as the current drawn is increased, in proportion to the internal resistance of the cell. That resistance can change over time or with the state of charge, hence the drop-off of voltage as cells "flatten." The internal resistance can, in some chemistries, be used as an indication of the state of charge.

POTENTIAL DIFFERENCE (PD) refers to the terminal output voltage under load - when a fairly substantial current is being drawn, say at a 1 Hr rate. This means, for example, that for a 50Ah battery the load would be 50 Amps; and for a 1600mAh battery, 1600mA.

SURFACE CHARGE There is an apparent increase in EMF when a cell is freshly and fully recharged. It reduces quickly after standing or at first discharge. PD (Potential Difference) is the EMF or Voltage that is present when the cell is being made to work (discharge). This is the parameter that counts in practical use. Fortunately, most cells have a fairly constant and predictable terminal voltage which is maintained over their discharge cycle.

CAPACITY refers to the total energy that a cell contains in Ah (Ampere - hours). This can be too big for some little cells and so we then use mAh (for milli or one thousandth part). i.e. 1000mA = 1A.
A specific discharge rate is implied usually over 10 hours for large capacity cells, and over 1 hour for small cells. This is sometimes referred to as the "C" or "C1" / "1C" rate. For example, a 1.2 Ah cell will supply 1.2A for 1 hour.
The CAPACITY of a cell will vary with the discharge rate, reducing as the process is speeded up. The HOUR RATE means the current that can be drawn over that period which would just render the cell FLAT - or discharged. The flat point is sometimes quoted as a PD about 15% under the nominal voltage. This gives us 1.05V for Ni-Cad & 1.75V for Lead-Acid.

WATT-HOURS
If you prefer to think in the more familiar Watts measurement for power, multiply the terminal voltage by the capacity in Amp-hours. A 12V car battery of say 50Ah capacity can deliver 12 x 50 = 600 Watt-hours. It's not much really. In practice, at that rate of discharge, it might well be rather less.
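The sum above, as a one-liner:

```python
def watt_hours(volts, amp_hours):
    """Stored energy as the post reckons it: terminal voltage x capacity."""
    return volts * amp_hours

energy = watt_hours(12, 50)   # 600 watt-hours for the car battery above
```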

RECHARGING
Optimum re-charging is quoted in relation to the ten hour rate, but with an added EFFICIENCY factor in per cent (%). If the EFFICIENCY is quoted at 40% over the TEN HOUR RATE, then we proceed with the charging for that extra period of time: 10 hrs + 40% = 14 hrs.
Now if the CAPACITY is say 1Ah at the 10 hour rate (that's 100mA over 10 hrs), we charge at that rate over 14 hrs (assuming 40% extra for inefficiency) to achieve full charge. This is the most usual example.
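That rule of thumb is mechanical enough to put into a few lines. A sketch of my own; the ten-hour rate and 40% figure are simply the defaults from the example above:

```python
def charge_plan(capacity_mah, hour_rate=10, inefficiency_pct=40):
    """The recipe above: charge at the ten-hour-rate current, and
    extend the time by the quoted inefficiency percentage."""
    current_ma = capacity_mah / hour_rate
    hours = hour_rate + hour_rate * inefficiency_pct / 100.0
    return current_ma, hours

current_ma, hours = charge_plan(1000)   # 100mA for 14 hours, as above
```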
Fast charge/discharge results in reduced efficiency because the chemical action cannot keep up with the demand, and energy is therefore wasted, usually as heat or gas. Carried to extremes this will cause damage. This is similar to what happens when you go on charging for too long, and is one reason why it is better to start the re-charge when the cell is flat. However, there are dangers in leaving some cell types in a flat condition for long. Most cell types must never be reverse-charged. This can happen when cells are connected in a serial stack (or BATTERY), as one or more cells become fully discharged before the others.

IT IS VERY IMPORTANT TO UNDERSTAND THIS LAST POINT!

EXAMPLE
A circuit of cells connected in series (+ve to -ve, to increase the voltage) is to supply a bulb (we call this the LOAD). At first all the cells are in a more or less fully charged condition. As time goes by, one (or more) of the cells will be the first to be discharged to zero Volts at its output terminals. At this point it begins to absorb energy in reverse polarity from the other cells in the circuit that are not yet flat, and receives the damaging reverse charge. It is difficult or impossible to rescue cells that are damaged in this way.
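A crude numerical model (entirely my own, with invented capacities) shows why the weakest cell is the one that suffers:

```python
def series_discharge(charges_mah, load_ma, minutes):
    """In a series stack the same current flows through every cell, so
    each loses an identical amount of charge. A weak cell hits zero
    first and is then driven negative - i.e. reverse-charged - by the
    still-healthy cells around it."""
    drawn = load_ma * minutes / 60.0
    return [c - drawn for c in charges_mah]

pack = [1000.0, 1000.0, 400.0]          # one tired cell in the stack
after = series_discharge(pack, load_ma=500, minutes=60)
# [500.0, 500.0, -100.0]: the 400mAh cell is now being reverse-charged
```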

The 2Volts / cell LEAD - ACID (as used in car) batteries SULPHATE when areas of their lead plates are no longer in a chemically charged state. {Re-charge every 3 months or keep trickle-charged}.

LEAD-ACID CELLS SHOULD THEREFORE BE STORED IN A CHARGED CONDITION. There are two types of lead involved, and it is the negative plate that suffers if they are not fully charged. The white lead-sulphate covers the surface and prevents it being in contact with the electrolyte, which is dilute SULPHURIC ACID - H2SO4. Sulphate is very hard to remove once it has formed, although CALCIUM in the acid can help. For this reason it is far better to make sure that LEAD-ACID batteries are always kept in a charged condition or are re-charged at very frequent intervals. Sulphation leads to a loss of CAPACITY (Ah), as not so much area of material is available for conversion - to absorb the charge. This can lead to inadvertent overcharging which, although these batteries are tolerant of a small "trickle" overcharge, tends to force material from the plates; it falls to the bottom, is lost for chemical conversion, and can cause shorts. Perhaps more serious is that when a fully charged lead-acid cell is still receiving charge current, it produces highly explosive HYDROGEN gas!!
Some heavy duty, slow discharge units produced say, for telephone exchange back - up, can last for 30 years !

Nickel-Cadmium (Ni-Cad) cells have some different problems. In many ways they are superior to other SECONDARY re-chargeable cells, and they are certainly less hazardous if only because they contain no acids. They are however, POISONOUS to all life! Their PD is only 1.2V per cell, and for a given Ah capacity they will need to be larger in size and heavier than other types.

Some reports say that they are tolerant of persistent overcharging, others say they are not! It may depend upon the construction and re-sealable venting. In actual practice, these batteries do NOT thrive on being continually charged - or over-charged at high current rates. However, they seem happy to be trickle charged. Capacity is adversely affected by high temperatures during charging.

Ni-Cad cells tend to keep their output PD right up to the end of their full discharge, rolling off very suddenly as they go entirely flat. Little warning is given by the terminal voltage, and the only real means of knowing the state of charge is to monitor the discharge rate and period. Because these cells should be stored in a discharged state, they are easier to maintain and keep. Their chief drawback is that they grow DENDRITE hairs of conductive metal when left in a charged state, which short out the material within and prevent re-charging. One possible solution is to "blow" the hairs away (melt them) with a very high instantaneous current pulse of correct polarity, limited duration and magnitude. For cells up to say 4Ah, use an electrolytic capacitor (charged say to 12V from a car battery via a series resistor of about 1 to 10 ohms, to restrict the surge current).
When it is charged, "splash" the capacitor across the cell terminals briefly, then try a few seconds of high charge within normal limits for the cell - as a guide use say 50% of capacity. Once the cell can support its normal 1.2 Volts PD for a light current draw, charge normally, but at the 14 hour rate (one tenth of total capacity for 14 hrs). Remember that this treatment formula applies to individual cells and NOT battery packs. If individual cells cannot be isolated, a higher voltage will be required - say double (24V for a 12 Volt battery) - with the risk of damage to perfectly good cells in the pack.
NI-CAD cells also suffer from a phenomenon called 'THE MEMORY EFFECT'. If a cell is boost-charged before it has become exhausted, it behaves as though it has a reduced capacity, subsequently discharging to the level it was at when the boost started and then behaving 'FLAT'. This may be related to the dendrite growth that was mentioned earlier. The official procedure is to make sure the cell is fully discharged before the re-charge begins. This is reasonably easy with a single CELL, as a modern 'intelligent' charger can effect a discharge until the cell voltage first reduces and thereby commence the full charge cycle. Because of the reverse-charge dangers that can occur in a BATTERY of several cells, this procedure is not without some problems when say, camcorder batteries are to be re-charged. In any case these 'intelligent' chargers have to be told what the full voltage should be, and the Ah capacity. They are completely duped by a battery with a faulty cell in its stack!
STORE Ni-Cad BATTERIES/CELLS IN A FULLY DISCHARGED STATE!

Nickel-Metal-Hydride
The Ni-M-Hy type of cell has become popular & much cheaper over recent years. These are very similar to the older Ni-Cad, having 1.2V cells, but without the terrors of the memory effect, or indeed of long-term inactivity. Shelf-life charge retention is improved, and occasional top-up boosting does not cause a problem. Nor does a small continuous trickle charge. It would appear that these types don't grow dendrite hairs. CAN BE STORED IN A CHARGED STATE WITH A MODERATE SHELF LIFE. For long-term storage, store discharged.
B.J.Greene. Apr99

LI-ION TYPE CELLS ! {26May2005}
These Lithium-ion batteries are best STORED IN A FULLY CHARGED CONDITION. They are also fussy about the temperature, not liking to be charged in any extremes below freezing or above 40°C. Such operation or even storage will reduce capacity and life.
There is no memory effect and the power to weight ratio is very favourable.

Lithium-ion batteries (sometimes abbreviated Li-ion batteries) are a type of rechargeable battery in which a lithium ion moves between the anode and cathode. The lithium ion moves from the anode to the cathode during discharge and in reverse, from the cathode to the anode, when charging.
Lithium ion batteries are common in consumer electronics. They are one of the most popular types of battery for portable electronics, with one of the best energy-to-weight ratios, no memory effect, and a slow loss of charge when not in use. In addition to uses for consumer electronics, lithium-ion batteries are growing in popularity for defense, automotive, and aerospace applications due to their high energy density. However certain kinds of mistreatment may cause Li-ion batteries to explode.
The three primary functional components of a lithium ion battery are the anode, cathode, and electrolyte, for which a variety of materials may be used. Commercially, the most popular material for the anode is graphite. The cathode is generally one of three materials: a layered oxide, such as lithium cobalt oxide, one based on a polyanion, such as lithium iron phosphate, or a spinel, such as lithium manganese oxide, although materials such as TiS2 (titanium disulfide) were originally used. Depending on the choice of material for the anode, cathode, and electrolyte the voltage, capacity, life, and safety of a lithium ion battery can change dramatically. Lithium ion batteries are not to be confused with lithium batteries, the key difference being that lithium batteries are primary batteries containing metallic lithium while lithium-ion batteries are secondary batteries containing an intercalation anode material.

Friday 8 May 2009

DIGITAL & ANALOGUE

At first the complex sound-wave contours were represented by the shape & extent of the record groove. It just needed to be amplified and applied to an air pump transducer (Speaker). Then the talking films used light patches at the side of the film. This was followed by magnetic tape where the magnetisation depth still represents the sound-wave. These schemes are "ANALOGOUS" to the original wave shape.
Digitisation is very different and quite hard to get a handle on. The same sound-wave is repeatedly measured for its instantaneous strength, and the value is recorded as a binary number or code. This has to be done very quickly, and a favourite standard rate is 44.1kHz. This is to ensure that nothing of importance is missed.
The range of levels that the coding system can represent depends on the total number of digits in the number. If it was a decimal system we know that two digits can represent 99 levels, three 999, and so on. {Actually 100 & 1000 since zero is a level too}.
In the binary system each "bit" (for binary digit) can represent either a "0" or a "1" and it turns out that the numbers can get unreasonably long. 1000 levels needs 10 bits. (1024 levels actually). This doesn't fit well with the architecture of most computers and long numbers take lots of time to handle. It is common to place a limit on the maximum number of "quantities" that can be represented.
This "quantization" process can be "shifted" to represent more levels without the need of more bits. That process is not that unlike the "SHIFT" keys used on a typewriter to switch from upper to lower case letters - WITHOUT the need of more & more keys.
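The sampling-and-quantizing idea can be sketched in a few lines of Python. The 1kHz test tone and 16-bit depth below are just illustrative figures (16 bits happens to be the CD standard):

```python
import math

SAMPLE_RATE = 44100   # samples per second - the standard rate mentioned above
BITS = 16             # bits per sample; CD audio uses 16

def quantize(value, bits=BITS):
    """Map a signal value in the range -1.0..+1.0 to the nearest integer level."""
    full_scale = 2 ** (bits - 1) - 1   # 32767 for 16 bits
    return int(round(value * full_scale))

# Sample one cycle of a 1 kHz test tone at the 44.1 kHz rate
samples = [quantize(math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE))
           for n in range(SAMPLE_RATE // 1000)]
print(len(samples), "samples in one cycle")
```

So a 1kHz tone gets measured 44 times per cycle, each measurement squeezed into one of 65,536 levels.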
Advantages of the digital method:
Analogue signals can never be faithfully copied. There is always some degradation from the original. Digitized quantities can be copied exactly, and much more quickly too.
It is useful to be able to manipulate sound tracks. In a computer the binary coded signal can be pitch-changed or speed-changed independently.
Cutting & patching are possible in the software.
Effects (echo, reverb, harmony, tremolo, vibrato) can all be added to the signal without destroying the original sound track.
The necessary physical storage space is less than with other media.
The cost of the optical media (CD/DVD) is very low.
DISADVANTAGES: -
The argument that digital sound sources are "better" is not usually true. This is because practicalities dictate that space-saving devices are used (mp3 files). The sound quality can seem almost too clean, as it is possible to remove all "noise" signals completely. It can be argued that the complex compression techniques that are used actually change the sound in subtle ways that are not always to the listener's liking.
The jury is still out on digital longevity. The old bakelite & celluloid records will take some beating in that respect, with some still playing after 100 years. It isn't thought that CD/DVD life will be anything like that long. The compact nature of the track data militates against a long term life. Flash memories have only about a 10 yr life. The magnetic stores might fare a little better in this respect.
Compatibility has always been an issue. Phonographs, gramophones, record players, reel to reel and cassette tape machines have all had a turn. With digital both the media and the software format might be overtaken. While common software formats may be maintained & even regenerated fairly easily, it is the hardware that poses the biggest problem. Where would you now go, for example, to get played one of the original 8" floppy disks? The 5.25" format is also gone. Even a 3.5" FDD is getting hard to find now. What will the future bring?

Wednesday 15 April 2009

Thermionic Valves

Millions of thermionic valves were made and used long before the phenomenon of the "Cat's Whisker" gave rise to the transistor and the integrated-circuit industry that followed on.

A link (below) will take you to a short video that graphically shows how much work is involved in the making of a simple triode valve. Most of the process is very traditional, although the use of some modern technology is also evident. This is a most delightful "silent" French film with absorbing background music.

The glassblowing tradition was what brought the Dutch company Philips to electronics.

http://dailymotion.virgilio.it/video/x3wrzo_fabrication-dune-lampe-triode_tech

Monday 30 March 2009

MICROPHONE TECHNOLOGY

PHANTOM POWER

First of all we need to consider & understand the operation of a capacitor microphone. A thin plastic diaphragm coated with gold or aluminium is stretched over a shallow hollow cavity which has a flat metal back-plate. These two plates form a capacitor of some 5 to 75pF {nominally 20 to 30pF}. A polarising charging voltage is provided by a DC source. Electrostatic attraction keeps the diaphragm taut, but sound waves that impinge upon it cause a small variation in the capacitance which varies in sympathy with the air pressure waves. The output impedance is very high at around 100Mohms. To avoid the loss of HF a long cable cannot be used, and a pre-amplifier mounted very close by is required. The low mass and inertia of the diaphragm give a very flat and wide response, while the output is high because of the pre-amp.
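Why the capsule needs its pre-amplifier right next door follows from the reactance of such a tiny capacitance. A quick sketch, using the nominal 20pF figure from above:

```python
import math

def capacitive_reactance(freq_hz, cap_farads):
    """Xc = 1 / (2*pi*f*C), in ohms."""
    return 1.0 / (2 * math.pi * freq_hz * cap_farads)

# A nominal 20 pF capsule at 1 kHz
xc = capacitive_reactance(1_000, 20e-12)
print(f"{xc / 1e6:.1f} Mohm")  # around 8 Mohm - hence the 100 Mohm-class load needed
```

With a source impedance in the megohms, even a metre or two of cable capacitance would swamp the signal.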
The need for power is a drawback, and ways and means have been sought to provide the small but necessary power from within the amplifier with which the microphone is used. This is known as "Phantom Powering". The generally adopted standard is for a 48V DC supply. This is sometimes obtained within the actual microphone from a small battery. Otherwise it must come from the amplifier itself, which might mean having extra wires. This is often overcome by the following circuitry.
The positive is taken to a centre tap in the input transformer or to the junction of a pair of resistors across the winding. Another similar method avoids the disadvantage of having to have the screen connected at both ends, a practice known to induce hum & noise pick-up. In this circuit both the signal wires are used to carry the dc power and the screen does not form a part of any circuit.

This is achieved by the circuit configuration known as A-B powering thus: See how cleverly the routes for the ac signal and the DC power are preserved over the same wires, whilst the DC is blocked from the ac signal path with scarcely any loading.

There is much more to know about microphones which all have their pros & cons. We have to weigh up their performance in several important ways: - Cost, performance, ruggedness, weight, size and pick-up qualities are the main considerations.
Since the latter may be the most obscure we will start with that. We may simply need a microphone that will pick up in an even way all around its site. Or we may need a performance that is more DIRECTIONAL. Here is a diagram that shows the most popular choices.

"Directivity" - "Rejection ratio" - "Discrimination" - "Cancellation" and "Front to back ratio" are common terms used. Typical values are 15 to 20dB, which is roughly a six- to tenfold difference in amplitude.

A is Omni-directional
B has a cardioid pick up
C is a super cardioid
D is a figure of eight velocity
E is a gun/interference tube type.
The latter is a very directional specialised type mainly used in noisy crowded rooms. You point it at what you want to hear. An open ended tube with a series of slots along one side is attached to a cardioid microphone. It operates by phase-displacement cancellation, working down to the frequency whose half wavelength equals the tube length.
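That half-wavelength cut-off is easy to estimate. The 25cm tube length below is just an assumed example:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def interference_tube_cutoff(tube_length_m):
    """Frequency at which half a wavelength equals the tube length.
    Below this the interference tube gives little directional benefit."""
    return SPEED_OF_SOUND / (2 * tube_length_m)

print(round(interference_tube_cutoff(0.25)), "Hz")  # a 25 cm tube: 686 Hz
```

So a hand-held gun microphone is only truly directional from the upper bass range upward; making it work lower would need an impractically long tube.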

NOISE CANCELLATION
It often happens that unwanted background noise is picked up. There have been, and still are, novel ways to deal with it. Two microphones can be connected in antiphase to produce almost absolute cancelling. Announcements are made through one or the other from close proximity and this balance is overcome. Some capacitor units have two diaphragms facing front & rear. They each have a cardioid pick-up area. Polarising voltages can be variously switched off or reversed to form pick-up lobes.

EAR - 'ERE

HUMAN EAR - 'ERE
The final link in the sound reproduction chain is the human ear. How can you ignore the basic knowledge of it if you like or make music? It is a vital element to all those involved in audio engineering too. A read of this will cast doubt on Darwinian ideas of "natural selection".

DIRECTIVITY
Sounds from a source situated to one side of the listener arrive at the furthest ear fractionally later than at the nearest. Thus there is a delay in phase which the brain interprets in terms of direction. Long wavelengths (lower frequencies/notes) have less phase shift than the higher frequencies, which is why it is much more difficult to locate the source of a low frequency note. The rattle of a box of matches is a bona-fide test used to demonstrate this.
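The size of that inter-ear delay can be estimated with a simple straight-line model. The 18cm ear spacing is an assumed typical figure, and real heads add a little extra path length around the skull:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
EAR_SPACING = 0.18       # metres between the ears (assumed typical figure)

def interaural_delay_s(angle_deg):
    """Extra travel time to the far ear for a source angle_deg off-centre,
    using the simple straight-line model."""
    return EAR_SPACING * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# Source fully to one side: about half a millisecond
print(f"{interaural_delay_s(90) * 1000:.2f} ms")
```

Half a millisecond is a whole cycle at 2kHz but only a tiny fraction of a cycle at 100Hz, which is exactly why low notes are so hard to locate.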

Refer now to the drawing of the human ear below. The convolutions of the pinna or outer ear produce reflections and phase delays which differ according to the angle of incidence with which the sound wave arrives. {A wave in this context is a variation in air pressure of a cyclic nature}. This also aids in direction location, especially, it is believed, in the vertical registration of the source. Again only the higher frequencies are effectively so identified. At mid frequencies the masking effect of the head plays a part by introducing small differences in amplitude.

IMPEDANCE MATCHING
The ear drum or tympanum vibrates in sympathy with the received sound and transmits the vibrations through three small bones called the ossicles in the middle ear. These are the hammer, anvil and stirrup {or stapes as some say}. The first two form a pair of pivoted levers that produces a nominal leverage ratio of 3:1, and the third communicates vibrations to the window of the inner ear. The ratio matches the mechanical impedance of the ear drum to that of the window, so obtaining optimum power transfer.
The tiny bones are held in place by small muscles which permit the pivotal positions, and hence the sensitivity, to alter. Thus the sensitivity is not linear, following a logarithmic law, being at a maximum for quiet sounds reducing to a minimum for loud ones.
This automatic volume control allows the ear to process an enormous range of sound intensities, in the order of 10 to the power of 12, which is 1 million X 1 million = 1 trillion to one.
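That 10 to the power 12 ratio is where the familiar "120dB" dynamic range of the ear comes from:

```python
import math

# Ratio of the loudest tolerable to the quietest audible sound intensity
intensity_ratio = 10 ** 12

# Power-like quantities use the 10*log10 convention
decibels = 10 * math.log10(intensity_ratio)
print(decibels, "dB")  # 120.0 dB
```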
The Eustachian tube equalises the atmospheric pressure on both sides of the eardrum by venting the middle ear to the throat.

FREQUENCY DIVISION
The inner ear consists of a long fluid-filled tube that is rolled up like a shell, called the cochlea. A horizontal basilar membrane divides the tube along its length into upper & lower compartments except at the sealed far end where there is a connecting gap called the helicotrema. The compartments are termed the scala vestibuli and scala tympani respectively.
Sound vibrations are applied to the Oval window at the entrance to the upper chamber by the Stirrup bone. From there they travel through the fluid to the gap at the end, then down into and along the lower compartment and back to its round window where they are absorbed. En route they pass through thousands of sensitive hair cells located on the upper surface of the membrane, which are linked to nerve fibres. These cells respond to different frequencies and are divided into 24 bands with one third octave spacings, starting with the highest band near to the entrance and the lowest at the far end. Individual bands occupy about 1.3mm of space, each being termed a bark.
The centre frequencies of the bands start at 50Hz for No.1 and go up to 13.4kHz for band 24. Cut off outside of each band is sharp at the lower side but more gradual as the bands rise in frequency. The lowest band is 100Hz wide while the highest is 3.5kHz wide. The overall response in a healthy person under 30yrs is 16Hz to 16kHz, with the girls generally having better HF hearing than the boys.
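For the curious, Zwicker's well-known approximation converts any frequency to its bark number, and it lands close to the band figures quoted above:

```python
import math

def bark(freq_hz):
    """Zwicker's approximation for the critical-band (bark) number."""
    return (13 * math.atan(0.00076 * freq_hz)
            + 3.5 * math.atan((freq_hz / 7500.0) ** 2))

for f in (50, 1000, 13500):
    print(f, "Hz ->", round(bark(f), 1), "bark")
```

1kHz comes out at about 8.5 bark, and the top of the audible range lands near band 24, as the text says.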

FREQUENCY RESPONSE
The frequency response of the ear is not flat, being at a maximum from 2 to 4KHz. The rest of the curve varies according to the sound level. At lower volumes the response to both treble & bass is less which is why some audio units boost these at low listening levels. Speech frequencies come well within the overall range but music encompasses greater range in both frequency & loudness. These contours show the sound pressures required to produce sensations of equal volume at various frequencies. They are known as "Equal Loudness Contours" and are the inverse of frequency response curves.
The contours are an averaged sample taken from an age group of 18 to 25 yrs.

HEARING DAMAGE
Temporary damage to hearing sensitivity results from exposure to loud sounds. It can become permanent if the exposure is prolonged. Damage is greater if the sound contains percussive energy bursts. Impairment is in the 4 kHz range (bark 18 in the cochlea) irrespective of the nature of the sound that caused the damage. As exposure extends the damage tends to reach down to 1 kHz.
Industry regs. give the following maximum exposures shown in the chart. It should be noted that disco music and headphone listening levels in excess of 100dBA can easily be realised. The dangers are obvious.

PRESBYCUSIS
This is an almost inevitable condition where hearing deteriorates with age. It starts slowly from 20 - 30 yrs and worsens over time. Exposure to loud noise plays its part. This chart shows the expected deterioration.

Wednesday 18 March 2009

DE-Gaussing

Johann Carl Friedrich Gauss was a German mathematician & scientist (1777-1855) whose name was adopted for the CGS unit of magnetic flux density {magnetisation}. Sometimes bits of metal (iron/chrome) become magnetised when we don't want them to. It happens in tape recorders. If you bring those parts under the influence of a powerful "saturating" and alternating magnetic field that is made to decay away relatively slowly, their magnetism will be removed - "De-Gaussed".
Magnetite is a naturally occurring ferrite commonly called lodestone. It exhibits "ferrimagnetism" as opposed to "ferromagnetism". "Ferroxcube" is a tradename for similar man-made magnetic ceramic materials - a ceramic containing iron oxide particles mixed with other metal oxides. It is very useful in conjunction with inductors at higher frequencies.

Tuesday 3 March 2009

The Theremin

The theremin was invented in 1919 by a Russian physicist named Lev Termen. It is not like any other instrument, since it is played without being touched. Two antennas protrude from the theremin - one controlling pitch, and the other controlling volume. The electronic oscillators are tuned by hand-capacitance effects. As a hand approaches the vertical antenna, the pitch gets higher. Approaching the horizontal antenna makes the volume softer. They were originally built in the 1930's by RCA, GE & Westinghouse and found immediate use in the film industry because of their weird ethereal sounds. Robert Moog was building these in the 60's before the synthesizer.
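The principle can be sketched numerically: the pitch oscillator is an LC resonator, and a hand near the antenna adds a few picofarads. All the component values below are hypothetical:

```python
import math

def lc_freq_hz(l_henrys, c_farads):
    """Resonant frequency of an LC oscillator: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henrys * c_farads))

# Hypothetical tank circuit: 1 mH with 100 pF, around 500 kHz
base = lc_freq_hz(1e-3, 100e-12)
# A hand near the antenna adds, say, 5 pF
hand = lc_freq_hz(1e-3, 105e-12)
print(f"shift: {(base - hand) / 1e3:.1f} kHz")
```

In a real theremin the note you hear is the beat (difference) between this oscillator and a fixed one, so a shift of a few kHz is enough to cover the whole musical range.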

Musical Electronics

Did you wonder where we are headed? Who would care if something more wonderful were not involved? To this very day, one of the most common discussions on audio amplifiers is that of the guitar and transistors v valves. Electric guitar players are convinced that the old valve amps are better. It is said that the transistors make the sound harsh ("tinny") by emphasising odd harmonics. I have never really understood this claim. Although valve rectifiers might give unintended compression, there are several things that put me off! Valves with their associated copper & iron transformers are heavy to carry & expensive! Overdrive effects are just as possible with transistors. I will grant that valves are much more resistant to static damage, but that is all. You tell me!

Saturday 21 February 2009

Speakers & Headphones

There once was a "Loudspeaker" made by the Leak company that was "electrostatic" in its operation. They were used for Hi-Fi in the early 1960's - so I suppose they were good. Very slim-line, they were expensive and looked like a radiator! We don't see them anymore.

I think all modern speaker "transducers" operate on an electro-magnetic principle. We want to move the air at various frequencies to hear our voices and our music. Long slow strokes of the diaphragm for the bass, very short fast strokes for the treble. That's one reason for the many different sizes and, indeed for the development of the "CROSSOVER" circuit used in Hi-Fi to drive them. We send the higher frequencies to the small one and the lower freqs to the larger one. Sometimes there's a "Midrange" unit (speaker).
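The crossover sums can be sketched, assuming the simplest first-order (single component) filters - a series capacitor feeding the small unit and a series inductor feeding the large one - and a hypothetical 3kHz crossover point:

```python
import math

def highpass_cap_farads(crossover_hz, speaker_ohms):
    """Series capacitor for a first-order high-pass to the small unit: C = 1/(2*pi*f*R)."""
    return 1.0 / (2 * math.pi * crossover_hz * speaker_ohms)

def lowpass_inductor_henrys(crossover_hz, speaker_ohms):
    """Series inductor for a first-order low-pass to the large unit: L = R/(2*pi*f)."""
    return speaker_ohms / (2 * math.pi * crossover_hz)

# A hypothetical 3 kHz crossover into 8 ohm units
print(f"C = {highpass_cap_farads(3000, 8) * 1e6:.2f} uF")
print(f"L = {lowpass_inductor_henrys(3000, 8) * 1e3:.2f} mH")
```

Real Hi-Fi crossovers usually use steeper second-order networks, but the arithmetic above shows why the component values come out in the microfarad and millihenry range.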

All these units are constructed on the magnet & coil principle. They are the opposite of a generator. Here we have a varying applied (ac) voltage that generates a changing current in a coil. The ensuing magnetic field alters in sympathy and reacts with its core magnet, thus causing the coil assembly to move. A thin (was paper - now often plastic) cone is attached to move the air. Too simple? Go and have a look at one!

We need these units to be very "responsive" & that means following the applied current in an exact copy. It ain't gonna happen folks! There's a raft of problems. Not least of these is the inertia, the impetus and the IMPEDANCE. {Got there!}.

The fact that the impedance varies with frequency is a big problem. If we know the impedance at a certain spot frequency, it still only gives us a basic idea of how far the impedance will range.

CROSS-OVER DRIVE
We can use different size units, coupled with a frequency steering & impedance matching network, but we still get an awful problem of "LINEARITY" in the drive current between frequencies. A novel, much used way around that is by a device within the amplifier known as NEGATIVE FEEDBACK. Were it not for this I doubt that true High Fidelity would be possible. Even then - but wait, enough unto the day etc. MORE SOON.

NEGATIVE FEEDBACK
The effective gain (amplification) of the amplifier is altered by taking a sample of the actual output and reducing gain when it is too much and increasing it when too little. We are able to do this via frequency filters if we want to, it's just the principle we need to grasp. If we take a sample of what is delivered into the speaker (load) we can level things out. There!
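The standard feedback formula shows why this levelling works so well. A sketch, with illustrative gain figures:

```python
def closed_loop_gain(open_loop_gain, feedback_fraction):
    """Standard negative-feedback formula: A / (1 + A*B).
    With a large open-loop gain A, the result is set almost entirely
    by the feedback network B, not by the amplifier itself."""
    return open_loop_gain / (1 + open_loop_gain * feedback_fraction)

# Even if the raw gain drifts from 100,000 down to 50,000, the
# closed-loop gain barely moves when B = 0.01:
print(round(closed_loop_gain(100_000, 0.01), 2))  # 99.9
print(round(closed_loop_gain(50_000, 0.01), 2))   # 99.8
```

A halving of the raw gain shifts the final gain by about 0.1% - which is why the speaker's lumpy load hardly shows in the output.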

Glossary of Terminology


Right here's an apology: - I'm sorry! I really should have foreseen that someone would want to know what these words mean - in this context. I can/will add to this list as we may progress. For now just these: -

RESISTANCE - This is a measure of the opposition to a DC (& sometimes an ac) current flowing in a circuit. We say that when 1 Volt causes a current of 1 Amp to flow, the resistance in the circuit is 1 OHM. This, then, is OHMS LAW. To get POWER in WATTS multiply the current (I) by the Volts (V).
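In code form, for those who like it that way:

```python
def current_amps(volts, ohms):
    """Ohm's law: I = V / R."""
    return volts / ohms

def power_watts(volts, amps):
    """Power: P = V * I."""
    return volts * amps

i = current_amps(12.0, 6.0)   # 12 V across 6 ohms -> 2 A
print(power_watts(12.0, i))   # 24.0 W
```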

CONDUCTANCE is the reverse of that, with a unit called a MHO (mho) - get it? {Nowadays officially the SIEMENS}. Don't worry about this, it rarely crops up!

IMPEDANCE - Still measured in ohms, this is the complex ratio of sinusoidal (ac) voltage to current in a component or circuit, consisting of two parts: the real & the imaginary. The real part is the RESISTANCE which will dissipate heat; the REACTANCE part is "imaginary" and does not dissipate.

REACTANCE - Measured in ohms this is the ac only part of any circuit or component. Characterised by the storage of energy rather than by Wattful dissipation.

NOTE:- Bear in mind that most/all practical electrical/electronic components are not "pure" in the sense that they have elements of all in them. In that regard, and for most practical circumstances, we can regard RESISTANCE as being for both DC & ac, but IMPEDANCE as being for ac circuits which are much less than pure. Usually containing INDUCTANCE and/or CAPACITANCE.

RMS - Root Mean Square. DC is what you get from a battery. It has a fixed polarity. Ohms law works! In ac power they use slip-rings on the generator to collect the current instead of a commutator - as in a motor or DC dynamo. There are advantages which outweigh the cons. Less arcing, less brush wear AND because the polarity keeps reversing (50Hz here - 60Hz in America), it is much safer for us! Muscles don't get held in relentless contraction. You can alter voltages up or down with a transformer. You can't do that with DC. The shape of the reversing wave is sinusoidal - that is, relating to the cyclic operation of the generator. We need to know when this varying & reversing voltage has the same heating effect as an equivalent DC voltage - hence an average of the power delivered. There's a formula which relates to the ROOT of 2. That's 1.414. The peak voltage has to be 1.414 times higher than its DC equivalent to give the same heating effect. That means that a 240Vac supply will have a peak of 1.414 X 240 = 339.4 Volts. In BOTH directions! That makes the Peak to Peak (p to p) Voltage = 678.8
The wave shape is traced in time like the valve on a wheel as it rolls. If the axle is at zero, the valve goes higher and lower than that describing a sine wave. Rotations / second is Hz. Think about it!
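The mains arithmetic above, done in Python:

```python
import math

RMS_MAINS = 240.0                 # UK nominal at the time of writing
peak = RMS_MAINS * math.sqrt(2)   # the RMS value times root 2
peak_to_peak = 2 * peak           # swings equally in both directions
print(f"peak {peak:.1f} V, p-p {peak_to_peak:.1f} V")
```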

Friday 20 February 2009

Audio Electronics

I am often asked about various aspects of this somewhat confusing arena. I find I have to think back to all the short-cuts that I used in my occupation. Here are a few notes that I may choose to refine in the light of further interest and discussion.

HEADPHONE IMPEDANCE
Several articles quote typical 50 to 150 ohms and up to 600 ohms. This is very different from my memory of so-called Hi-Z types which were in Kohms. Further research needed. The impedance of my mp3 headphones is 18.8 ohms each. That is at 1kHz - which is/was a common standard. Remember that XL increases as frequency rises, so you cannot get a reliable reading from a multimeter on its DC ohms range!

So I've fixed my Inductance meter and the headphones for my mp3 player are 0.3mH and another similar set is 0.5mH. Is that sensible?
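A quick reactance sum suggests it is: at the 1kHz test frequency the measured inductance contributes almost nothing, so the 18.8 ohm reading must be nearly all resistance.

```python
import math

def inductive_reactance(freq_hz, inductance_h):
    """XL = 2*pi*f*L, in ohms."""
    return 2 * math.pi * freq_hz * inductance_h

# The 0.3 mH measured above, at the 1 kHz standard frequency
xl = inductive_reactance(1_000, 0.3e-3)
print(f"{xl:.1f} ohms")  # under 2 ohms
```

The inductance only starts to matter towards the top of the audio band, where XL climbs towards the voice-coil resistance.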

DECIBELS
This is a ratio unless you state an impedance - "characteristic" impedance in the case of cable. We say this is, say, 22dB up (or down). FROM WHAT? POWER & VOLTS are different when it comes to dBs. Read on. (1 Watt is 1 joule/sec - that's power; the joule is energy, as in calories).

There is confusion between various references. In particular the old Post Office Telephones standard for their nominally 600 ohm network. Knowing that impedance one can calculate actual power for particular dBs (known as dBm). The standard is that 1mW into 600 ohms is set at 0dBm. The operation of the network is -13dBm. Very easy if you will settle for each 3dB (of power - not volts) being double or half. So -3dB is half = 0.5mW. Do it again is 0.25mW, then again is 0.125mW, and one more time for -12dBm is 0.0625mW. That's the power for a telephone earpiece (well -1 more dB actually). Why? Well to get 1mW into 600 ohms we need an RMS voltage of 0.775V {V = the root of (P x R) = the root of 0.6}, which is a peak ac wave of about 1.1V. Now just almost forget it!

Forget it save for one more very useful thing. When people use dBs in audio it's useful to know that power (Watts) doubles for each addition of 3dB, as I just said. If one is measuring voltage however, it doubles for each rise of 6dB. So if you raise 3V by 6dB it becomes 6Volts. This Voltage calculation becomes useful because microphones for example are often quoted in dB outputs. Just as in music no-one ever says that E7 / C7 is actually a FLATTED seventh note in the scale, so it is that no-one ever says that the microphones are normalised to 1V (peak). {That's times 0.707 for RMS}.
So a -70dB moving coil microphone will be 70/6 = 12 X 6dB steps down approx. You divide 1Volt by 2 twelve times! (or by 2 to the power of 12)
1V is 1000mV so if you divide it by 4096 you get about 0.24mV peak as the output. Slight error (the exact figure is 0.32mV) - but you can do it in your head! For RMS multiply by 0.707 = about 0.17mV. Amplifiers are the same. The gain is quoted in dBs. It doesn't tell you much unless you assume the voltage going in. Now a passive guitar pick-up gives around 30mV. BUT NONE OF THIS SPEAKS OF ANY KIND OF IMPEDANCE. {An unloaded Voltage is reduced by input impedance}.
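Doing the same conversions exactly, rather than in 6dB steps, is a one-liner each way:

```python
import math

def db_to_voltage_ratio(db):
    """Voltage ratio for a given dB figure (20*log10 convention)."""
    return 10 ** (db / 20.0)

def db_to_power_ratio(db):
    """Power ratio for a given dB figure (10*log10 convention)."""
    return 10 ** (db / 10.0)

# The -70 dB microphone example, done exactly:
print(1000 * db_to_voltage_ratio(-70))   # about 0.32 mV per volt of reference
# And the rules of thumb from the text:
print(round(db_to_power_ratio(3), 2))    # close to 2 - power doubles per 3 dB
print(round(db_to_voltage_ratio(6), 2))  # close to 2 - voltage doubles per 6 dB
```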

IMPEDANCE of CABLES
The capacitative reactance of screened cable has a greater effect when it is shunted across a high impedance. High Frequency (HF) losses are therefore worse with long cables. {Typical value is 200pF/metre}. One way around this is to use low impedance microphones with a step-up matching transformer. Medium Z is 200-1000 ohms. Hi-Z is 50Kohm or more. Lo-Z is 30-50 ohms.
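The effect can be put into numbers: the source impedance and the cable capacitance form a simple low-pass filter with its -3dB point at 1/(2 x pi x R x C). A sketch using the 200pF/metre figure above (cable length and source impedances chosen for illustration):

```python
import math

CABLE_PF_PER_METRE = 200.0   # the typical figure quoted above

def hf_corner_hz(source_ohms, cable_metres):
    """-3 dB point formed by source impedance and cable capacitance: 1/(2*pi*R*C)."""
    c_farads = CABLE_PF_PER_METRE * cable_metres * 1e-12
    return 1.0 / (2 * math.pi * source_ohms * c_farads)

# With 10 m of cable, a Hi-Z (50K) source rolls off inside the audio
# band, while a 200 ohm source is nowhere near trouble:
print(round(hf_corner_hz(50_000, 10)), "Hz")   # around 1.6 kHz
print(round(hf_corner_hz(200, 10)), "Hz")      # around 400 kHz
```

Which is exactly why Hi-Z microphones need short leads, and Lo-Z ones don't.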

There is no absolutely pure impedance (Z). There's R+XC+XL in any practical component. They all contain pure RESISTANCE plus capacitive and inductive REACTANCE.
Z impedance formulas are for C in Farads (that's enormous - the largest unit we use is the uF, which is 1 millionth part or x 0.000001). For L this is in HENRYS, which is also quite large. We use mH and uH more often. They are 1 thousandth and 1 millionth part respectively. That's x 0.001 or x 0.000001.

SPEAKER IMPEDANCE
There's a standard. Well more than one! It started with - well no standard at all but then 16 ohms became common. With transistors came 8 & 4 ohms. The reason for the low impedance is to keep the coil assembly light & small. Then again thicker wire can be used for high Watt output powers.
For Public Address (PA - not Power Amplifier as in guitar), in buildings where there were speakers in many rooms & places, there came a "LINE" distribution and speakers were often around 100 ohms - or more - and all wired in series/parallel as needed to match to the nominal Impedance of the system. Transistorised outputs of any power are usually 8ohms, but tolerant to higher impedances without distortion.
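The series/parallel sums used to match a string of speakers to a line are simple enough to sketch:

```python
def series(*impedances):
    """Impedances in series simply add."""
    return sum(impedances)

def parallel(*impedances):
    """Reciprocal of the sum of reciprocals."""
    return 1.0 / sum(1.0 / z for z in impedances)

# Two 8 ohm units in series make a 16 ohm load;
# two 16 ohm units in parallel come back to 8 ohms
print(series(8, 8))      # 16
print(parallel(16, 16))  # 8.0
```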

Now then on to valve amplifiers and their speaker loads. In order to match the high output impedance of a valve (many Kohms) with a low impedance speaker, a transformer is needed. These are heavy & expensive but many musicians like the result they give. They are more commonly 16 ohms, or two 8ohm units in series. It is absolutely vital that you do not operate without the speaker load. There will be almost certain damage to the transformer from undamped back EMF. {TIP: - Good idea to use a shorting jack with a load resistor to replace the speaker if it is unplugged}. It's not so much the valve damage that you have to worry about. They are relatively rugged & cheap - even now. It's the damage to the transformer as its inter-winding insulation breaks down. They are expensive and much more difficult to fit.
MORE TO FOLLOW

Saturday 14 February 2009

Explanation

"LONDON APPRENTICE" is the general electronic engineering persona who deals with technical things in a nice peaceful fashion.
"MUSICAL BENT" can be found at Beresfordsmusicality.blogspot.com. He deals with the live issues of musical performance.
"SPECIAL BITTER" deals with thorny issues and is less than tolerant or pleasant. We would say he has "attitude."