Weird World Wars – Things You Probably Never Knew about the Two World Wars

 

Did you know that…

During the First World War…

Soldiers used urine for almost anything! They pissed on their boots to soften the leather. They pissed on their handkerchiefs to make improvised gas-masks. They even pissed on their machine-guns to stop the barrels warping from overheating! Urine was ideal for several applications in the trenches: it was easily accessed and in plentiful supply. For any duty where water was not absolutely required, or where urine was an acceptable substitute, this freely available fluid was utilised. Pee for victory!

Australia had the only 100% volunteer army. While the other nations that participated in WWI had standing armies, the newly-federated (1901) nation of Australia did not have an army of its own. All the troops and officers it sent to fight in the Great War were volunteers drawn from the ranks of civilians. Most of them had no prior combat experience, and received only the most basic, and often outdated, infantry training!

The first air-raids on a large population centre were carried out. In 1915, the first-ever air-raids on a major city were conducted by German ‘Zeppelin’ airships. Although highly inaccurate, these raids brought the war to a civilian population that had previously been untouchable. For the first time, the people of Britain realised that the Channel was no guarantee of safety. The raids were carried out on London and other major British cities from January 1915 until August of 1918.

The Underwood Typewriter Company manufactured a gigantic, working typewriter as a marketing gimmick in 1915!…It was later melted down for the war-effort. 

Despite the fact that the war started in Europe, the first allied shot was fired from Fort Nepean in Victoria, Australia!

Just two and a half hours after the declaration of war, Australia, a country on the other side of the world, fired the first allied shot of the war, using the coastal artillery cannon at Fort Nepean.

During the Second World War…

Despite the fact that the war started in Europe, the first allied shot was fired from Fort Nepean in Victoria, Australia!…Again! 

Just as in August of 1914, on the 3rd of September, 1939, the first allied shot was fired by the coastal artillery cannon at Fort Nepean, in Victoria, Australia, on the other side of the world! By the same gun, from the same fort…and the shot was even ordered by the same man! In both instances, the gun-captain, Commander Veale, ordered a shot fired across the bows of a ship which refused to heave-to. In both instances, this happened just hours after the official declaration of war, and before any other allied nation had fired so much as a flare.

Cities were bombed with pianos! Okay, not really. But…starting in 1944, pianos were parachuted into bombed-out but liberated cities across Europe as the Allies advanced eastwards towards Berlin. Manufactured by Steinway & Sons and called “Victory Verticals”, these lightweight, cheap, upright pianos were designed to provide entertainment for troops and for liberated civilians whose own instruments had been damaged by air-raids and artillery-barrages during the earlier years of the war. In all, 2,436 Victory Vertical Steinways were manufactured.

A Steinway ‘Victory Vertical’ piano, sourced from pianoworld.com

The British tried making aircraft carriers out of ice! Those crafty Limeys. They tried disguising convoy ships as icebergs, and drew up plans for aircraft-carriers built of ice, to save precious steel.

No such ships ever made it off the drawing-board.

American psychologists produced a Freudian-style profile of Adolf Hitler. As part of trying to understand their enemy, the Americans drew up a psychological profile of Adolf Hitler. Theories about Hitler’s personality and possible future actions were built up from known facts about the Führer. These were gleaned from his published works, his body-language in films, and from the few people who knew him intimately and had escaped to America. One of them was Dr. Eduard Bloch!

Dr. Eduard Bloch in his medical office in Austria, 1938. Two years before he fled to America with his family

Bloch (1872-1945) was the Hitler family doctor…and a Jew. For Bloch’s attempts at treating Hitler’s mother’s breast cancer (from which she subsequently died), Hitler gave him special protection from Nazi antisemitic persecution. Despite this, Bloch felt unsafe, and fled from Austria to America in 1940.

Over a three-year period, from 1941 to 1943, he was interviewed extensively by the Office of Strategic Services, or “O.S.S.”, the precursor to the CIA. He provided the Americans with valuable insight into Hitler’s personality and early life, which helped them produce their psychological profile. He told them about the death of Hitler’s mother, how Hitler reacted to the news, and details of Hitler’s childhood and upbringing.

Bloch settled in New York City. He lived long enough to see the defeat of Germany, and Nazism in Europe. He died on the 1st of June, 1945, at the age of 73.

The profile drawn up by the Americans was surprisingly accurate. It correctly anticipated an attempt on Hitler’s life (the July 20 bomb-plot of 1944), his increasing withdrawal from public life, and even his suicide in 1945!

During the war, many companies ceased production of their peacetime consumer-goods and started manufacturing materials for the war-effort. Where possible, companies were asked to build things using materials, machinery and skills which they already had. It wasn’t always a great success.

Steinway & Sons, the piano-manufacturer, produced lightweight wooden gliders for the Allies. These were used on D-Day, during the invasion of Normandy.

The Singer Manufacturing Company, world-famous producer of sewing-machines, was tasked by the Americans with producing sidearms for the army. It was given a contract to produce 500 Colt .45 automatic pistols. Not all of the pistols passed muster, and Singer produced no more guns for the duration of the war. It produced bomb-sights instead!

Singer lost the pistol contract to Remington Rand, the famous typewriter manufacturer! Remington Rand produced M1911 pistols from 1942 until the war ended in 1945. In total, it cranked out 877,751 of them for the U.S. Armed Forces!

The Royal Typewriter Company ceased all production of civilian typewriters during WWII. From 1942 until the war ended in 1945, it cranked out rifles, bullets, machine-guns, and spare parts for airplane engines! It didn’t start making typewriters again until the war had been over for two months!

The Underwood Typewriter Company produced M1 carbines for the war-effort. In the late 1930s, it manufactured a gigantic, working typewriter as a marketing stunt for the World’s Fair:

Just like the 1915 giant…this, too, was melted down for the war-effort! It was a giant version of the Underwood Master standard typewriter:

Rationing on the British Home-Front was so severe that people came up with interesting substitutes for rare, rationed foodstuffs and goods…

Makeup for women was in short supply. Beetroot-juice served as lipstick, while gravy-browning and pencil-marks were used to create the illusion of stockings.

Eggs were almost nonexistent. If you wanted them, you had to open a tin of egg-powder instead! (Eugh…) The powder was mixed with water, and the resultant slurry was fried in a pan.

Restaurants continued to operate throughout the War, but were not allowed to charge more than 5s (five shillings) per dish. Vegetables were not rationed in any way at all.

Fish and chips were not rationed. But getting plaice, cod and other regular varieties of fish was almost impossible. Instead, Britons had to eat snoek (pronounced “snook”), imported from South Africa.

Winston Churchill was an impossible workaholic. He worked day and night. He worked on the toilet. He worked in the bathroom. He worked in bed. He would stay up for hours and hours at a time, working. By comparison, Hitler enjoyed his shut-eye.

The British Army had its own magician! No, I’m serious. It really did.

His name was Jasper Maskelyne (1902-1973). Born into the famous Maskelyne stage family, Jasper was originally a magician performing in London’s West End theatreland. When war broke out, Maskelyne was recruited by the British Army to provide morale-boosting performances for allied troops. He soon grew bored of this, feeling that he was not doing enough for the war-effort, and offered his services to the army as an expert in camouflage and deception. The Army was not exactly taken with the idea. They thought Maskelyne was mad!

Maskelyne’s argument was that, as a stage magician, he had a lifetime of experience in deception, trickery and illusion, which would surely come in handy for the Army! But they weren’t interested. To this, Maskelyne famously retorted:

“If I could fool an audience only twenty feet away, I could certainly fool the enemy, a mile away, or more!” 

Maskelyne supposedly convinced the army that he had something to offer, when he successfully created the illusion of a German battleship. He was employed as a camouflage expert, and together with his team of men (the “Magic Gang” as they were called), Maskelyne set to work putting on his greatest show ever.

Among other things, Maskelyne disguised tanks as trucks, to make military build-ups look like harmless goods-deliveries. He set up blackouts and fake lights at night to seemingly shift the position of Alexandria Harbour (a key target for the German air-force), and, most amazingly, shrouded the Suez Canal (a vital link between Britain and its Empire) beneath ‘dazzle-lights’.

Dazzle-lights were powerful searchlights aimed at the sky. Twenty-one massive searchlights were fitted with revolving heads, each head carrying two dozen smaller lights. Aimed at the sky and constantly spinning, the hundreds of beams created a glittering, dazzling effect. It was very pretty, but its purpose was to disorientate German pilots. Blinded by the dazzle, they wouldn’t be able to look down from their aircraft to spot the canal, and therefore wouldn’t be able to bomb it.

The canal is still here, so it obviously worked.

So there you have it. These are just a few of the weird, wacky little facts about the two World Wars which you probably won’t find in your history books.

Flaming Hell! A History of Fireplaces and Fire

 

Ooooh, burny…

My fireplace in winter.
It’s nice to sit and tap away on the Underwood
in front of a big blazing inferno

Fire: Primal. Essential. The key to human survival. Used to describe everything from boiling passion and flaming love, to burning hatred and searing vengeance. What is the history of fire? How has it shaped the world? And how has the world shaped fire? Let’s find out together.

The Essence of Fire

There are innumerable milestones in the history of mankind, from walking upright, to using tools, to hunting, gathering, farming and the waging of war. But few achievements in history are as important as the creation, understanding, and use of fire. For thousands of years, fire was essential to life. It heated our homes, it gave us light, it cooked our meals, and it gave us warmth and protection. Without fire, human migration and settlement would’ve been next to impossible, and human progress and creativity would’ve been greatly hindered. This post will look at man’s use of fire, as well as the advancement of fire-making technologies and tools.

The Three Elements

A fire requires three things to burn:

Air. A fire cannot burn without sufficient oxygen.

Fuel. A fire cannot last without additional fuel to keep it going as it consumes its current supply and turns it to ash.

Heat. A fire does not burn and does not last without heat to get it going, and to keep it going.

It was early man’s understanding of these three components of fire that allowed him to use and control fire. Control it for heat, light and cooking. And control is vitally important – improperly used, fire can destroy as much as it can delight. But how do you get a fire?

To start a fire, you first need fuel. Small fuel at first: tinder. Tinder is anything small, dry and extremely combustible. Cotton-wool, old thread, shredded cloth, dry straw, moss, grass and finely torn paper will all suffice as tinder.

On top of the tinder, you require kindling: small pieces of wood to encourage the fire to burn and grow. Kindling needs to be small and dry. Branches, off-cuts of planks, scrap wood, bark and so on will all suffice.

On top of the kindling, you require fuel-wood. Fuel-wood, or firewood, consists of the larger logs, or segments of logs, which you load on top of the kindling once it’s burning sufficiently. As with the others, it needs to be dry. Start with smaller pieces of fuel-wood first (like thick branches) and work your way up to larger logs.

There are a million and one methods of building fires: upside-down fires, teepee fires, log-cabin fires…the methods are endless, and so are the arguments over why method A is better than method B. So I won’t cover that. Everyone has their own method that works for them.

But how do you LIGHT a fire? This, for centuries, was one of the hardest things to do…

Lighting a Fire

You have your tinder, kindling and firewood. Now you just need it to burn. A fire won’t start without heat to get it going, and to get heat, you need a concentration of energy. Before the advent of matches in the 1800s, fire-lighting was a laborious and at times fiddly task, achieved in one of two ways: concentrating light energy, or concentrating friction energy.

Ever stolen grandpa’s magnifying glass and used it to burn ants? That’s starting a fire by concentrating light energy: focusing the rays of the sun on one spot for long enough that the intense heat causes your tinder to catch fire.

The other method of creating fire, if the sun was not available, was to use friction. This is much more unpredictable and requires quite a bit of skill and patience, but it does work.

One of the most common ways of lighting a fire through friction was through the use of the bow-drill:

A flat piece of wood (the fireboard) with a notch cut into it is placed on the ground, with a piece of tinder set beneath the notch. A wooden stake (the ‘drill’) is stood upright in the notch, held steady from above by a hand-held block of wood. The bowstring is then looped around the drill, and the bow is drawn rapidly back and forth, spinning the drill.

Spinning the drill at high speed against the fireboard creates friction, which creates heat. The friction grinds off fine, charred wood-dust which, once hot enough, forms a glowing ember that catches the tinder. Once the tinder is lit, the bow, the drill, and the hand-block are removed, and the tinder is fed with kindling to start a fire.

Placing the Fire

Gathering tinder, kindling and fuel-wood for a fire and drying it out was relatively easy. So was starting a fire, given the right tools and sufficient practice. The next thing for early man to conquer was the placement of a fire.

Fires had to be built and lit with careful consideration. Failure to light a fire in a safe place could result in catastrophic, uncontrollable infernos that could destroy grasslands, forests and settlements.

Controlling Fire

The first fires were simply built and lit inside ‘fire-pits’. A fire-pit was an area of land cleared of grass and wood, in which a hole was dug. Stones were placed around the hole to create a fire-ring and hearth, and the fire was simply built inside the ring and left to burn. For centuries, this was the main method of fire-control and placement.

Having an open fire in the middle of your house or room or hut or cottage or cave had its advantages and disadvantages. First – the heat was all over the place – Lovely!

The problem was…so was the smoke! Although fireplace smoke can smell beautiful and tangy (which is why we love smoked foods and wood-fired pizzas so much), uncontrolled smoke could be deadly to the people around the fireplace.

To control the smoke, or to clear it out of the building, a simple solution was to cut a hole in the roof and let the smoke escape through it. This worked…kinda. The smoke would leave the house through the hole in the roof…eventually. It would waft up there, not flow up there. So it took a while. And if the wind was against you, then you had real strife!

The chimney, followed by its companion, the fireplace, was invented in the 12th Century (the 1100s), although for a long time they were features found only in wealthy homes and castles. They had to be carefully constructed of stone or brick, which made them expensive. But by the 1500s and 1600s, fireplaces were slowly becoming more common in the homes of ordinary people.

The Fireplace

Starting in the Medieval Period, houses of varying levels of grandeur were constructed with chimneys and fireplaces. Fireplaces were built out of stone or brick, and a typical fireplace setup involved…

The Chimney or Flue

The long stone or brick shaft which channelled the smoke up and out of the building.

The Smoke-box

The chamber at the bottom of the chimney-pipe, which acted as a buffer against downdrafts.

The Fire-Box/Fireplace

This is directly under the smoke-box, and it’s where the fire itself would be located.

The Hearth

The stone or brick platform on which the firebox and chimney are built. It sometimes extends beyond the firebox into the room, to provide extra protection against rolling logs.

The Advantages of the Fireplace

The fireplace had numerous advantages over the everyday hole-in-the-ground fire-pit. The fire was now safely contained in its own little box, with a stone chute to carry away the smoke. A sliding or hinged shutter above the firebox, called the damper, allowed you to close off the chimney in inclement weather, to stop cold drafts and rain coming down the chimney and into the room below. A big improvement on the hole in the roof, which was a permanent opening to the weather outside!

In smaller dwellings, a fireplace was used as both a heater, and as a cooker. The fire kept the room and house warm, but also provided heat for cooking. Pots hung on hooks, or placed on trivets or stands over the coals and ashes of the fire, could hold food (usually soup or stew or some variety of pottage) which could be cooked, or kept warm over the coals and flames.

In and Around the Fireplace

As the fireplace became more and more a part of people’s homes and lives, a whole industry sprang up supplying equipment and accessories that the discerning homemaker could purchase, for the fireplaces which dotted the average home from the 1600s right through to the majority of the 20th century.

Andirons

Also called ‘fire-dogs’, andirons (sold in pairs) are iron (or in more expensive models, brass) stands used to support burning logs above the hearth of the fireplace, to encourage air-flow and improve a fire’s chances of burning more completely.

Brass andirons in a fireplace

Andirons could be simple iron bars or frames, or they could be elaborate, decorative stands made of brass. Some andirons had additional bars and hooks which could be attached or removed as required, so that buckets, pans and pots could be hung over or near the fire, to allow water to boil, or to cook a simple meal.

Andirons at work, supporting a stack of burning firewood

Fireplace Grate

The grate in my fireplace

Invented in the 1600s, fireplace grates were a big advancement on andirons. While andirons could hold large logs and chunks of firewood, a fireplace grate could contain the entire fire, kindling, charcoal, fuel-wood and all, and keep it off the floor of the fireplace, improving airflow. Made of wrought iron forge-welded together, grates varied in size, from smaller coal-burning grates to much larger wood-burning grates, which could be several feet wide and several inches deep.

Fenders

Typically made of brass or iron, a fender is a wrap-around fire-guard placed on the hearth in front of the fire. It’s designed to prevent ash, coals or rolling logs from entering the room and creating a mess, or starting any unintentional fires.

A brass fireplace fender. Fenders are freestanding, and they can be moved to more easily clean the fireplace between uses

Fire-Irons

Fires were originally tended using whatever utensils were close to hand, usually improvised: old swords, iron bars, tree-branches and such. Eventually, pokers were made to give a person a permanent fire-tending tool. Ash-shovels, brooms and fire-tongs soon followed, and it’s these four items that typically make up the average set of fire-irons, usually stored on their own little iron or steel stand. Fire-irons are made of iron, or, in more expensive sets, brass.

 

Fire-irons, stored on their own racks, became staples of homes around the world, and every household was likely to have at least one set. Smaller and shorter ones for coal-burning fireplaces and stoves, and larger, longer ones for wood-burning fireplaces.

Log-Cradle

Placed next to the fireplace, or directly outside the front or back door, a log-cradle (and its relation, the log-bin) became a necessity during very harsh winters.

When it was impossible to trek out to the wood-shed in the middle of the night, or when snow or rain proved too heavy, wood had to be stored near the house. Log-cradles were designed to hold enough wood for anywhere from one night’s burning up to a week or more. The cradles are always held above the ground on legs, to stop moisture from gathering and to allow the wood to dry more effectively.

Dustbin

These days, a ‘dustbin’ is just another word for a rubbish-bin or garbage-bin. But in the days when wood and coal fires were a part of everyday life, a ‘dustbin’ was a separate and distinct entity. Made of metal, with tight-fitting lids, carry-handles and raised bottoms, dustbins were constructed specifically for the task of holding household dust and fireplace ash and soot.

Ash from the fireplace was usually stored in the dustbin only temporarily. When the bin was full, the ash would be dumped onto the garden compost-heap. In large cities where this wasn’t possible, the dustbin was collected by the dustman in his dust-cart on a regular basis, and the ash and dust were carted off to the countryside for use as fertiliser.

Bellows

Fire was an important part of life for centuries, especially in places like the kitchen. Wherever possible, man created instruments which improved and sped up the creation and maintenance of fire. You could continually blow on a fire, or fan it, to give it more airflow and oxygen, but blowing is exhausting, and fanning is imprecise.

Bellows are much more precise, regulated and forceful, which is why they were preferred over other methods of giving a fire oxygen. Feeding a fire oxygen like this causes faster combustion and, therefore, greater heat output.

Fireplace Reflector/Fireback

It may surprise some people, but fireplaces are not especially efficient. Crackling flames and wafting plumes of smoke give the impression of great energy and heat, but in fact only a small amount of that heat and light is projected into the room. A fireplace is open on only one side, so only about a quarter of the fire’s energy reaches the room. The rest of the heat the fire generates is absorbed by the iron grate, the floor, and the three walls of the fireplace, or else goes up the chimney.

To improve fireplace heat-efficiency, a fireback is generally recommended. A fireback is a metallic panel placed behind the grate, between the fire and the back wall of the fireplace.

Firebacks come in one of two styles: solid cast-iron panels, or reflective steel, copper or aluminium panels (the latter called fire-reflectors). They both do the same thing, but in different ways.

An iron fireback absorbs the heat of the fire and radiates this captured heat back outwards into the room. This increases the amount of heat delivered to the room, heat which would otherwise be wasted, absorbed by the brickwork of the back wall.

Antique cast iron fireback

A reflective fireback, or fireplace reflector, works by reflecting the heat and light of the fire back out into the room. This not only increases the heat output substantially, but also creates a brighter fire.

The reflector placed behind the grate in our fireplace. A homemade affair easily fashioned out of sheet-metal, a few screws and some metal bars

Fire Screens

The Great Fire of London Screen!

Along with fenders, fireplace screens started being used in the 18th and 19th centuries. Originally just a way to cover up the fireplace when it was not in use (to hide the unsightly vision of burnt charcoal and ashes), modern fireplace screens (made of copper or brass) serve the double purpose of also protecting the room from sparks, flying embers and rolling logs.

Chimney-Sweeping

Chim-chimney-chim-chimney-chim-chim-cheree,
a sweep is as lucky as lucky could be…

Apart from giving us possibly the worst Cockney accent in movie history, the otherwise wonderful Dick Van Dyke furnished those of us living in the 21st century with another falsehood about the history of fire: that chimney-sweeping was a jolly old lark, full of fun and games!

If only t’were true.

A fireplace that is used for the majority of the year, every year, or one which is used every day for years on end, needs to be swept regularly. The rosy-cheeked fellow who does this is the humble chimney-sweep.

Every time you light a fire in your fireplace, soot and ash are drawn up the chimney by the updraft of smoke. Over the course of years, this soot, together with condensed, unburnt tar from the smoke, builds up inside the chimney, forming black, crumbly deposits called creosote. Just as grease in your kitchen drain prevents water from going down the pipes effectively, buildups in the chimney prevent smoke from going UP the pipes effectively – in this case, your chimney-pipe, or flue.

For this reason, it’s necessary every now and then to get your chimney swept. By a sweep. With a broom and a brush.

Men of the Stepped Gables

If you’ve ever been to Europe, you may have seen buildings with rather odd-shaped roofs, such as this:

At the peak of the roof, you can see the chimney-stack with the pots on top. Sloping away on either side is the roof. See how it’s staggered down like a staircase?

Called crow-stepped gables, this roofing style was popular from the Middle Ages up to the 1700s. Although it looks very pretty and geometric, it actually served a practical purpose: it’s a built-in chimney-sweep’s staircase!

In an age when ladders rarely reached right up to the roof, buildings were constructed with crow-stepped gables to give the poor chimney-sweep somewhere to stand and climb in relative safety as he made his way to the chimney-top to sweep down the ashes. And it was just as well, because chimney-sweeping was rife with dangers! Rather ironic, then, that chimney-sweeps are supposed to be symbols of good luck!

Up until the late 19th century, chimney-sweeping was an extremely dangerous and even lethal profession. But not always for the reasons you might suspect. Laws in the United Kingdom and the United States had to be passed, and then strengthened, before the practice of shoving boys up chimneys was finally abolished in the 1870s.

Child Chimney Sweeps

“It’s a nasty trade!”

“Young boys have been smothered in chimneys before now”

“That’s a’cause they damped down the straw afore they lit it in the chimbly to make ’em come down agin! That’s all smoke, and no blaze; whereas smoke ain’t o’ no use at all in making a boy come down, for it only sends him to sleep, and that’s wot he likes. Boys is wery obstinit, and wery lazy, gen’lemen, and there’s nothink like a good hot blaze to make ’em come down with a run…It’s humane, too!” 

– “Oliver Twist”, 1837

No write-up about chimneys and chimney-sweeping could possibly be complete without a part dedicated solely to the trials and tribulations of the unfortunate apprentice-sweeps. From the earliest days of chimney-sweeping up until the last quarter of the 1800s, children were used to sweep chimneys. It was indeed a nasty trade, to say nothing of being extremely dangerous, even lethal. But what made it so?

In England especially, but also in the United States, children, usually young boys between the ages of four and ten, were sent up chimneys with small brushes to sweep down the ashes inside the chimney-flues. It sounds harmless enough, but was actually phenomenally dangerous.

Imagine the following…

It’s 1830. You’re an orphan-boy, maybe six years old. You’re apprenticed to a Master Sweep. A typical assignment has you following your master to a well-to-do house somewhere in London, to sweep the chimney.

Now understand, please, that in most cases it was NOT the master sweep who did the sweeping – that was usually the job of his apprentice-boy. The youth would be given a brush, and then he would literally have to crawl into the fireplace and climb up the chimney from the inside! In this dark, extremely cramped space (usually less than 1ft square), the boy had to inch his way up the chimney, stopping every few inches to brush down the ash inside the flue, while the master sweep down below had the cushy job of sweeping the fallen ash into sacks to be removed from the building. In the most extreme cases, boys were forced up chimney-flues which measured just NINE INCHES BY NINE INCHES! Measure that out with your ruler, and see if you could get your son, or your nephew, or grandson, to squirm through a hole that size.

Now imagine a chimney-shaft 15 feet long, and getting him to crawl up that all the way to the top, and then crawl all the way back down again. Then imagine crawling up the chimney…and losing your footing…and falling two storeys down in the dark, and breaking your ankle on the hearth below. Or even worse, imagine getting your knees jammed up against your chest inside the pitch-black chimney, and being completely and utterly wedged into the chimney-pipe. You would choke on the ash, or die of asphyxiation from the smoke or from compression-injuries from the tight squeeze.

This did happen, and frequently. The only ways to get a boy out were either to drag him down with a rope, or to smash the chimney-flue open with a sledgehammer and break him out – before he suffocated in his cramped position, or choked to death on the falling ash.

Most chimneys were not large. Usually, one chimney was shared by two or three fireplaces, stacked one above the other on different floors. So the bends, crooks and corners could very easily trap a child if he lost concentration, or panicked, and got himself wedged into the brickwork.

Young Master Oliver Twist was fortunate not to become a “climbing-boy”, as chimney-sweeps’ apprentices were called. The British Government was genuinely concerned about the plight, and deaths, of climbing-boys, but very little was ever done. The first act of parliament to try to regulate the chimney-sweeping trade was passed in 1788, but it had little effect.

As early as the 1790s, longer, mechanical chimney-sweeping brushes had been invented to try to replace climbing-boys, but owing to the vast array of flue-types, the brushes were not always practical. Another act regulating chimney-sweeping came out in 1834, and another in 1840! But still the practice of sending boys up chimneys continued.

In the 1800s, the modern chimney-brush (still used by sweeps today, with its big bushy brush-head and segmented, screw-on handles) was invented. But it was largely ignored by chimney-sweeps. The new brushes were expensive and burdensome to carry around. It was much easier to pay a poor, starving peasant family, or a pauper family living in the East End of London, five or ten shillings to take their children away and make them climbing-boys.

Armed with scrapers and brushes, and usually stripped naked, these children were shoved up chimneys to clean them from the inside out. And not just for cleaning chimneys, but also to put out chimney-fires! Imagine being a 10-year-old waif, crawling up a chimney with a flaming hot blaze inside it, with a wet towel to extinguish it!

Although he presented it in a comical fashion, mocking the chimney-sweep’s accent in his book, Dickens’ description of the working conditions of climbing-boys was incredibly accurate, and some master sweeps really did light fires in fireplaces with the climbing-boys still up the flues! Unsurprisingly, some kids were literally roasted alive.

It was not until 1875, and the death of a boy named George Brewster (aged 12), that sending boys up chimneys was finally outlawed in England! Poor George crawled up a chimney at the Fulbourn Hospital in Cambridgeshire, England.

Like so many hapless boys before him, he got hopelessly jammed in the flue. Sledgehammers and picks had to be brought out to smash the entire chimney down to get him out. He was dragged out alive, but died shortly after. The hospital staff were so appalled that they brought the incident to the attention of the police. George’s master sweep was sentenced to six months’ hard labour on a charge of manslaughter as a result.

George’s death in 1875 resulted in the passing of the Chimney Sweepers Act of 1875, which finally ended the practice of sending boys up chimneys.

Modern Chimney-Sweeping

After the 1875 abolition of child chimney-sweeps, sweeps had to rely on brushes to do the job for them. Or at least, they did in the United Kingdom. The practice of using climbing-boys continued in the United States, even after it had been abolished in England.

The standard chimney-sweeping brush has a round or square head, with stiff bristles made out of wire for added abrasive action. The brush is fed up (or down) the chimney, and additional extension-rods are screwed on to push it further along the flue, to scrape down the ash and soot.

These days, chimney-sweeps also use vacuum-cleaners and video-cameras to clean and inspect chimneys, but it remains a dirty, dusty job even today.

Like a Tinderbox

For centuries, the only way to light a fire was to do it the old-fashioned way – either through friction or concentrated sunlight. Eventually, mankind discovered that by striking certain materials together, sparks could be generated easily, and a fire could be started much more quickly.

To do this required three things: Flint, steel, and tinder.

Flint is a rock which can be easily chipped and fractured. When chipped to an edge, and struck or scraped down a piece of steel (such as a disc or a rod), sparks are generated by the impact of stone on steel. These sparks (tiny shavings of burning steel, in fact), landing on a piece of tinder, would start a fire. Usually, flint and steel were kept together, along with a small, tightly-sealed container which held the tinder. This became known as a ‘tinderbox’. Tinderboxes had to be tightly sealed to keep the tinder as dry as possible, so that it would catch fire instantly when sparks were showered upon it.

Even today, we have an expression about how something catches fire “like a tinderbox”, or how a potentially volatile situation is “like a tinderbox”, echoing the extremely combustible contents of these little metal boxes.

Striking a Light

For centuries, starting a fire was a fiddly, imprecise business. It was something which took skill and practice. Things improved when people realised that they could use flint and steel, but the best, truly idiot-proof way to light a fire came with striking matches.

Matches have a long history, and it goes all the way back to Ancient China. But modern striking matches, of the kind we purchase and use today, were invented in the 1800s. The first of this kind came out in 1816, and was invented by Frenchman Francois Derosne. Early matches were tipped with sulphur and white phosphorus.

These early French matches were fiddly to use and unpredictable. An improved version by Englishman John Walker, a chemist, was invented ten years later, in 1826, and is the basis of all matches we have today.

Walker’s early friction-matches were improved in 1829 by Scottish inventor Sir Isaac Holden (1807-1897), and were sold under the brand-name of ‘Lucifers’. Although an improvement, ‘Lucifer’ matches didn’t last, but the brand-name survived as a common nickname for matches throughout the 1800s and early 1900s. The war song ‘Pack up your Troubles’ immortalised them with the line:

“So long as you’ve a Lucifer to light your fag, smile boys, that’s the style”

By the 1830s, more reliable friction-matches had been invented. These matches were stored in smart silver or gold cases called vestas, which were commonly worn on pocketwatch-chains and carried around by gentlemen, since one never knew when one might need a light. These vesta-cases often had corrugated striking-plates on the sides or bottom, so that a match could be retrieved and lit from the same container.

An antique silver vesta case. Note the striking-ridges on the bottom

Matches continued to be phosphorus-tipped, strike-anywhere friction-matches until the last decades of the 1800s. Although convenient in that they could catch fire after being struck against any sufficiently rough surface (even the sole of your shoe!), this convenience came at the price of being a fire-hazard: they could be too easily ignited.

On top of that, white-phosphorus matches were extremely poisonous. The unfortunate ‘match-girls’ who made them, by dipping the matchsticks into phosphorus solution, developed a crippling condition called ‘phossy jaw’. In essence, the phosphorus fumes seeped into the body and rotted out the jaw-bone, resulting in bone-infections, gum-infections, lost teeth…eugh.

This was stopped in the later 1800s, when poisonous white phosphorus was gradually replaced with safer red phosphorus, which is still used today.

These new matches were less poisonous, and also much safer, because the two halves of the chemistry were now kept apart: the match-head contained the sulphur and an oxidising compound, while the red-phosphorus striking-strip was painted onto the sides of the new cardboard matchboxes. Behold the modern safety-match!

The safety-match which we know today works because when you strike the match against the box, friction heats a trace of the red phosphorus on the striking-strip, which flares up and ignites the match-head. With the two components of a burning match now separated from each other, it is impossible for a safety-match to be lit purely by being struck against any old abrasive surface. This made them far safer to handle and store than traditional strike-anywhere matches.

Mankind Roasting on an Open Fire

For centuries, heating, lighting and cooking were done with an open flame, using candles, lamps and fireplaces. The Industrial Revolution of the 1700s saw the first practical iron stoves being built in Europe. Made of cast iron, these stoves allowed people, for the first time, to do far more of their cooking at home.

Previously, cooking on an open fire was fiddly and tricky – you were limited by what you could hang over the flames or sit on the hearth. The first stoves allowed mankind to fry, bake, steam, boil and roast a much greater variety of foods than a simple open fire would have permitted. This greater control of fire vastly improved home comfort.

Prior to the invention of the cast iron range stove, baking was a specialty art. The only people who could bake were the people who had ovens. And ovens were huge brick-and-stone structures which were expensive to build and took up a lot of precious space. Most people didn’t have them. To bake your pies, cakes and loaves, you had to take them to the village bake-house to be baked.

With the stove, it was now possible to bake at home! And with a much better fuel, too.

It was at this time that people started switching from wood to coal as a fuel-source. Coal had advantages, but also disadvantages. Coal burns hotter than wood, and so produces much more heat for the same amount of fuel. The problem is that coal produces nasty black smoke! Eugh!

Wood-smoke is lovely. Everyone loves wood-smoke. It smells wonderful. People have smoked meat, cheese, fish and all other sorts of things in wood-smoke for centuries. It preserves the food and gives it a lovely flavour! Yum! But mixing coal-smoke with your food was apt to put you off your appetite, and to prevent this, coal-burning stoves and fireplaces did everything to channel the smoke away from the rest of the house.

Fires in the 21st Century

In the Developed World, the wood- or coal-fueled fire is no longer the primary source of heat or light. Most of us cook on gas or electric stoves, and heat our homes with heaters, central heating or split-system air-conditioners. But in other places around the world, fires continue to burn bright. And what should you do if you want to get a fire going yourself?

Using your Fireplace

Perhaps you live in an older house with a fireplace and you would like to start using it to warm the house in winter? What to do, what to do, what to do??

The first thing to do is to ensure that your fireplace is a working fireplace. By this, I mean that all the fittings are functional and undamaged. The chimney should be clear and undamaged, and the damper should open and close smoothly. If you are unsure about the condition of your chimney, then you should have it checked by a professional chimney-sweep. Or you can do it yourself – all you need is a ladder, a flue-brush (and extension-rods), a few drop-sheets and a vacuum cleaner (or a shovel and bucket).

Whenever a chimney is swept, you’re scraping out all the soot and tarry creosote which has caked onto the inside of the chimney.

Scraping this crap out of your chimney-pipe ensures that the air moves smoothly up the flue and that the smoke has an unimpeded passage to the outside world.

To prepare the fireplace, you need to ensure that you have all the right bits and pieces. The necessary bits and pieces are listed and illustrated earlier on in this posting.

Lighting a Fire…

There are a dozen methods for building and lighting a fire. Here are just two methods, and the bare essentials.

To light a fire, you will need a source of ignition – matches, a cigarette lighter, or flint and steel if you want to do it the old-school way.

You will also need tinder. Tinder is anything small, dry and easily lit. Grass. Straw. Shredded, scrunched or twisted paper. Old cloth. Tinder goes in first, at the bottom of the fireplace grate.

On top of the tinder, you set up your kindling. Kindling is small, dry pieces of wood – usually old branches, or larger pieces of wood split into smaller ones. Kindling should be small enough that you can grab a whole bundle of it in one hand. If you can’t, it’s probably too big.

Light the tinder and wait until the kindling is going. Once it is, you can lay on your pieces of fuel-wood. Start with smaller pieces and work your way up to progressively larger pieces.

Waiting for the kindling to light before going further is important. It allows the fire to get a foothold. But it also allows your chimney a chance to warm up. You can’t light a fire in a cold fireplace (trust me, I’ve tried. It doesn’t work). Letting the kindling burn for a bit sends hot air up the chimney. This drives out or warms up any cold air in the chimney, and establishes an updraft – a current of air that draws more air into the fireplace below, which stimulates the fire and encourages it to burn more intensely.

With this going, add on your fuel-wood in progressively larger pieces and logs. You have a fire!

As always, keep an eye on your fire. And if you’re not going to, then make sure that the safety-screen is across the fireplace to prevent accidents – Rolling logs do happen, and you don’t want to come back to your living-room to find one burning a hole in your carpet. You might want to keep a small bucket of water or a fire-extinguisher nearby, in case the unforeseen should occur.

Fire-Building Methods

The two most common fire-building methods are the Upside Down Fire, and the Tepee Fire.

The Tepee Fire works on the age-old rule that fire always burns UPWARDS, so any extra fuel should be placed above and outwards from the fire’s point of origin. You put your tinder in a little pile in the middle of the fireplace, then lean kindling sticks against it, like an American Indian tent, or ‘tepee’. Then lay fuel-wood around it in the same manner, leaving a little door open at one side, so that you can stick a match in to light the tinder.

The other fire-building method which has gained a lot of popularity is the Upside Down Fire.

While the Tepee Fire works with almost any size of wood, the Upside Down Fire works best with smaller, thinner pieces. It’s built as follows:

Get your fuel-logs and stack them in a criss-cross pattern, building up a tower of wood. At the top, build your fire-tepee with tinder and kindling, and a small amount of fuelwood. Then light the fire at the top.

The reason it’s called an UPSIDE DOWN fire should now be apparent – it goes AGAINST the rule that fire burns from the bottom up. The Upside Down Fire burns DOWNWARDS through the tower of fuel-wood. As it does so, any unburned portions of the tower collapse inwards, further fueling the fire, until it reaches the very bottom and burns out. Upside Down fires are meant to be maintenance-free – build it, light it, forget about it. Ideal for camping. Or lazy people.

Both methods work. It’s just a matter of which one is best for you in your situation.

Neither Rain, nor Snow, nor Sleet, nor Hail: A Compact History of the Components of Mail

 

These days, more people send emails than letters. They use the telephone more than they send telegrams. And yet, in this day and age of frantic internet buying, with sites like eBay and Gumtree, and countless other online businesses offering all kinds of goodies with which to suck the money out of our wallets, mail delivery is just as important now as it has ever been.

Postal systems have been around since the dawn of writing, and to cover the development of a mail-delivery system would take an entire book…which I’m not going to write. Instead, this posting will look at the history of the various aspects which make up the modern postal system.

Why is it called a “Postal System”?

We all get mail. We all send, deliver and receive mail. But people also tend to call it ‘post’. There’s the Royal Mail in England, Australia Post in Australia, the United States Postal Service in the U.S.A. Why does it switch between ‘mail’ and ‘post’?

‘Mail’ is the cargo which a postal system transports and delivers. Letters, postcards, parcels, packets, boxes, crates and so-forth. The system which delivers this cargo is the ‘postal system’. But why is it called a postal system?

The very word comes from the earliest days of mail delivery. Back in the 1500s, when Henry VIII developed the Royal Mail in England, mail-couriers or despatch-riders rode on horseback between mail-posts set into the roadside from town to town. To send something by the postal service was literally to meet the post-rider…at the post, the wooden stake in the road…and give him the letter which you wanted delivered. These days, we might be familiar with the position of “Postmaster General“. This came from the original Tudor office of ‘Master of the Posts’, the man who was in charge of ensuring that the post-officers remained…at their posts!…and delivered the mail in a safe and efficient manner.

Mail Delivery

Ever since the first mail-services were created, delivery was extremely slow for an extremely long time. A letter posted in London could take days to reach Edinburgh, or Paris, or Berlin. A letter posted in New York could take weeks to reach San Francisco. And a letter written on one side of the world to be sent to the other, could take months to get there, often relying on trade or naval ships to transport it in their cargo-holds, if they happened to be going in that particular direction.

One of the first attempts at prioritising the delivery of mail was made in the 1700s. For a roughly seventy-year period, from the early 1780s until the late 1850s, the British Royal Mail relied on a fleet of mail coaches to speed deliveries of mail throughout the United Kingdom.

Mail delivery had previously been very slow, and dangerous! Post-riders transported not just mail, but also parcels and packages, which might contain valuable or expensive items. It wasn’t uncommon for lone post-riders to be set upon by highwaymen who would relieve them of their cargoes, steal their valuables and even kill them!

The coordinated system of mail-coaches changed this. Delivering mail by coach was not only faster, but also safer. The mail-coach always had at least four men riding on it: a driver, his assistant, and two armed post-guards, who rode on the back running-board. This way, anyone attempting to rob the coach would have to deal with four armed men first!


An actual British mail-coach from the early 1800s. This one ran the route between London and York. Note the huge storage-trunks over the axles for carrying mail

The mail coach was also used as a sort of long-haul public transport system. Passengers could pay a fee, and ride along inside the mail-coach during its journeys, to get to their destinations much faster than they ordinarily might. Also, since the mail-coach was working for the Royal Mail, a government agency, it was illegal for anyone to stop a coach. Toll-men, highwaymen, nobody could halt a mail-coach, and they didn’t stop for anything less than a broken axle!

Steam-powered mail took over from horse-drawn mail in the 1850s. With the improvement and expansion of railway networks around the United Kingdom, Europe, Canada and America, mail-coach services were eventually phased out, as the postal services realised that these newfangled wood- (later, coal-) fired, steam-powered locomotives could speed mail to every part of a given country. Soon, post-offices and railway stations merged, so that post could be transported by rail and steam as far and as fast as was necessary and possible.

Special mail-trains were used in most cases, and their only task was the delivery of mail. To save time, letters and parcels were often sorted en-route by the mail-handlers, so that when the train reached its next drop-off point, or station, the necessary mail-sacks could just be dumped off on the platform, without time wasted in needless waiting and sorting.

To save even MORE time and to further improve the efficiency of mail-delivery, rail-mail was collected and dropped off even when the train was on the move! Specially-designed mail-cranes were built next to major railroad-routes:

Different types of mail-cranes or mail-hooks were used. Some simply held up sacks of outgoing mail for the train to snatch off it as it rocketed past. More complex ones would collect incoming mail, and send off outgoing mail at the same time.

As sacks of mail were prepared onboard trains, they were hung on hooks outside the mail-carriage. At the same time, sacks of mail waiting to be picked up were hung onto the arm of the mail-crane. As the train whizzed past the crane, the crane-arm whipped the sorted mail-sack off the side of the train. At the same time, the arm swung around, and a second hook or arm on the railroad carriage yanked the outgoing mail-sack off the crane, throwing it into the mail-carriage! Later on, the local postman would show up and pick up the dropped-off mail, and possibly hang another sack of outgoing mail onto the crane, to be collected by the next train that came hurtling by.

This silent film from 1903 shows a mail-crane in action:

https://www.youtube.com/watch?v=2lVSC4jt2R8

Steam-power also changed the nature of international mail-delivery. With faster, steam-powered ocean-ships, mail delivery was cut from months to weeks, or even days! In the United Kingdom, ships with the prefix “R.M.S.” (“Royal Mail Ship/Steamer”) were officially licensed to transport shipments of British mail. As on trains, mail-clerks onboard ships would sort the mail en-route to their port of destination.

With all these innovations, it’s not surprising that the Victorians had one of the most efficient postal systems in the world. Up to twelve deliveries a day! No worries about not getting that contest-entry form in on time, huh?

In the interwar period of the 1920s and 30s, the first experiments were made in air-delivered mail. Not having to worry about signals and tracks and waves and oceans, an airplane could fly mail from city to city, dropping it by parachute, and then landing to pick up more mail to speed on to its next destination.

Envelopes

For much of history, whenever you posted a letter, not only did you have to write it by hand, you also had to produce your own envelope by hand! Most people would write out their letter, fold it up into an envelope-shaped packet, and seal it with wax, so that the letter and envelope were one and the same, which saved time.

It wasn’t until the invention of the first purpose-made envelope-folding machine in 1845, that envelopes could be purchased separately from stationer’s shops.

The classic envelope was cut and folded so that, when it was assembled, it created a neat rectangular or square shape.

But have you ever wondered why envelopes have four, triangular flaps meeting in the middle?

Although you could glue the flaps down with regular paper adhesive, envelopes were originally folded and set in this manner so that a single wax seal, placed in the center of the envelope, was all that was needed to hold the entire packet neatly closed.

Most of us don’t seal our envelopes with wax anymore, and generally rely on the glue that comes with the envelope to do that for us – we simply lick the glue to moisten it and then smash the thing shut. Nevertheless, the triangular X-form on the back of envelopes has remained to the present day.

Stamps

It used to be that when letters were sent by post, it was the duty of the recipient to pay for the letter’s delivery. This was seen as inefficient, difficult to enforce, and frankly – rude. Why should YOU have to pay for a letter which you might not have been expecting, or which you wouldn’t want to receive, anyway?

This dissatisfaction with the payment of mail-delivery charges led to widespread corruption, abuse, frustration and distrust of the postal system. To combat these issues, and to ensure payment for postage, the postage-stamp was introduced in England in 1840. With the new ‘Penny Black’, the first-ever postage-stamp, the sender purchased the stamp along with his envelopes, and pre-paid for the delivery, which cost…one penny!

With payment taken care of before the letter was even picked up by the mailman, there were far fewer complaints from customers about who had to pay for postage, how much and when.

Mail Boxes

With their twelve-a-day system, you can bet that it was the Victorians who invented the concept of the mail-box! There would be no other way to organise the millions of letters, envelopes, cards and parcels that sped around the U.K. at the time!

Mail-slots for incoming mail came about in the 18th century in Paris, but it wasn’t until the 1800s in Britain that the idea of a mail-slot, or a mailbox for each residence or business really took off. As part of the reforms of the postal-service (which also saw the introduction of the penny-post), Britons were encouraged to have a mail-drop point somewhere on their residence for the convenience of themselves, and the postman! In more built-up areas, a simple letter-slot, sometimes with a basket hanging on the side of the door, was sufficient. In more suburban parts of town, actual kerbside mail-boxes were installed.

Pillar-boxes, or public post-boxes for the depositing of outgoing mail came about in the Georgian era. The oldest one thought to exist dates back to 1809 in England.

In the United States, mail-boxes became popular in the 1880s, when the U.S. Post Office encouraged people to have individual mail-boxes outside their houses for the speedy delivery and pick-up of mail. Instead of large, bulky public boxes that might take up space on the street, residential mailboxes in the ’States were used for both incoming and outgoing mail. Raising the red flag on the mailbox told the postman that there was outgoing mail to be collected.

The Mail Always Gets Through…

Mail has existed for thousands of years. But the icons of mail-delivery such as stamps, envelopes, mailboxes and dedicated postal-delivery men are all relatively recent developments. Where once mail took weeks and months to get anywhere (and sometimes still does!), technological advancements have meant that in the 21st century, mail is delivered faster and with less hassle. All the more important, given the heavy reliance that all of us still place on the postal service.

The Idiot and The Odyssey: The Complete Restoration of my Grandmother’s Singer Sewing Machine

 

In looking back over my blog, I realise that it’s been over a year since I started the seemingly ludicrous mission of restoring my grandmother’s 1950 Singer 99k sewing-machine. I am proud to say that as of the date of this posting, the restoration is complete!

Gran was born on the 7th of May, 1914 in Singapore. She died on the 28th of November, 2011, in Melbourne, Australia. Weet-Bix are suspected to have played a role in her demise. She was 97.

Granny was a dressmaker, and from the early 1950s until the early 1980s, she was in this trade professionally. When she retired, she moved to Australia, and her Singer sewing machine came with her. A battered, but trusty, Singer 99k knee-lever electric sewing machine. This machine was gran’s life, and she used it in preference to any other machine that ever was, or might have become, available to her.

When gran moved to the nursing-home in the early 2000s, after worsening Alzheimer’s Disease, her most treasured possession, her Singer, was placed in the basement, where for the next eight or so years it sat in a corner at the bottom of a bookcase, gathering dust.

When gran died, I hauled the machine out of the basement and began a steady restoration process. I don’t know what possessed me to do this, other than the fact that this machine was gran’s livelihood for most of her adult life.

The majority of what happened next is covered in my earlier article. This posting is more of an addendum to what I’ve already written.

The Frankenstein Moment

MUAHAHAHAHAHAHAHAHA!!!

*Thunderclap!!*
*Flashlightning!*

…Ahem.

Actually getting the machine running and sewing for the first time really was an exhilarating experience. Second only to getting the machine-case off the base! It took a lot of oil and fiddling with a screwdriver, but I got it off eventually, and was very happy.

Getting the machine running was a considerable task. It was seized solid when I got the lid off the machine-base, and not a single thing apart from the presser-foot lever and the bobbin-winder worked. Everything else was jammed from a complete and total absence of lubrication. And it’s no exaggeration to say that it took me nearly a week to lubricate the entire machine to a level where it would run as well as it did when it was brand-new.

I must admit, it was rather fun. There is the incredible thrill of a challenge, combined with the later sense of accomplishment, when it came to getting that machine running again.

I had almost given up at one point, but perseverance was the key. It was a real joy to see it running at full speed again, for the first time in probably ten years, if not more.

Duhr…Now What?

It’s working! Oh my god it’s working! It runs, it stitches, it sews, it runs at every speed,  the light turns on, gets hot enough to fry breakfast on, and then turns off. Everything is excellent! But what do we do now, huh?

I really wasn’t sure. Like I said, I didn’t have any real reasons for wanting to bring this thing out of the basement other than to tinker around with it. But once I’d got it running, I started thinking about these other things that I could do. And that’s when the thought entered my head that I could bring the machine back to its former glory, by tracking down and purchasing all the necessary bits and pieces for it. I had no idea where on earth I would begin. But as luck would have it, I live very close to a large and very well-stocked flea-market. And it was from that market that I purchased nearly everything for this machine.

The Scavenger Hunt

I started with simple things, like needles and bobbins. These were pretty easy to find. And all the while, I was busy cleaning and fixing the machine. It was like a gunk-generator. Every time I thought it was clean, I’d find some other part of the machine that required my attention. Like under the bed. Or behind the balance-wheel, or inside the electric motor, or underneath the bobbin-case. On top of everything, the machine required constant lubrication! It drinks oil like Barney Gumble drinks Duff Beer.

The harder things which I had to track down were the sewing-machine accessories boxes, the attachments that went inside them, the accessories that went with them, and the green oil-can that went inside the machine-case. I had no idea what these things looked like, and it took a long time to track them down. I actually ended up buying multiple boxes of attachments and pouring them all out, and scrambling them around until I had assembled one FULL box of attachments from the dribs and drabs found in other boxes. Those dribs and drabs would be useful for spares later on.

One big problem with this machine was finding the original square steel bobbin-plate or ‘slide-plate’. The slide-plate was a protective metal plate that shielded the spinning bobbin-mechanism from dust and tangling threads. There wasn’t anywhere local that I could buy one, and waiting for one to show up at the flea-market would take years.

The only way I could get one was to buy a replacement online. You can buy ORIGINAL Singer plates online (there are people who sell them), but obviously stock is limited, and as a result, prices are much higher. I had serious doubts about paying that much. So instead, I went the reproduction route. With the help of a cousin, we bought a replacement plate from an eBay store based in the U.S.A., and had it shipped halfway around the world to…here.

Boy, that took a long time. I think it was something like a month or more of waiting.

Finding the oil-can for the sewing-machine was rather challenging. There are all kinds of Singer oil-cans and bottles, and I had no idea which one I would need to fit the slot inside the case-lid. All I knew from what I saw, was that it had to have a flange at the bottom, and it had to have a curved base. Out of sheer luck, I found the can which I needed at the flea-market, hidden in the pre-dawn mists, on a bookcase full of all kinds of other cans which were for sale. I paid $5 for it and walked off.

Sentimental Attachments

Finding all the attachments for the sewing-machine was another big challenge. No one box of parts which I bought ever had the full set. So I was forced to buy four or five boxes of parts, and slowly piece them together, to form one big box of attachments. In the end, I had enough bits and pieces around to create two complete boxes!

On top of all the usual steel attachments was the challenge of finding the zigzagger and buttonholer attachments. These old Singer sewing machines performed a very basic straight lockstitch. To allow them to make more complicated stitches like zigzags and buttonholes, as modern machines can, the manufacturers came up with all kinds of fascinating gizmos which you could bolt onto your machine.

Quality of Manufacture

One thing that I love about all these items is the quality of manufacture. The bobbins, the attachments, screws, plates…everything is made of solid steel, without exception. Nothing like that exists today. Today, bobbins are made of plastic, feet and other attachments are made of plastic, even the screws are made of plastic. One crack or a bit of warping renders them useless. The older steel parts are nigh indestructible.

It’s stiff? Oil it. It’s rusty? Sand it. It’s dull? Polish it.

With plastic parts…it’s cracked?…Uh…I dunno. Throw it out and buy another one?

Money wasted and thrown down the toilet.

These steel pieces will literally last forever. And their simple, no-nonsense construction means that they will always do the job that they were made for, without any compromise on quality. Back in the good old days, this was standard. These days, we have to pay extra for quality that should come as standard with the original product, but doesn’t. They literally don’t make ’em like they used to.

The Last Piece

By the start of 2013, I had finally gathered all of the main components of the sewing-machine. I had the needles, oil, feet-attachments, the two main mechanical attachments, instruction manuals and other dribs and drabs. However, one piece remained elusive. The bed-extension table.

The bed-extension table came with most Singer sewing machines and it was used to extend the bed of the machine, to give you a larger work-area. This had the advantage of stopping your sewing-piece from sliding off the end of the machine-bed, and pulling your carefully-pinned cloth out of alignment with the needle and presser-foot.

Sadly, they’re not easy to find. The bed-extension table is of very simple construction, and it wasn’t uncommon for them to be thrown out or lost due to their rather bland and simple appearance. Unless you knew what you were looking at, the extension-table looked like just another plank of wood.

I discovered one recently at an antiques shop, along with a box of other bits and pieces, and snapped it up then and there. The standard Singer bed-extension table measures 8.5 inches wide (the width of the machine-bed), and about eight inches long.

Finding that final, missing piece means that the machine is finally back in its original and complete condition, having been reunited with all the items that would’ve come with it when it was purchased brand-new from the shop.

Like New!

The pictures below show the machine looking as it would’ve done back in the 1950s, complete with the parts that would’ve come with it when purchased brand-new:

Extras such as zigzaggers and buttonholers were purchased separately, on an as-needed basis. But those photos illustrate what came with the machine when it was brought home for the first time.

This model, the Singer 99 series, was manufactured from the mid-1920s up until the late 1950s, and came as a handcrank machine, or as a knee-lever machine. Knee-lever machines started coming out in the 1930s, and both hand-crank and knee-lever models were produced side by side until the model ceased production ca. 1958.

The body of the Model 99 changed significantly in the later years of its production, but the machine as it appears here would’ve been identical to one from the 1920s, minus the motor and the knee-lever, and with a spoked, rather than solid, balance-wheel, and a crank-handle bolted to the side.

Built like a watch? More like a tank. The Model 66, the 99’s immediate predecessor, was highly popular, but extremely heavy and cumbersome.

The Singer 99 model was designed to be a 3/4 size “portable” machine, a step down from the full-size Singer 66 model, which came out in 1905. The 99 was designed to overcome the 66’s problems with regards to size and weight.

This advertisement from 1928 emphasizes the new machine’s portability! And with portability comes choice! You can now sew anywhere you want! Bedroom, living-room, parlour, guestroom, even outside if you wanted to. The one thing this advertisement does NOT publicise is the fact that this machine is DAMN HEAVY.

Keep in mind that the 99 was supposed to be a “portable” machine, a step down from the larger and highly popular 66 model. But despite the downsizing, the 99, complete with all its bits and pieces, still weighs in at 33.25lbs, or just over 15kg! I know this because I weighed it myself. Not so portable now, is it?
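For anyone who wants to check that figure, the pounds-to-kilograms conversion is simple arithmetic. A quick sketch (the constant is the standard definition of the avoirdupois pound):

```python
# Sanity-check of the quoted weight: avoirdupois pounds to kilograms.
LB_TO_KG = 0.45359237  # kilograms per pound, by definition

def lb_to_kg(pounds):
    return pounds * LB_TO_KG

print(round(lb_to_kg(33.25), 2))  # 15.08 -- "just over 15 kg", as quoted above
```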

Nevertheless, it’s a practical, popular, stylish and robust machine, well worth restoring and using.

A Story on Two Wheels: The History of the Bicycle

 

In the history of transport, few inventions have been more compact, innovative, liberating, practical and enjoyable than the bicycle. And yet, the bicycle as we know it today is only just over 100 years old. What is the story behind this invention? Why was it created? And how did it reach the design which we know so well today? Let’s take a ride…

The World Before the Bicycle

Before bicycles came onto the scene with their dingling bells and rattling drive-chains, transport was slow, crowded, or dependent on something other than yourself. You had ships, boats, carriages, horseback, or your own two feet.

When it came to pre-bicycle travel, any journey offered three possible qualities:

Fast, Private, Comfortable.

You may pick only two.

If it was fast and comfortable, such as a railroad-train, you were resigned to sharing the carriage, and even the compartment, with others.

If it was private and comfortable, such as a carriage, then it certainly wasn’t fast. The average speed of horse-drawn transport in the 19th century was about seven to ten miles an hour at best. In the same bag is walking. Private and relatively comfortable, but don’t expect to get anywhere in a hurry.

If it was fast and private, such as riding on horseback, alone, then it certainly wasn’t going to be very comfortable, being jolted around in a saddle for hours on end.

What was needed was a fast, relatively comfortable, individual mode of transport, one that relied purely on the rider for propulsion, and which didn’t need to be fed, fired, stabled, stoked, sailed, steamed or otherwise externally operated.

With the internal combustion-engine still a dream, and coal-fired steam-carriages being large, loud, slow and unpredictable (to say nothing of dangerous), there was a serious market for a convenient, fast, practical machine which a rider could use for individual transport: The Bicycle.

The First Bicycles

The first serious attempt at a bicycle-like machine was the German-made ‘hobby-horse’ or ‘dandy-horse‘ machine of the 1810s.

The ‘Dandy Horse’ bicycle was a fascinating…um…experiment. It was hardly what you could call a bicycle, and it was never utilised as a serious mode of transport. It was seen more as a toy, for the use and amusement of the ‘dandy’, the well-dressed, leisured, upper-class gentleman of Regency-era Europe. As you can see, the Dandy Horse has no seat to speak of, no driving-mechanism, no pedals, not even a real handlebar! Steering and propulsion are rudimentary at best, and without any form of suspension, riding one of these on the rough, dirt roads of 1810s Europe would’ve been hard on the back and spine!

You didn’t so much ‘drive’ or ‘operate’ the dandy-horse as you ‘glided’ on it, similar to a skateboard. You kicked it along the ground with your feet to build up speed and then coasted along until the momentum gave out. An amusing gimmick for a Regency garden-party, but hardly a practical form of transport!

During this time, the word ‘bicycle’ had not even been coined, and wouldn’t be for several decades. Human-powered, wheeled land-machines were called ‘Velocipedes’, from the Latin words for ‘Fast’ (as in ‘Velocity’), and ‘Foot’ (as in ‘pedestrian’). And as the 1800s progressed, there was a growing range of fantastical and ridiculous ‘velocipede’ machines with which to delight the population of Europe.

The next advancement in bicycle technology came from France, in the form of a pedal-driven contraption known as the…um…‘velocipede’.

The pedal velocipede is generally credited to the Parisian coach-builder Pierre Michaux and his workshop, and came out in the early 1860s, although it wasn’t a great departure from what had existed before.

The French ‘velocipede’ differed from the earlier ‘dandy-horse’ in only a couple of ways: the front wheel now had pedals, and there was a proper seat or saddle, adjustable to the height of the rider, along with proper handlebars and steering. But other than these minor additions, the French velocipede was not much of an improvement.

A French ‘velocipede’ of the Michaux type. Note the presence of the handlebars and steerable front wheel, and the centrally-mounted saddle

The Ordinary Bicycle came next. Invented in the late 1860s, the Ordinary was the first machine to be specifically called a ‘bicycle’, from the Latin ‘bi’, meaning ‘two’, and the Greek ‘kyklos’, meaning ‘wheel’ or ‘circle’. The Ordinary also introduced something which has become commonplace among all bicycles to this day: Wire-spoked wheels!

The Ordinary was variously called a High Bicycle or, most famously of all, a Penny Farthing, after two British coins in circulation at the time: the large penny and the tiny farthing, which mirrored its huge front wheel and small rear wheel.

The Ordinary was the first bicycle for which there was any serious commercial success, and they became popular for personal transport, as well as being used as racing-machines!

Despite its relative popularity, the Ordinary had some serious shortcomings: There were no brakes, there was no suspension, and they were incredibly dangerous to ride! The immense front wheel could tower up to six feet in the air, which made mounting and riding these machines quite a feat of acrobatics in itself! Accidents could cause serious injury, and stopping, starting, mounting and dismounting were all big problems. Something better had to be devised!

The Safety Bicycle

The Ordinary or ‘Penny Farthing‘ was one of the first practical bicycle designs, but its many shortcomings and dangers meant that something better had to be found. Enter the ‘Safety Bicycle’.

The ‘Safety Bicycle’ is the direct ancestor to all bicycles manufactured today.

The prototype ‘safety bicycle’ came out in the late 1870s, in response to the public dissatisfaction with the fast, but dangerously uncontrollable Penny Farthing.

Henry John Lawson (1852-1925) developed the first such machine in 1876. Lawson, the son of a metalworker, was used to building things, and loved tinkering around with machines.

Lawson’s machine differed from others in that the rider sat on a saddle on a metal frame. At each end of the frame were spoked wheels of equal size, with a handlebar and steering-arrangement over the front wheel. The rear wheel was powered by the use of a simple crank-and-treadle-mechanism, similar to that used on old treadle-powered sewing-machines, a technology familiar to many people at the time.

The great benefit of Lawson’s bicycle was that the front wheel was used solely for steering, and the rear wheel was used solely for propulsion, and the rider’s legs were kept well away from both of them! On top of that, the wheels were of such a size that the rider’s feet could easily reach the ground, should it be necessary to stop, or dismount the machine in an emergency. Lawson was certainly onto something!

Lawson updated his machine in 1879, with a more reliable pedal-and-chain driving-mechanism, but sadly, although innovative, his bicycle failed to catch on. All the extra parts and the radical new design meant it was hard to produce and too costly to be sold to the general public.

Although Lawson’s machine was a commercial failure, his invention spurred on the development of this new contraption: The Safety Bicycle! Building on what Lawson had already established, over the next few years inventors and tinkerers all over the world started trying to produce a bicycle that would satisfy the needs of everyone. It had to be practical, fast, easy to use, safe to ride, mount and dismount, it had to stop easily, start easily, and be easily controlled.

All manner of machines came out of the workshops of the world, but in 1885, one man made something that would blast all the others off the road.

His name was John Kemp Starley.

Starley (1854-1901) was the man who invented the modern bicycle as we know it today. And every single one that we see on the road today is descended from his machine.

Building on the ideas of Mr. Lawson, Starley rolled out his appropriately-named ‘Starley Rover’ safety bicycle in 1885.

The Starley Rover was revolutionary. Like the Lawson machine, it had equal-sized (or near-equal) spoked wheels, a diamond frame made of hollow steel, a seat over the back wheel, handles over the front wheel, and a pedal-powered chain-drive in the middle, linking the pedal-wheel and the rear wheel with a long drive-chain.

By the late 1880s, the modern bicycle had arrived. It was Starley who had brought it, and he cycled off into the history books on one of these:

This model from the late 1880s has everything that a modern bicycle has, apart from a kick-stand. And this is the machine that has revolutionised the world of transport ever since!

The ‘Rover’ was so much better than everything that had come before it. It was easy to ride, easy to mount, easy to dismount. It was close to the ground, but did not compromise on speed with smaller wheels, because of the 1:2 ratio between the pedal-wheel and the rear wheel. You could reach tremendous speeds without great exertion, and you could stop just as easily!
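That gearing advantage is easy to quantify. As a rough sketch (the wheel size, cadence and ratio below are illustrative assumptions, not the Rover’s actual specifications):

```python
import math

def road_speed_kmh(cadence_rpm, wheel_diameter_m, gear_ratio):
    """Road speed when each pedal revolution turns the rear wheel
    `gear_ratio` times, and each wheel turn covers one circumference."""
    wheel_rpm = cadence_rpm * gear_ratio
    metres_per_minute = wheel_rpm * math.pi * wheel_diameter_m
    return metres_per_minute * 60 / 1000

# A 28-inch (0.711 m) wheel geared 2:1, at an easy 60 pedal-turns a minute:
print(round(road_speed_kmh(60, 0.711, 2.0), 1))  # 16.1 (km/h), about 10 mph
```

Gearing up the chain-drive, rather than enlarging the wheel, is what let the safety bicycle stay low to the ground without giving up speed.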

The Bicycle Boom!

At last! A functional, fun, fast machine. Something you could ride that was safe, quick, light, portable, quiet, comfortable, practical, and which could get you almost anywhere you wanted to go!

With machines like the Rover, and the ones which came after it, all other bicycle-designs were considered obsolete! The Rover had shown the way, and others would follow.

With the success of this newly-designed bicycle came the cycling boom of the 1890s! For the first time in history, you didn’t need a horse to get anywhere! You needn’t spoil your best shoes in the mud! You didn’t have to worry about smoke and steam and soot! Just roll your bicycle onto the road, hop on it, kick off, and down the road you went. What a dream!

With a truly practical design, the potential of the bicycle was at last fully realised. The ordinary man or woman on the street had a machine which they could ride anywhere! That said, most bicycles in the late Victorian era were expensive toys for the wealthy. But nonetheless, they were used for everything from cycling through the park, to running errands around town, cycling to and from work, visiting friends and relations across town, and taking in the sights! What a wonderful invention!

The ‘Gay Nineties‘, as this period of history is fondly called, saw the first big boom of the bicycle. Or a medium-sized one, at any rate. There were still a few problems: Bicycles were still rather expensive. And it was considered scandalous for a woman to ride a bicycle! Women opened their legs for one thing, and one thing only. How dare they sit, mounted…on a bicycle! Lord knows what other things they might be mounting next!

Women and Bicycles

A woman on a bicycle? Who’da thunk it?

The mere idea of this radical collaboration sent Victorian men into a tizz! Famously straitlaced and buttoned-up, Victorian morality dictated that a woman’s legs remained covered and obscured at all times. In fact, legs of ANY kind had to be covered at all times. Some people even draped floor-length covers over their pianos to prevent offense to visitors!

Women were generally expected to ride a horse side-saddle. But it was impossible to do this on a bicycle, since both legs were required to drive the pedals. And it was also impossible to ride a bicycle with the huge, floor-sweeping dresses and skirts of the era.  Something had to be done!

Fortunately, tailors came up with a solution!

The second half of the 1800s saw the arrival of the Rational Dress Movement, also known as the Victorian Dress Reform. Aimed mostly at women, this movement said that it was impractical for women to wear the clothes that they did, and still be expected to do all their wifely and womanly duties. The clothes were too bulky, too restricting and far too uncomfortable! Especially for such activities as sports, riding, walking and bicycling! Something had to be done! And fortunately, something was.

It came about in the 1850s, when Elizabeth Smith Miller of New York State, invented a sort of pair of baggy trousers for women. When their legs were together, they looked like a full skirt, but they parted company quite easily, for greater comfort and freedom of movement.

Women’s Rights advocate Amelia Bloomer, a strong supporter of more sensible women’s attire, liked the idea of these newfangled trousers, and they were eventually named after her: ‘Bloomers‘.

With bloomers, a woman could ride a bicycle safely and comfortably. But even if she didn’t have bloomers, a woman could still ride a bicycle in a skirt. She simply had to buy a woman’s bicycle!

Instead of a regular bicycle with a diamond-shaped frame, a woman could buy a step-through bicycle, like this one:

A step-through was identical to a regular bicycle in every way, except one. Figured it out yet?

Without a central bar between the handles and the seat, it was possible for a woman wearing a skirt to ‘step through’ the frame, so that she could get her feet either side of the pedals. Then, she simply hopped onto the seat, put her feet onto the pedals, and cycled away!

If that wasn’t handy enough, a woman could also purchase bicycle-clips, or ‘skirt-lifters’, which clipped onto the waist of her dress or skirt, and trailed down its sides. At the hem, they clipped onto the fabric to keep the skirt or dress off the road, and away from the pedals, where the fabric might get caught and tangled in the drive-chain!

The Safety Bicycle was ideal for women. Even with bloomers, bicycle-clips or skirt-lifters, it was almost impossible for a lady dressed in Victorian or Edwardian garb to operate a Penny Farthing! The bikes were too big, too cumbersome, far too unstable, and generally unladylike to ride!

With the safety bicycle, a woman was able to ride with much greater comfort and security. The risk of accidents was smaller, they were easier to mount and dismount, and much easier to operate and control.

The Social Impact of the Bicycle

From the mid-1880s onwards, the bicycle became more and more popular, as safer, easier-to-ride models were invented, produced, and put on sale to the general public around the world. Bicycles caught on quickly, and were popular then, as they are now, for the very same reasons.

They provided free, motorless, quiet, smooth, quick transport, without the need of a horse. They were relatively easy to ride and control, and with a little practice, you could use one to get almost anywhere, and so much faster than walking!

A bicycle also had load-bearing capabilities, and could be used to transport and carry all kinds of things, provided that they could either fit in the front basket, or were strapped securely enough to the rear luggage-rack. Some bicycles even had side-satchels which hung over the back wheel for even greater storage.

Bicycles allowed people who previously couldn’t travel very far the chance to explore much further afield. Women and children were no longer restricted to riding in carriages, on railways, or on horseback – they could climb onto a bicycle and ride around the village, go to the park, cycle through town, or ride along the canal-paths. They did not need men, or older people around, to operate a horse and carriage, a railroad train, or a steam-powered canal-boat. They simply needed two functional legs, and a decent sense of balance.

This ease of use and versatility allowed the bicycle to be used for almost anything. It was a commuting vehicle for office-workers and labourers. It was a cargo vehicle for anything from the weekly trip to the high street, to a day on the town. With the spread of bicycles came the rise of home-delivery and advertising. Bicycles could now be used by butcher’s boys and apprentice bakers, shop-boys and telegraph-boys, to provide swift and effective home-delivery of everything from bread and meat, to parcels, mail, telegrams, and any other items small enough to be delivered safely on a bicycle.

Their open, light frames meant that it was possible to hang signs from the horizontal connecting-bars between the seat and the handlebars. Local businesses could paint advertisements on these signs, or on the mudguards of their store-owned bicycles. At the same time, a business could deliver merchandise or produce, and tell strangers where these things could be purchased.

Cycling clubs became incredibly popular. Friends and relations would gather and ride around the countryside for a day’s outing. They might go picnicking, or they might ride from town to town, visiting new shops, restaurants and public houses. This kind of freedom of movement had never been possible before. Not with a horse, that you had to feed and rest and saddle, not with a carriage which was slow and cumbersome. Not even with a steam locomotive and carriages, which was restricted to the railway lines. Before the rise of the automobile, only a bicycle allowed this level of freedom. No waiting, no fuss. Jump on, kick off, and pedal down the road.

Bicycles in Literature

The impact of the bicycle can be seen by its inclusion in literature of the late Victorian and Edwardian age. In ‘The Adventure of the Solitary Cyclist‘, by Sir Arthur Conan Doyle, Sherlock Holmes’ client is a piano-teacher who uses her bicycle as her main mode of transport, and who is shadowed everywhere by another cyclist.

In the mid-1890s in Australia, Andrew Barton ‘Banjo‘ Paterson, wrote the famous comic poem, “Mulga Bill’s Bicycle“. The cocky Mulga Bill declares that he can control absolutely any form of transport, even this newfangled ‘safety bicycle machine’. He purchases it from the local store and cycles off down the street with it, before losing control of the machine and spectacularly crashing it into a pond, deciding thereafter to stick to riding a horse!

The Bicycle in Wartime

During times of war, the bicycle proved to be a very popular mode of transport. Driving off-road was almost impossible, and at any rate, petrol was often in short supply and severely rationed. On the home-front and on the battlefront, civilians and soldiers often left motor-vehicles behind and fell back to the old-fashioned, reliable bicycle to get themselves around. During the First World War, British soldiers even formed bicycle infantry units! Bicycles didn’t need to be fed like horses, they were quieter, and they could get troops moving a lot faster!

During the Second World War, bicycles were used extensively by both sides. The Allies developed folding bicycles which soldiers could strap to their backs and jump out of airplanes with. Once they landed, they threw away their parachutes, unfolded their bicycles, braced them up, and cycled off to their rendezvous points.

The soldiers of the Japanese Imperial Army, perhaps even to mock the British and their severe lack of preparation, invaded the Malay Peninsula and Singapore…on bicycles! It was impossible to drive tanks through the thick Asian jungles, but a bicycle on a dirt track could go anywhere!

As well as being used for military transport, bicycles were also highly popular on the home front. With petrol-rationing strictly enforced, driving became almost impossible. Unless you were in a reserved occupation (you had a job which was essential to the war-effort), or had some other important status which allowed you a larger petrol-ration, chances were that your car was going to be up on blocks for the duration of the war.

Bicycles needed no petrol. They needed only whatever strength you could muster from your new diet of rationed food. At any rate, it would be easier to cycle through the bomb-shattered streets of London, Coventry, Singapore and Shanghai, than to drive a car! Most roads were so covered in craters, downed powerlines or the rubble from collapsed buildings that even if your car had fuel, it wouldn’t be able to make it down the road for all the obstructions!

Bells and Whistles

As bicycles became more and more popular during the Edwardian era, more and more features were added to them. One of the most famous additions is the bicycle-bell!

The idea of some variety of warning-device on a bicycle goes back to the 1870s, when the safety bicycle was in its infancy. The modern, thumb-operated bicycle-bell, which you clamp onto the handlebars of your machine, was invented in 1877 by John Richard Dedicoat, an inventor and eventual bicycle-manufacturer in his own right.

The bicycle bell works on a very simple spring-operated lever system. Pressing the button on the side of the bell rotates gears inside, which spin a pair of discs that jangle and ring as they move, a bit like a tiny pair of cymbals. This dingling noise is amplified by the bell-housing. Then, the spring simply pushes the bell-button back, ready for the next ring.

Dedicoat also invented a sort of spring-loaded step for helping people mount their bicycles. When Penny Farthings were still the rage, the step was designed to give the rider a boost into his seat. It worked rather well, but if the spring was more powerful than the rider was heavy, it might accidentally shoot him over the handlebars, instead of giving him a helping leg up onto his bicycle-seat!

The popularity of the safety bicycle meant that it was ridden at all times of the day and night! To make it safer to ride at night, bicycle lamps were clipped to the front shaft, underneath the handlebars.

As with automobiles of the Edwardian era, bicycle headlamps were gas-fired calcium-carbide acetylene lamps. The reaction of water and calcium-carbide produced a flammable gas which could be ignited, and produced a bright, sustained glow. These lamps and their reaction-chambers were small enough to clamp onto the handlebars of early safety bicycles.

Calcium-carbide, whether in pellets, chunks, or even powdered form, was stored in the lower reservoir of a two-chamber reaction-canister. Water was poured into the upper chamber, and a valve between the two chambers allowed water to drip from the top canister onto the calcium-carbide stored below. The reaction produced acetylene gas, which escaped through a valve into the headlamp, where it could be ignited, producing light.

Increasing or decreasing the amount of light coming from your bicycle lamp was a simple matter of adjusting the opening of the water-valve on the reaction-canister. More water meant a faster reaction and a greater amount of gas, which made the flame burn brighter. Less water meant a slower reaction, which reduced the overall supply of gas to the headlamp.
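The underlying chemistry is a single, well-known reaction: calcium-carbide and water yield acetylene gas, leaving slaked lime behind in the canister.

```latex
\mathrm{CaC_2} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{C_2H_2}\uparrow + \mathrm{Ca(OH)_2}
```

More water dripping in simply means more carbide reacting per minute, hence more gas and a brighter flame.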

At the dawn of the 20th century, bicycles could also be fitted with dry-cell battery-powered headlamps, and alternating-current dynamo-systems. A dynamo really works very simply: You clip the headlamp to the front of the bicycle, and mount the dynamo and its lead near a wheel, usually on the mudguard, or on the frame if there isn’t a guard. Engaging the dynamo presses a small roller against one of your bicycle wheels. As the bike wheel spins, it rotates the dynamo’s generator, which produces the electricity necessary to power the lamp.
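The trick is that the dynamo’s roller is tiny compared to the wheel it presses against, so it spins many times faster. A rough sketch of the arithmetic (the wheel and roller dimensions here are illustrative assumptions, not measurements of any particular lamp-set):

```python
import math

def dynamo_rpm(speed_kmh, wheel_diameter_m, roller_diameter_m):
    """How fast a friction-driven dynamo roller spins. The roller is
    pressed against the tyre, so their surface speeds match, and the
    small roller turns many times for each turn of the big wheel."""
    wheel_rpm = (speed_kmh * 1000 / 60) / (math.pi * wheel_diameter_m)
    return wheel_rpm * (wheel_diameter_m / roller_diameter_m)

# Illustrative figures: a 0.711 m wheel and a 30 mm roller at 16 km/h.
print(round(dynamo_rpm(16, 0.711, 0.030)))  # 2829 -- nearly 3,000 rpm
```

That high spin-rate is what lets such a small generator produce useful power from an easy riding pace.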

The Bicycle Today

Whether it be a racing-machine, a means of commuting, an A-to-B mode of transport, a delivery-wagon, a cargo-bicycle or a method of exercise, the humble 1885 safety bicycle remains essentially unchanged since its entrance onto the transport stage back in the closing decades of the Victorian era. The bicycle remains popular because of its simplicity, ease of use, and its seemingly endless practical advantages over various other forms of transport.

The Bicycle World Record


‘Flying Pigeon’ bicycle manufactured in China

Based in Tianjin, in northern China, the Flying Pigeon is the most popular make of bicycle in the WORLD. In fact, it’s the most popular VEHICLE in the world. That includes motor-cars. The Flying Pigeon company was established in Tianjin in 1936. The Flying Pigeon model, after which the company was renamed, came out in 1950. The communist government in China demanded that the company produce a strong, practical, easy-to-use, and aesthetically pleasing bicycle. It had to ride well, and look good. And it’s been doing that for the past sixty-odd years. Cars were expensive in China, and bicycles were far cheaper and more practical for the average Chinese worker. So much so that the Flying Pigeon was seen as a sign of prosperity in China.

Echoing Herbert Hoover’s famous promise of ‘a chicken in every pot’, Chinese leader Deng Xiaoping said that prosperity in China meant that every household would own its own Flying Pigeon bicycle.

Most popular car in the world: Toyota Corolla
Units made: 35,000,000+

Most popular bike in the world: Flying Pigeon
Units made: 500,000,000+

I think we have a winner.

More Information?

I found the documentary “Thoroughly Modern: ‘Bicycles‘”, to be very helpful. I wonder why…At any rate, it’s fascinating watching.


Singer Sewing Machine – Bed-Extension Table

 

It’s taken months and years, but my grandmother’s vintage Singer 99k sewing-machine is finally complete! It has reached this level of completion thanks to the procurement of the last, and most hard-to-find Singer sewing-machine accessory…the bed-extension table. The extension-table may be seen here, hooked onto the end of the needle-bar side of the sewing-machine:

It’s the thing with the three spare vintage lightbulbs on top. The lightbulbs are spares for the one which goes into the light-socket at the back of the sewing-machine. They came as part of the package.

The extension-table came as standard with some models of vintage Singer sewing machines, such as the Singer Model 99 and its variants. However, not all of Singer’s sewing-machines were sold with this very handy feature included, which I think is a pity. The table measures roughly eight inches by eight inches, and the steel hook at the end simply slots into the lock-plate of the machine-bed. It extends the sewing-machine bed. That’s why it’s called a bed-extension table. Duh!

Sadly, these handy little extension-tables are not easy to find these days, and I had almost given up hope of ever getting one. I had even considered fabricating a homemade one! But fortunately, I found this, instead.

Their handiness lies in the fact that they give you a larger work-area when sewing, to stop your pieces of fabric from flopping off the end of the sewing-machine (and possibly pulling out of alignment). They also give you somewhere to rest your left hand and arm as you feed the fabric through the machine.

This is what the extension-table looks like, when it’s housed inside the case:

You can see it in this picture from a 1930s Singer 99k user-manual. It’s at the bottom of the picture, labeled ‘D’.

It’s rather amazing how much those innovative Singer chaps could cram into such a restricted space as the lid of a sewing-machine! This is what the same arrangement looks like in real life; again, using my grandmother’s 99k as the example:

In all the same positions, you can see the green SINGER accessories box (on the left), the ‘?’-shaped knee-lever at the back, the oval-based green SINGER oil-can on the right, and at the bottom, the extension-table. Amazingly, even with all this stuff in-place, you can still put the lid comfortably over the top of the sewing-machine and lock it down tight!

Bed-extension tables. If you have a vintage Singer sewing machine and you don’t have one of these…start looking for one. They’re getting harder and harder to find, so don’t waste time!

The Bombing of Darwin – Australia’s First Taste of War

 

The United States, the Dominion of Canada, New Zealand, and the Commonwealth of Australia are usually considered to have been virtually untouched by the ravages of the Second World War, even though this was not entirely true. About all that most people know about the bombing of Darwin is what’s featured in the film “Australia“, starring Hugh Jackman.

The United States naval-base at Pearl Harbor in Hawaii was hit hard in 1941 by a surprise Japanese air-raid which killed over two thousand American servicemen and destroyed dozens of planes and ships. But while the attack on Pearl Harbor has gone down in history as one of the most famous surprise-attacks of all time, most people have completely forgotten about another, similar, and even more devastating attack, which took place in northern Australia in the early months of 1942.

This posting will look at the famous Darwin air-raids, the two Japanese airborne attacks on the town of Darwin in Australia’s Northern Territory during the Second World War, and the effects that these raids had on the city and its inhabitants, and the rest of Australia.

Darwin, 1942

Darwin, named after the famous naturalist Charles Darwin, is the capital city of the Northern Territory of Australia. It was founded in 1869, and was originally named “Palmerston”. It gained the name “Darwin” in 1911. Darwin was small fry among Australia’s bigger and more prominent cities. Population-centers such as Melbourne and Sydney were famous around the world as major ports and trading-centers. Darwin, by contrast, was a sleepy backwater town that most people had never even heard of!

In the 1940s, Darwin was little more than an isolated country town at the top end of Australia. Its population in 1940 was a minuscule 5,800 people. By comparison, Melbourne at the same time had a population of over a million. This, when the population of Australia numbered some 6,900,000 people in 1939.

Darwin and the Second World War

Darwin in 1939 was an isolated country town, at the top of the nation, but at the bottom of the population-ladder. War seemed far away, and any notion that Australia might be threatened by enemy action was laughable. Germany was on the other side of the world! Who cared what happened? If anything did happen, it wasn’t going to happen in Australia, anyway! Apart from the blackout, rationing and military service, life went on more or less as it had always done.

It’s widely believed that Australia was largely untouched by the War, which is more or less true. Air-raid sirens never wailed across the city center of Melbourne, and Sydney was never rocked by Japanese bomb-blasts, but the threat, real or not, hung in the air.

In the early years of the war, the idea that Australia might be threatened was passed off as sensational and unfounded. The main aggressor, Germany, was on the other side of the world. And Japan was more interested in China than Alice Springs. But in 1941, everything changed.

With the attack against Pearl Harbor, Australia realised that its safety was threatened…probably. The Japanese were never going to reach this far south! They’d be stopped at Singapore, and blasted into the sea! End of story. Roll over and go back to sleep.

Posters like this one, from 1942, were believed to exaggerate the Japanese threat to Australia. However, they were probably closer to the truth than most people knew, or were willing to admit.

However, the swiftness of the Japanese advances struck terror into the hearts of Darwinians. Since 1937, Japan had taken Peking and Nanking. It had bombed Hawaii and invaded Shanghai, and in less than a month, it had invaded and captured British Hong Kong. It had invaded American possessions in the South Pacific, and was making sweeping advances down the Malay Peninsula.

In February, 1942, the island fortress of Singapore, the “Gibraltar of the East”, Australia’s first, last, and only line of defense against Japanese aggression, collapsed and surrendered after a battle of just one week!

Suddenly, Darwin felt very exposed.

The Threat Against Darwin

To protect against Japanese aggression, Darwin was to be Australia’s first mainland line of defense. To this end, it had been equipped with anti-aircraft guns and an airbase with fighter-planes of the Royal Australian Air Force, and there was even a small naval-base run by the Royal Australian Navy. These two services were to be the main fighting forces which would meet the Japanese threat if it ever came south to Australia.

With the attack on Pearl Harbor, and the Japanese advances through Southeast Asia being swift and brutal, Australia began to feel increasingly threatened. In the days and weeks after the Japanese December-1941 offensive in the South Pacific, the vast majority of Darwin’s civilian population was evacuated, and the town’s already small population shrank from 5,800 in 1939, to just 2,000 people in 1942. Most of these 2,000 were essential civilians, government and military officials, and servicemen. The majority of the women and children had been evacuated from town by railway, or had boarded specially-chartered evacuation-ships, which would steam them south to Brisbane, Sydney or Melbourne, well out of harm’s way.

Darwin’s location at the top of Australia, its harbour, and its proximity to Japanese-held territory made it a natural target for the Japanese. But as with many defense-plans in the South Pacific at this time, Darwin was not prepared for any kind of substantial and sustained attack.

British colonial bastions such as Hong Kong and Singapore had been overrun in days and weeks. The very might of the United States Navy had been challenged! What chance did a tiny, sparrow-fart town in the middle of nowhere have, against such a superior enemy?

Why the Japanese Attacked Darwin

If Darwin was such a tiny, insignificant town, with barely any armed forces or defenses to speak of, why did the Japanese see it as such a threat and target?

As with any real-estate…location, location, location.

Darwin’s location and its large harbour made it a natural base for the Allies. Any British, American or Australian forces in the area would surely gather there. They would use the harbour for their warships, and the flat ground around the town for their flak-guns and airforce bases. At the time, the Japanese wanted to destroy any and ALL competition in the area, no matter how large or small. Their next target, after China, Hong Kong, Singapore, Malaya and the islands of the South Pacific, was the Dutch East Indies (what is today Indonesia).

To take Indonesia without opposition, the Japanese had to attack Darwin, to knock out any chance of the Allies mounting some sort of counterattack. And this is why Darwin became a target.

Darwin’s Defenses

Despite the threat against it, Darwin’s defenses were ridiculously small. Darwin Harbour held 45 ships, and the surrounding airfields had only 30 airplanes. Of the 45 vessels in the harbour, 21 were merchant-ships. Among the other 24 were five destroyers (one of them the U.S.S. Peary), as well as the U.S.S. Langley, a vessel launched back in 1912, which had been converted into the U.S. Navy’s first aircraft-carrier in the 1920s!

To protect against the threat of a Japanese air-attack, Darwin was more than capably defended by 18 anti-aircraft cannons, and a smattering of WWI-era Lewis-style machine-guns.

But they had hardly any ammunition between them, and hadn’t for weeks. As a result, the guns had never been fired, and the crews who were to operate them had never been trained!

On top of everything else, Darwin had almost no air-raid precautions. It had only one operational air-raid siren, barely any shelters, no radar, and barely any lookout posts.

At any rate, even if everything had been working, the town still wouldn’t have been able to mount any sort of serious defense. It was estimated that to defend Darwin effectively would require at least three dozen anti-aircraft guns, and at least 250 aircraft.

Instead, it had barely twenty guns, and only thirty aircraft.

In the event of an enemy air-attack on Darwin, civilian aircraft-spotters on nearby Bathurst Island (namely the local priest, Father John McGrath), were to sight the aircraft, identify them, count their numbers, and then relay this information via radio, to the authorities in Darwin. Radio-operators in Darwin would then sound Red Danger over the air-raid sirens (the famous, classic high-low wail of an air-raid siren), signalling for the population to seek cover.

The warning would only give people a few minutes to duck and cover, but it gave them a fighting chance to seek shelter before the Japanese reached Darwin. At the sound of the sirens, the flak-cannons would be manned and loaded, and the aircraft on the ground would be readied for take-off, to engage the incoming enemy.

That was how it was supposed to happen.

The Darwin Raid: 19th February, 1942

Less than a week after the fall of Singapore on the 15th of February, Australia was about to find out how vulnerable it really was. With a flimsy northern defense, nearly all its soldiers fighting in Africa or the Middle East, or captured in the South Pacific, hardly any air-power, and hardly two ships to rub together, Australia was ripe for the taking.

On the 19th of February, Japanese aircraft carriers sailed south towards Australia. They stationed themselves in the Timor Sea, off the coast, and sent in over 200 fighter and bomber aircraft. 242, to be precise.

242 aircraft of the Imperial Japanese Navy, against just 30 aircraft belonging to the Royal Australian Air Force.

As the planes flew south towards Australia, they passed over Bathurst Island. Father McGrath, the mission priest on the island, spotted the aircraft, and radioed his warning to land-stations near Darwin that a large concentration of aircraft was headed their way. Another aircraft-spotter on Melville Island also spotted the aircraft, and he, too, sent a radio-warning to Darwin.

However, much like at Pearl Harbor, the authorities believed the aircraft to be returning American fighter-planes, which had been out on practice-runs and recon-missions. So, no heed was paid to these radio-warnings. The sirens remained silent and no guns were manned in preparation. Darwin was a sitting duck.

The First Raid

The first raid against Darwin began at about 10:00am that morning. Even though the town had been warned well in advance by its aircraft-spotters, no action was taken between about 9:15, when the first radio-warning was sent out, and 10:00am, a period of forty-five minutes. Then, the bombs began to fall.

With no effective warning at all, the remaining civilian population of Darwin was bombed relentlessly by the Japanese. After the first explosions, the town’s single operational air-raid siren went off, sounding the alarm, but by then it was already too late.

The ships in the harbour were bombed and strafed; among the casualties was the U.S.S. Peary, which was hit and sunk, just one of eight ships destroyed. In the town, bombs rained down, destroying vital structures such as the docks (where 21 longshoremen were killed when the quays received a direct hit), and Government House. The Darwin Post Office was obliterated by a direct hit. The postmaster and his family, sheltering in the nearby air-raid shelter, were killed instantly.


The town post office after the raid

The anti-aircraft defenses of Darwin were woefully unprepared for the raid. For nearly all the soldiers there, this was the first time they’d fired any sort of gun at all! Most of the ground units had no rifles. And if they had rifles, they had no ammunition. And if they had ammunition, they had no training, so most of the shots went wild. Nevertheless, of the 188 aircraft that struck Darwin in the first raid of the day, seven were shot down by Allied flak-guns. A paltry number. The 188 planes in the first wave devastated much of the town, destroyed the two airbases nearby, and wreaked havoc on the harbour and the ships therein.

The Second Raid

At 10:40am, the first raid ended. But another came a few minutes before midday. This raid, consisting of the remaining 54 of the full force of 242 Japanese airplanes, attacked the airbases and the town yet again, in a smaller raid lasting just 20 minutes.

At the end of the second raid, the All Clear sounded and the damage was examined. 23 of the 30 airplanes had been destroyed; in all, 10 ships had been sunk, and another 25 damaged. 320 people had been killed, from drowning, burns or bombing, and another 400 people had been injured.

The Aftermath

The air-raids on Darwin were devastating on many levels. Although the majority of the population had been evacuated before the raids, poor preparations and management meant that even with a reduced population, the town suffered high casualty-rates and significant damage. Electrical power was cut, water and gas-mains were destroyed, and telecommunications were disrupted.

The town post-office was blown to oblivion, along with the town postmaster and his family.

What followed the raids was a complete breakdown of civil and military leadership. Soldiers looted empty houses, and evacuation-marches were bungled, with the result that soldiers and airmen were scattered all over the Northern Territory with no definite rallying-point.

The damage and disaster was on such a huge scale that for days, weeks, months, years and even decades after the bombings, the full extent of the catastrophe was hidden from the public.

A Dog Named Gunner

Out of the raids on Darwin came one remarkable story, about an Australian Kelpie puppy called ‘Gunner’. Gunner’s claim to fame was being the canine radar for Allied military forces in the Darwin area during the Second World War.

Gunner possessed remarkably sharp hearing, and was able to detect the sound of incoming aircraft from miles away. Furthermore, he was able to differentiate between friendly Australian and American airplanes, and enemy airplanes flown by the Japanese, based on the sounds of their engines.

Gunner was injured during the raid on Darwin and was taken to the nearby hospital for treatment. The doctor on duty insisted that he couldn’t treat the dog without knowing its name, rank and serial-number! Gunner’s owner, Percy Westcott, shot back that the dog’s name and rank were ‘Gunner’, and that he held serial No. 0000 in the Royal Australian Air Force!

Gunner’s remarkable ability to accurately alert ground-crews to incoming enemy attacks was soon noticed. His success-rate at picking up on enemy aircraft was so high that Westcott’s commanding officer gave Westcott permission to operate a portable air-raid siren whenever Gunner started whining and whimpering, to alert his comrades to an incoming Japanese raid.

Gunner’s extremely sharp hearing meant that he was better than radar; on more than one occasion, he accurately picked up on an incoming raid up to twenty minutes in advance, far outside the capabilities of radar-equipment at the time!

During the later stages of the war, Gunner’s owner, Westcott, was posted to Melbourne, and had to leave Gunner behind in Darwin. What happened to the dog remains unknown.

The Effect of the Raids

Australia had previously considered itself untouchable by the hand of war. The war was happening in Europe, anyway! And in Asia, the might of the British Empire would protect Australia from harm.

After these first raids, Australia realised its own vulnerability, and made moves towards securing its own defence. One of the most significant moves was to recall thousands of Australian troops (then fighting in the Middle East and Africa) back to their homeland, a decision made by Prime Minister John Curtin.

Curtin’s decision was a popular one…but only with Australians. He encountered fierce resistance from both the American and British governments, especially from Winston Churchill, who wanted to send the Australian troops to Burma to fight the Japanese. However, Curtin was so worried about Australia’s position in the war that he insisted on overruling Churchill and having the troops steamed home as soon as possible, which did eventually happen, after many lengthy exchanges of letters and telegrams.

Further Raids on Darwin

Darwin, along with other cities and towns in northern Australia, was bombed repeatedly throughout the war during 1942-43. By the time the war ended, the Australian mainland had been hit by no fewer than 62 separate air-raids in the space of two years.

More Information?

Looking for more information? I strongly suggest watching the documentary: “The Bombing of Darwin: An Awkward Truth”, about the air-raids, and the cover-up which followed.

Anzacday.org website-entry.

Tales of Robin Hood – The History Around an Outlaw

 

Whether Robin Hood, the legendary outlaw of English folklore, ever really existed…is entirely up in the air. At best, Robin Hood can be said to be an amalgamation of a variety of actual outlaws from the period; at worst, a romanticised figure of the age. But while Robin Hood may not have been a real person, his world and everything about it still fascinates us to this day. As recently as 2010, we watched Russell Crowe in “Robin Hood”. Centuries after his supposed lifetime, we remain enthralled with this fantastical figure who may never even have existed.

Robin Hood was an outlaw who lived in Sherwood Forest, in the English midlands county of Nottinghamshire. So famous is his legend that the flag of Nottinghamshire even has a picture of Hood on it! Hood was known as an archer, a swordsman, and as a crusader of sorts, who stole from the rich to give to the poor. Here, we’ll look at the various parts of his legend, and just how romantic and brave they really were.

Robin Hood: Outlaw at Large

Before he was anything else, an archer, a horseman, an all-round good-guy, Robin Hood is most famously known as an outlaw, living in Sherwood Forest in Nottinghamshire. Gee, it must be nice, living in the midst of nature with your band of merry men and Maid Marian, holding up stagecoaches, and giving money and food to the needy.

…Not really.

In Medieval times, being an outlaw was a real problem. To become an outlaw, you had to have committed a crime, of course. And if the prosecuting party (the king, the local sheriff or landlord) did not want you executed, he could simply declare you to be an outlaw. Or, in the Latin legalese: Caput Lupinum.

To be an outlaw meant that the law no longer applied to you. You were literally ‘outside’ the law. You had no obligation to follow it. However, this also meant that the law had no obligation, thereafter, to protect you! Enter ‘Caput Lupinum’.

It literally means ‘Head of the Wolf’, or ‘Wolf’s Head’. To be branded a wolf’s head outlaw meant that not only were you outside the law and its protection, but also that you would forever be hunted…like a wolf. And, like a wolf, anyone who killed you, no matter how it was done, no matter where it was done, automatically received the king’s royal pardon. There was no price or penalty to be paid by anyone for the death of a wolf. Or an outlaw. They were considered scum, and anyone who successfully killed an outlaw was seen as doing the king (and his subjects) a favour.

Robin Hood: The Archer

In the days of Robin Hood, the main long-range weapon was the bow and arrow. Known since antiquity, bows and arrows were simple but lethal weapons, able to bring death to their targets from dozens of yards away. Robin Hood was supposed to be an excellent archer, able to hit targets at impossible distances with remarkable accuracy.

But what was the reality of medieval archery?

To be an archer took great skill. Skill and experience gained over years of practice. It took skill to aim and shoot reliably. But it also took great strength. No weakling would be able to simply pick up a bow, load an arrow and fire it. Considerable arm-strength was required to draw the bowstring back to produce the energy required to fire an arrow over dozens of yards, and hit with enough force to kill, or at least injure, your enemy or quarry.

Before the age of firearms, archers were essential in any army, able to stand well back from the field of battle and rain down volley after volley of lethal fire from above, from the relative safety of a hilltop, or from behind a castle wall. Since archers were so important, in England, the practice of archery was made compulsory by law. Anyone desirous of becoming an archer had to train from the age of seven (coincidentally, the same age at which a boy training to be a knight had to start!), to build up the speed, strength and accuracy required to reliably fire a bow and arrow. In villages and towns, archery-practice was mandatory: at least two hours, at least once a week. Usually, this was two hours on Sundays, since that was the one time that the people of the community gathered together, for church. After religious services, the men would go out for target-practice.

Although bows came in several shapes and sizes, for a full-grown male, the weapon of choice was usually the military longbow. Made from the wood of the yew tree, the longbow was not so named for nothing. Standing five or six feet tall, a longbow was generally designed to fire an arrow nearly three feet in length!

The first book written in English on the subject of the longbow, and on archery in general, was produced in the mid-1540s by Roger Ascham (1515-1568). An educated man of letters, Ascham was a private tutor and a university lecturer. He also happened to be Princess Elizabeth’s Latin tutor; so when he wrote his book (titled “Toxophilus”), he dedicated it to King Henry VIII, Elizabeth’s father.

The Sheriff of Nottingham

We don’t generally associate sheriffs with England, do we? They’re something you find in the United States, along with their cohorts, sheriff’s deputies. But the sheriff actually originated in England.

Originally, areas of land in England were governed by Ealdormen. Literally ‘Elder Man’ or ‘Older man’, meaning a man of age, and therefore, experience. These men were royal officials and were in charge of keeping law and order within their allotments of land. The position survives today in the word ‘alderman’.

Eventually, the alderman died out in that capacity, and his duties were taken over by another man: The Sheriff.

The original title was “Shire Reeve”. A shire is a stretch of land, synonymous with the word ‘County’. A shire reeve was the administrative official responsible for the preservation of law and order within that shire. Eventually, the two words were melded into one: “Sheriff”.

Much like a modern sheriff, the sheriff of Robin Hood’s day was responsible for the upholding of the law, such as the capture of outlaws like Robin Hood.

Rule Britannia: A History of the British Empire

 

From the close of the 1500s, until the end of the Second World War, the British Empire grew, spread and eventually dominated the world, and for two hundred years, from the 1700s until the mid-20th century, was the largest empire in the world.

By the 1920s, the British Empire covered 22.6% of the globe, some 13.1 million square miles (33.7 million square kilometers), and its subjects and citizens numbered some 458 million people. At its height, 20% of the people on earth were British, or British subjects, living in one of its dozens of colonies, dependencies or protectorates.

The British Empire was, is, and will forever be (until there’s another one), the biggest and arguably, the most famous, of all the Empires that the world has ever seen.

But how was it that the British Empire grew so large? Why was it so big? What was the purpose? What was to be gained from it? Why and how did it collapse? And what became of it? Join me on a journey across the centuries and oceans, to find out what caused the Empire to take root, grow, prosper, dwindle and decline. As this posting progresses, I’ll show the changing flags, and explain the developments behind each one.

The Need for Conquest

The British Empire was born in the late 1500s. During the reign of Henry VIII, England was a struggling country. Ever since the king’s break from Rome, and the foundation of the Church of England, the Kingdom of England had been on its own. Most countries saw it as radical and nonconformist: it had dared to break away from Catholicism, the main religion of Europe at the time. England was seen as weak, and other countries, such as Spain, were eager to invade it, and either claim it for themselves, or seat a Catholic monarch on the English throne.

It was to protect against threats like these, that Henry VIII improved on what would become England’s most famous fighting force, and the tool which would build an empire:

The British Royal Navy.

The Royal Navy had existed ever since the 1400s, mostly as a hodge-podge of ships and boats. There was no REAL navy to speak of. Ships were simply requisitioned as needed during times of war, and then returned to their owners when the war was over. Even in Elizabethan times, English fishing-boats doubled as the navy.

It was Henry VIII, and his daughter, Elizabeth I, who began to build up the Navy as a serious fighting force, to protect against Spanish threats to their kingdom.

But having a navy was not quite enough. What if the Spanish, or the French, tried to establish colonies elsewhere, where they could grow in strength and strike the British? There is talk of a new world, somewhere far to the West across the seas. If the British could grab a slice of the action, then they would surely be more secure?

It was originally for reasons of security, and eventually for trade and commerce, that the idea of a British Empire was conceived. And it was for these reasons that the British Empire grew, flourished, and lasted for as long as it did.


The English flag. St. George’s Cross (red) on a white background

British America

In 1707, Great Britain emerges. No longer is it an ‘English Empire’, but a British Empire. Great Britain is formed by the Act of Union between England (which already included the Principality of Wales) and Scotland.

By this time, England and Scotland had already been ruled by the same family (the Stuarts) for a hundred years, ever since Elizabeth I died in 1603, and her cousin, King James VI of Scotland, inherited her kingdom as her closest surviving relative.

The flag of the Kingdom of Great Britain: the red St. George’s Cross of England, on its white background, over the white St. Andrew’s Cross on the blue background of Scotland. This would remain the flag for nearly 100 years, until the addition of Ireland.

It seemed only to make sense, therefore, that since England and Scotland were ruled by the same family, they may as well be the same kingdom: the Kingdom of Great Britain.

By this time, British holdings had grown to include Newfoundland, and more and more territory on the North American mainland. At the time, America was being carved up by the great European powers. France, Britain, Holland and Spain were all fighting for a slice of this new, delicious pie called the New World.

And they were, quite literally, fighting over it. Ever heard of a series of conflicts called the French and Indian Wars? From 1689 until 1763, the colonial powers fought for control over greater parcels of land on the American continent. America had valuable commodities such as furs, timber and farmland, which the European powers were eager to get their hands on.

By the end of the 1700s, Britain’s colonial ambitions and fortunes had changed greatly. It retained Newfoundland and had gained Canada from France, but had lost its possessions in America to these new “United States” guys. Part of the deal with France over Canada was that the French settlers be allowed to stay. As a result, Canada in the 1790s was divided into Upper and Lower Canada (Ontario and Quebec, today). Even in the 21st century, we have French-speaking Canadians.

British colonies in the Americas weren’t just limited to the top end, either. Since the mid-1600s, the British had also controlled Jamaica (a colony taken, not from the French, but from the Spanish). British rule of Jamaica lasted from 1655 until Jamaican independence in 1962!

Just as its former American colonies had provided Britain with fur pelts and cotton, Jamaica was also colonised so that it could provide the growing empire with a valuable commodity – in this case, sugar. In the 1500s, sugar was incredibly rare, and the few countries which grew sugarcane were far from England. Extracting and transporting this sweet, white powder was labour-intensive and dangerous. But now, England had its own sugar-factory, in the middle of the Caribbean.

British India

It was during the 1700s that the British got their hands on one of the most famous colonies in their growing empire. They might have lost America and gained Canada, but in the 1750s, they gained something much more interesting, thanks to an entity called the East India Company, a corporation which effectively colonised lands on behalf of the British.

In 1800, another Act of Union formed the United Kingdom of Great Britain (England, Scotland and Wales) and Ireland. The flag now depicts the diagonal red cross of St. Patrick, over that of St. Andrew, but with both below the cross of St. George. This has remained the British flag for over 200 years, up to the present day

Formed as a trading company to handle imports and exports out of countries in the Far East, the East India Company (founded in 1600) got its hands on the Subcontinent of India, and for a hundred years, between 1757 and 1858, more or less controlled it for the British Government.

Indians were not happy about being ruled by a company. True, the Company had brought such things as trade, wealth, transport, communications and education to the Indian Subcontinent, but its presence was not welcomed.

The beginning of the end of Company Rule in India came in 1857, a hundred years after the Company had established itself there. The Indian Rebellion of 1857 broke out when the Indian soldiers who worked for the Company rebelled over its religious insensitivity. Offended by the liberties which the Company took, and the insults it dished out, Indian soldiers in Company pay revolted against their masters.

The rebellion spread around India, and fighting was fierce on both sides. It eventually ended in 1859, with an Indian defeat, but at least it also ended Company Rule in India.

However, the British were not willing to let go of India. It had too many cool things. Like spices and ivory, exotic foods and fine cloth. Oh, and a little drug called opium.

In the end, the British formed British India (also called the British Raj), in the late 1850s.

To appease the local Indian population and prevent another uprising, a system of suzerainty was established. Never heard of it? Neither had I.

Suzerainty is a system whereby a major controlling power (in this case, Britain), rules a country (India), and handles its foreign policy as well as other controlling interests. In return, the controlling power allows native peoples (in this case, the Indians) to have their own, self-governing states within their own country.

When applied to India, this allowed for 175 “Princely States”. The princely states were ruled by Indian princes, or minor monarchs (the maharajahs), while the other states within India were ruled by the British. As such, India was thereafter divided into “British India”, and the “Princely States”.

British India was ruled by the Viceroy of India, and its legal system was determined by the big-wigs in London. The Princely States were allowed to have their own Indian rulers, and were allowed to govern themselves according to their own laws. Not entirely ideal, but much better than being ruled over by a trading company!

The Indians largely accepted this way of life. It was, in a way, similar to their lives under the Mughal Empire before: a way of life with which they were familiar and comfortable. In return for various concessions, changes and improvements, the Indians would allow the British control of their land.

The number of princely states rose and fell over the years, but this system remained in place until Indian independence was granted by Britain in the years after the Second World War.

The Viceroy of India was the head British representative in India. He ruled over British India, and was the person to whom Indian princes went if they had concerns about British involvement within India.

Pacific Britain

Entering the 1800s, Britain became more and more interested in the Far East. Britain realised that establishing trading-posts and military bases in Asia could bring it the riches of the Orient and a greater say in world affairs. To this end, it colonised Malaya, Singapore, Australia, New Zealand, Hong Kong, Fiji, Penang, Kowloon, Malacca, Ceylon (modern-day Sri Lanka) and Burma. It even tried to colonise mainland China, but only succeeded in grabbing a few small concessions from the Qing Government, such as in Shanghai.

The Pacific portion of the British Empire was involved heavily in trade and commerce, and a great many port cities sprang up in the area. Singapore, Hong Kong, Rangoon, Calcutta, Bombay, Melbourne and Sydney all became major trading-stops for ocean-liners, cargo-ships and tramp-steamers sailing the world. From these exotic locales, Britain could get gold, wool, rubber, tin, oil, tea and other essential, exotic and rare materials.

The British were not alone in the Pacific, so the need for military strength was important. The Dutch, the Germans and the French were also present, in the Indies, New Guinea, and Indochina, respectively.

Britain and the Scramble for Africa

The Industrial Revolution brought all kinds of new technology to the world. Railways, steamships, mills, factories, mass-production, telecommunications and improved medical care, to name but a few. And Britain, like other colonial powers, was eager to see that its colonial holdings got the best of these new technologies that they could.

However, these improvements also spurred on the desire for greater control of the world, and the second half of the 1800s saw the “Scramble for Africa”.

The ‘Scramble’ or ‘Race’ for Africa was a series of conquests by the colonial powers, to snatch up as much of the African continent as they could. The Belgians, Portuguese, Germans, French and British all duked it out to carve up hunks of Africa.

The French got most of northwest Africa, including the famous city of Casablanca, in Morocco. They also controlled Algeria. The British got their hands on Egypt, and a collection of holdings (including the former Boer republics, won after the Boer Wars) which they called the Union of South Africa. The British also got their hands on Nigeria, British East Africa (modern Kenya) and the Sudan. Egypt was never officially a British colony, but remained a British protectorate (a country to which Britain swore to provide military assistance, or ‘protection’). It was a crafty way of adding Egypt to the British Empire without actually colonising it.

British interest in Egypt and southern Africa was related less to what Egypt could provide the empire, and more to what it would allow the empire to do. Egypt was the location of the Suez Canal, an important shipping-channel between Europe and the Far East. Control of Egypt was seen as essential by the British, for quick access to their colonies in the Far East, such as India, Singapore and Australia.

A map of the world in 1897.
The British Empire comprised every country marked in pink

Justification for Empire

As the British Empire grew during the Victorian era and the early 20th century, through wars of conquest and contests with other European powers, some sort of justification seemed to be wanting. Why should Britain control so much of the world? What gave it this right? How did it explain it to the other European powers, or to the rising power of the United States? How did it justify colonisation to the peoples of the countries which it colonised?

Leave it to a writer to find the right choice of words.

Rudyard Kipling, author of “The Jungle Book“, was the man who came up with the phrase, “The White Man’s Burden“, in a poem he wrote in 1899.

Put simply, the burden of the white man; the white, European man, is to bring civilisation, culture, refinement and proper breeding and upbringing to the wild and uncouth savages of the world. Such as those savages likely to be found in Africa, the Middle East and the isolated isles of the South Pacific.

Britain, being naturally the most civilised, cultured, refined and most well-bred country on earth, producing only the most civilised, cultured, refined and most well-bred of citizens, was of course, the best country on earth, with the best people on earth, to introduce these wonderful virtues to the savages of the world. And to bring them up to date with technology, science, architecture, engineering, and to imbue them with good Christian virtues. Britain, after all, had the best schools and universities: Eton, Harrow, Oxford, Cambridge, St. Peter’s…the list goes on. They were naturally God’s choice for teaching refinement, culture and all that went with it, to the rest of the world.

This was one of the main ways in which Britain justified its empire. By colonising other nations, it was making them better, more modern, and more cultured, in line with the West. It brought them out of the Dark Ages and into the light of modernity.

The British colonised certain countries (such as Australia) under the justification of the Ancient Roman law of “Terra Nullius“. Translated, it means “No Man’s Land”, or “Nobody’s Land” (“Terra” = Land, as in ‘terra-firma’; “Nullius” = of no-one, as in ‘null and void’).

By the British definition of Terra Nullius, a native people only had a right to sovereignty over its land if it changed the landscape in some manner, such as through construction, industry, craft, agriculture, or manufacturing. It had to show some degree of outward intelligence beyond hunter-gatherer society and starting a fire with two sticks.

They did not recognise these traits in the local Aboriginal peoples, and saw no evidence of such activities. Therefore, they claimed that the land was untouched, and the people had minimal intelligence. Otherwise, they would’ve done something with their land! And since they hadn’t, they had forfeited their claim to sovereignty over their land. Under the British definition of Terra Nullius, this meant that the land was theirs for the taking. Up for grabs! Up for sale! And they jumped on it like a kangaroo stomping on a rabbit.

The Peak of Empire

British control of the world, and the fullest extent of its imperial holdings came during the period after the First World War. One of the perks of defeating Germany was that Britain got to snap up a whole heap of German ex-colonies. A lot of them were in Africa, but there were also some in the Far East, most notably, German New Guinea, thereafter simply named ‘New Guinea’ (today, ‘Papua New Guinea’).

It was during the interwar period of the early 20th century that the British Empire was at its peak. By the early 1920s, Britain had added such notables as the Mandate of Palestine (modern Israel), and Weihai, in China, to its list of trophies (although its Chinese colony did not last very long).

The extent of the British Empire by 1937. Again, anything marked in pink is a colony, dominion, or protectorate of the British Empire

The Colony of Weihai

For a brief period (32 years), Great Britain counted a small portion of China as part of its empire. Britain already had Hong Kong, but in 1898, it added the northern city of Weihai to its Oriental possessions. Originally, it was a deal between the Russians and the British. So long as the Russians held onto Port Arthur (a city in Liaoning Province in northern China), the British could have control of Weihai.

In 1905, the Russians suffered a humiliating defeat to the Japanese, in the Russo-Japanese War. Part of the Russian defeat was the Japanese occupation of Port Arthur. Britain then made a deal with Japan that they could remain in Weihai so long as the Japanese held onto Port Arthur. To appease the Chinese, the British signed a lease with the Chinese for twenty-five years, agreeing to return Weihai to the Chinese Imperial Government when the lease expired, which in 1905, meant that Weihai would be returned in 1930.

The Glory of the British Empire

The early 20th century was the Golden Age of the British Empire. From the period after the Great War, to the onset of the Second World War, Britain was powerful, far-reaching and dominant. British culture, customs, legal systems, education, dress, and language were spread far around the world.

Children in school learnt about the Empire, and the role it played in making Britain great. People in countries like New Zealand and Australia saw themselves as being British, rather than being Australian or New Zealanders. After the First World War, monuments and memorials were erected to those who had died for the “Empire”, rather than for Australia, New Zealand, or Canada. Strong colonial and cultural ties held the empire together and drew soldiers to fight for Britain as their ‘mother country’, who had brought modernisation, culture and civility to their lands.

The Definitions of Empire

If you read any old documents about the British Empire, such as maps, letters, newspapers and so forth, you’ll notice that each country within the empire is generally called something different. Some are labeled ‘colonies’, others are ‘mandates’, some are ‘protectorates’, and a rare few are named as ‘dominions’. And yet, they were all considered part of the Empire. What is the difference between all these terms?

The Colonies

Also sometimes called a Crown Colony, a colony, such as the Colony of Hong Kong, was a country, or landmass, or part of a landmass, which was ruled by the British Government. The government’s representative in said colony was the local governor. He reported to the Colonial Office in London.

The Protectorates

A protectorate is just what it sounds like. And it could be a rather cushy arrangement, if you could get it. As the name implies, a country became a protectorate of the British Empire when it allowed the Empire to control certain parts of its government policies, such as foreign policy, and its policies concerning the country’s defence from foreign aggression. One example of this is the Protectorate of Egypt.

In return for allowing the British to control such things as foreign relations and trade, and in return for having British military protection against their enemies, a country’s ruler, or government, could continue running their country as they did, with certain things lifted off their shoulders. But with other things added on. For example, the British weren’t interested in Egypt for the free tours of the Valley of the Kings. They were interested in it because of the Suez Canal, the water-highway to their jewel in the Far East, known as India! In return for use and control of the Canal, the British allowed the Egyptians to run their own country as they had always done.

The Mandates

The most famous British mandate was the Mandate of Palestine (modern Israel).

In the 1920s, the newly-formed League of Nations (the direct predecessor to the U.N.) confiscated former German and Turkish colonies, and distributed them among the victors of the Great War, chiefly Britain and France. Basically, Britain and France got Turkish and German colonies as ‘prizes’ or ‘compensation’ for the war.

Legally, these mandates were under the control of the League of Nations. But the League of Nations was well…a League! A body. Not a country. And the League couldn’t control a mandate directly. So they passed control of these mandates to the victors of the Great War.

The Dominions

On a lot of old maps, you’ll see things like the Dominion of Canada, the Dominion of Australia, and the Dominion of New Zealand. What are Dominions?

Dominions were colonies of the Empire which had ‘grown up’, basically. They were seen as highly-developed, modern countries, well-capable of self-governance and self-preservation, without the aid of Mother England. They were like the responsible teenagers in an imperial family, to whom old John Bull had given the keys to the family car, on the condition that they didn’t trash it or crash it, and that they returned it in good working order.

The Dominions were therefore allowed to be more-or-less self-governing. After the Statute of Westminster in 1931, the Dominions became countries in their own right, no longer legal colonies. But they were still seen as being part of the Empire, and bound to give imperial service if war came. Indeed, when war did come, all the Dominions pledged troops to help defend the Empire.

There was talk of making a ‘Dominion of India’. India wanted independence, but Britain was not willing to let go of its little jewel. It saw making India a Dominion as a happy compromise between the two polar options of remaining a colony, or becoming totally independent.  However, a little incident called the Second World War interrupted these plans before they could be fully carried out.

War and the Empire

The British Empire was constantly at war. In one way, or another, with one country, or another, it was at war. The French and Indian Wars, the American War of Independence, the War of 1812, the French Revolutionary Wars, the Napoleonic Wars,  the Opium Wars, the Crimean War of the 1850s, the First and Second Afghan Wars. The Mahdist War of 1881 lasted nearly twenty years! Then you had the Boer War, the Great War, and the Second World War.

One reason why Britain managed to engage in so many wars, and survive, and in some cases, prosper from them, was due in large part to its empire. In very few of these wars did Britain ever fight alone. Even when it didn’t have allies fighting alongside it, Britain’s fighting force comprised not only home-born Britons, but also a large number of imperial troops: Indians, Australians, Canadians, New Zealanders, and Africans. They all signed up for war! When Australia became a nation in 1901, instead of a gaggle of colonies, Australian colonial soldiers, freshly returned from the Boer War in Africa, marched through the streets of Australian cities as part of the federation celebrations.

“The Sun Never Sets on the British Empire”

The cohesion of the British Empire began to crumble during the Second World War.

Extensive demilitarisation during the interwar years had greatly weakened Britain’s ability to wage war. Britons and their colonial counterparts became complacent in their faith in the might of the British Navy, which for two hundred years, had been the preeminent naval force in the world.

Defending the Empire became increasingly difficult as it grew in size. In the years after WWI, Britain believed that the “Singapore Strategy”, its imperial defence-plan based around Singapore, would protect its holdings in the Far East.

The strategy involved building up Singapore as a military stronghold for the army, the navy and the Royal Air Force. In the event of Japanese aggression, the Navy could intercept Japanese warships, and the air-force and army could protect Singapore from land or air-based invasions. The Navy would be able to protect Singapore, Hong Kong and Malaya from Japanese invasion, or would be able to drive out the Japanese, if they did invade.

Under ideal circumstances, such a plan would be wonderful. But in practice, it fell flat. The British Royal Navy simply did not have the seapower that it once had. It had neither the ships, sailors, airmen nor aircraft required to protect both England and Singapore. Great Britain was having enough trouble defending its own waters against German U-boats, let alone fighting Japanese battleships and aircraft carriers in the Pacific.

On top of that, Singapore simply wasn’t equipped to hold off a Japanese advance from any direction. Even though the British and colonial forces on Singapore vastly outnumbered those of the Japanese, the British lacked food, ammunition, firearms, artillery, aircraft, and naval firepower: resources stretched thinly enough already due to British disarmament during the 1920s and 30s.

The fall of Singapore after just one week of fighting the Japanese was a great shock to the Empire, especially to Australia and New Zealand, who had relied on Singapore to hold back the Japanese. In Australia, the fall of Singapore showed the government that Britain could not be trusted to protect its empire.

When Darwin was bombed, Australian prime minister John Curtin, defying orders from Churchill, ordered all Australian troops serving overseas (in the Middle East and Africa) to be returned home at once. For once, protection of the homeland was to take precedence over protection of the Empire: since the Empire wasn’t able to provide protection, Australia would have to provide its own, even if it came at the expense of keeping the Empire together.

Even Winston Churchill, an ardent Imperialist, realised the futility of protecting the entire empire, and realised that certain sections would be impossible to defend without seriously compromising the defence of Great Britain. British colonies in Hong Kong, Malaya, Singapore and in the Pacific islands gradually fell to Japanese invasion and occupation.

By the 1930s, the Empire was already beginning to fall apart. Independence movements in countries like Iraq (a British mandate from 1920), Palestine (a mandate since 1920) and India were forcing Britain to let go of its imperial ambitions.

For the most part, independence from Britain for these countries came relatively peacefully, and in the years after the Second World War, many of the British colonies gained independence.

Independence was desired for a number of reasons, ranging from the simple desire of a country’s people to rule themselves, to the lack of contact and cohesion felt with Great Britain, or in some cases, the realisation or belief that the British Empire could not protect them in times of war, as had been the case with Singapore and Hong Kong.

The British Commonwealth

Also called the Commonwealth of Nations, the British Commonwealth took shape in the decades surrounding the Second World War. The Commonwealth is not an empire, but rather a collection of countries tied together by cultural, historical and economic similarities. Almost all of them are former British colonies.

The Commonwealth recognises that no one country within this exclusive ‘club’ is below another, and that each should share equal status within the Commonwealth. It was formed when the British realised that some of their colonies (such as Australia, New Zealand and Canada) were becoming developed, cultured, civilised and powerful in their own right. So that these progressive countries did not feel like Britain’s underlings, the Commonwealth was formed. Now, the Dominions would not be above the Colonies, and the Colonies would not be below Britain; they would all be on the same level, and part of the same ‘club’: the British Commonwealth.

The Empire Today

The sun has long since set on the British Empire, as it has on nearly all great empires. But even after the end of the empire, it still makes appearances in the news and in films, documentaries and works of fiction, as a look back at an age that was.

During the 1982 Falklands War, a conflict that lasted barely three months, the famous New York magazine “Newsweek” printed this cover:

An imperial war without the Empire…

In four simple words, this title comments on the then-recent Star Wars film of the same name, the former British Empire, and on the fabled might of the Royal Navy, which had allowed the formation of that empire, so many hundreds of years ago.

Imperial Reading and Watching

The British Empire lasted for hundreds of years. It grew, it shrank, it grew again, until it came to dominate the world, spreading British customs, ideals, education, government, culture, food and language around the globe. This posting only covers the major elements of the history of the British Empire. But if you want to find out more, there’s plenty of information to be found in the following locations and publications. They’ll cover the empire in much greater detail.

The British Empire

“The Victorian School” on the British Empire.

Documentary: “The Fall of the British Empire”.

Documentary Series: “Empire” (presented by Jeremy Paxman).