A Story on Two Wheels: The History of the Bicycle

 

In the history of transport, few inventions have been as compact, innovative, liberating, practical and enjoyable as the bicycle. And yet, the bicycle as we know it today is only a little over 100 years old. What is the story behind this invention? Why was it created? And how did it reach the design which we know so well today? Let’s take a ride…

The World Before the Bicycle

Before bicycles came onto the scene with their dingling bells and rattling drive-chains, transport was slow, crowded, or dependent on animals, wind or steam. You had ships, boats, carriages, horseback, or your own two feet.

When it came to pre-bicycle travel, there were three qualities that a journey could offer:

Fast, Private, Comfortable.

You may pick only two.

If it was fast and comfortable, such as a railroad-train, you were resigned to sharing the carriage, and even the compartment, with others.

If it was private and comfortable, such as a carriage, then it certainly wasn’t fast. The average speed of horse-drawn transport in the 19th century was about seven to ten miles an hour at best. Walking falls into the same bag: private and relatively comfortable, but don’t expect to get anywhere in a hurry.

If it was fast and private, such as riding on horseback, alone, then it certainly wasn’t going to be very comfortable, being jolted around in a saddle for hours on end.

What was needed was a fast, relatively comfortable, individual mode of transport, one that relied purely on the rider for propulsion, and which didn’t need to be fed, fired, stabled, stoked, sailed, steamed or otherwise externally operated.

With the internal combustion-engine still a dream, and coal-fired steam-carriages being large, loud, slow and unpredictable (to say nothing of dangerous), there was a serious market for a convenient, fast, practical machine which a rider could use for individual transport: The Bicycle.

The First Bicycles

The first serious attempt at a bicycle-like machine was the German-made ‘hobby-horse’ or ‘dandy-horse’ machine of the 1810s.

The ‘Dandy Horse’ bicycle was a fascinating…um…experiment. It was hardly what you could call a bicycle, and it was never utilised as a serious mode of transport. It was seen more as a toy, for the use and amusement of the ‘dandy’, the well-dressed, leisured, upper-class gentleman of Regency-era Europe. As you can see, the Dandy Horse has no seat to speak of, no driving-mechanism, no pedals, not even a real handlebar! Steering and propulsion are rudimentary at best, and without any form of suspension, riding one of these on the rough, dirt roads of 1810s Europe would’ve been hard on the back and spine!

You didn’t so much ‘drive’ or ‘operate’ the dandy-horse as ‘glide’ on it, rather like a skateboard. You kicked it along the ground with your feet to build up speed and then coasted along until the momentum gave out. An amusing gimmick for a Regency garden-party, but hardly a practical form of transport!

During this time, the word ‘bicycle’ had not even been coined. And wouldn’t be for several decades. Human-powered, wheeled land-machines were called ‘velocipedes’, from the Latin words for ‘fast’ (as in ‘velocity’) and ‘foot’ (as in ‘pedestrian’). And as the 1800s progressed, there was a growing range of fantastical and ridiculous ‘velocipede’ machines with which to delight the population of Europe.

The next advancement in bicycle technology came from France: the pedal-driven contraption known as the…um…’velocipede’, which appeared in the early 1860s and is generally credited to Pierre Michaux and Pierre Lallement.

Any long-term readers of this blog may fancy that they’ve heard the name ‘Niepce’ in connection with early bicycles. And you’d be right: Nicéphore Niépce, who was also instrumental in the development of modern photography, had tinkered with an improved dandy-horse of his own as far back as 1818. But the pedal-driven machine of the 1860s was not his doing; he had died in 1833.

The French velocipede of the early 1860s was, in any case, not a great departure from what had existed before.

The velocipede differed from the earlier ‘dandy-horse’ in only a couple of ways: the front wheel now had pedals, and there was a proper seat or saddle, adjustable to the height of the rider, along with proper handlebars and steering. But beyond these additions, the French velocipede was not much of a leap forward.

A French ‘velocipede’ of the 1860s. Note the presence of the handlebars and steerable front wheel, and the centrally-mounted saddle

The Ordinary Bicycle came next. Invented in the late 1860s, the Ordinary was the first machine to be specifically called a ‘bicycle’, from ‘bi’, meaning ‘two’, and ‘cycle’, from the Greek for ‘wheel’. The Ordinary also introduced something which has become commonplace among all bicycles to this day: wire-spoked wheels!

The Ordinary was variously called a High Bicycle, a High Wheeler, or, most famously of all – a Penny Farthing, after the largest and smallest denomination coins in circulation in Britain at the time. (The nickname ‘boneshaker’, sometimes attached to it, properly belonged to the unsprung, wooden-wheeled velocipedes that preceded it.)

The Ordinary was the first bicycle for which there was any serious commercial success, and they became popular for personal transport, as well as being used as racing-machines!

Despite its relative popularity, the Ordinary had some serious shortcomings: there were no brakes, there was no suspension, and they were incredibly dangerous to ride! The immense front wheel could tower up to six feet in the air, which made mounting and riding these machines quite a feat of acrobatics in itself! Accidents could cause serious injury, and stopping, starting, mounting and dismounting were all big problems. Something better had to be devised!

The Safety Bicycle

The Ordinary or ‘Penny Farthing’ was one of the first practical bicycle designs, but its many shortcomings and dangers meant that something better had to be found. Enter the ‘Safety Bicycle’.

The ‘Safety Bicycle’ is the direct ancestor to all bicycles manufactured today.

The prototype ‘safety bicycle’ came out in the late 1870s, in response to public dissatisfaction with the fast, but dangerously uncontrollable, Penny Farthing.

Henry John Lawson (1852-1925) developed the first such machine in 1876. Lawson, the son of a metalworker, was used to building things, and loved tinkering around with machines.

Lawson’s machine differed from others in that the rider sat on a saddle on a metal frame. At each end of the frame were spoked wheels of equal size, with a handlebar and steering-arrangement over the front wheel. The rear wheel was powered by the use of a simple crank-and-treadle-mechanism, similar to that used on old treadle-powered sewing-machines, a technology familiar to many people at the time.

The great benefit of Lawson’s bicycle was that the front wheel was used solely for steering, the rear wheel solely for propulsion, and the rider’s legs were kept well away from both! On top of that, the wheels were of such a size that the rider’s feet could easily reach the ground, should it be necessary to stop or dismount in an emergency. Lawson was certainly onto something!

Lawson updated his machine in 1879, with a more reliable pedal-and-chain driving-mechanism, but sadly, although innovative, his bicycle failed to catch on. All the extra parts and the radical new design meant it was hard to produce and too costly to be sold to the general public.

Although Lawson’s machine was a commercial failure, his invention spurred on the development of this new contraption: the Safety Bicycle! Building on what Lawson had already established, over the next few years inventors and tinkerers all over the world tried to produce a bicycle that would satisfy the needs of everyone. It had to be practical, fast, easy to use, and safe to ride, mount and dismount; it had to stop easily, start easily, and be easy to control.

All manner of machines came out of the workshops of the world, but in 1885, one man made something that would blast all the others off the road.

His name was John Kemp Starley.

Starley (1854-1901) was the man who invented the modern bicycle as we know it today. Every single one that we see on the road today is descended from his machine.

Building on the ideas of Mr. Lawson, Starley rolled out his appropriately-named ‘Starley Rover’ safety bicycle in 1885.

The Starley Rover was revolutionary. Like the Lawson machine, it had equal-sized (or near-equal) spoked wheels, a diamond frame made of hollow steel tubing, a seat over the back wheel, and handlebars over the front wheel. In the middle was a pedal-powered chain-drive, linking the pedal-sprocket to the rear wheel with a long drive-chain.

By the late 1880s, the modern bicycle had arrived. It was Starley who had brought it, and he cycled off into the history books on one of these:

This model from the late 1880s has everything that a modern bicycle has, apart from a kick-stand. And this is the machine that has revolutionised the world of transport ever since!

The ‘Rover’ was so much better than everything that had come before it. It was easy to ride, easy to mount, easy to dismount. It sat close to the ground, but its smaller wheels did not compromise its speed, because of the 1:2 ratio between the pedal-sprocket and the rear-wheel sprocket: one turn of the pedals spun the rear wheel twice. You could reach tremendous speeds without great exertion, and you could stop just as easily!
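
To get a feel for what that gearing meant in practice, here’s a quick back-of-the-envelope sketch in Python. The 26-inch wheel and the 60 rpm pedalling cadence are purely illustrative assumptions, not historical specifications:

```python
import math

# Rough speed estimate for a chain-driven safety bicycle.
# The 1:2 gear ratio comes from the text; the wheel size and
# cadence are illustrative assumptions only.

gear_ratio = 2.0           # rear wheel turns twice per pedal revolution
wheel_diameter_in = 26     # assumed wheel diameter, inches
cadence_rpm = 60           # assumed easy pedalling rate, revolutions per minute

wheel_circumference_m = math.pi * wheel_diameter_in * 0.0254   # inches -> metres
speed_m_per_min = cadence_rpm * gear_ratio * wheel_circumference_m
speed_km_h = speed_m_per_min * 60 / 1000

print(f"~{speed_km_h:.1f} km/h")   # roughly 15 km/h (about 9 mph) at a gentle cadence
```

In other words, a rider at an easy cadence could keep pace with a trotting horse, without the six-foot front wheel that made the Ordinary so dangerous.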

The Bicycle Boom!

At last! A functional, fun, fast machine. Something you could ride that was safe, quick, light, portable, quiet, comfortable, practical, and which could get you almost anywhere you wanted to go!

With machines like the Rover, and the ones which came after it, all other bicycle-designs were considered obsolete! The Rover had shown the way, and others would follow.

With the success of this newly-designed bicycle came the cycling boom of the 1890s! For the first time in history, you didn’t need a horse to get anywhere! You needn’t spoil your best shoes in the mud! You didn’t have to worry about smoke and steam and soot! Just roll your bicycle onto the road, hop on it, kick off, and down the road you went. What a dream!

With a truly practical design, the potential of the bicycle was at last fully realised. The ordinary man or woman on the street now had a machine which they could ride anywhere! That said, most bicycles in the late Victorian era were still expensive toys for the wealthy. But nonetheless, they were used for everything from cycling through the park, to running errands around town, cycling to and from work, visiting friends and relations across town, and taking in the sights! What a wonderful invention!

The ‘Gay Nineties’, as this period of history is fondly called, saw the first big boom of the bicycle. Or a medium-sized one, at any rate. There were still a few problems: bicycles were still rather expensive. And it was considered scandalous for a woman to ride a bicycle! Women opened their legs for one thing, and one thing only. How dare they sit, mounted…on a bicycle! Lord knows what other things they might be mounting next!

Women and Bicycles

A woman on a bicycle? Who’da thunk it?

The mere idea of this radical collaboration sent Victorian men into a tizz! Famously straitlaced and buttoned-up, Victorian morality dictated that a woman’s legs remained covered and obscured at all times. In fact, legs of ANY kind had to be covered at all times. Some people even draped floor-length covers over their pianos to prevent offense to visitors!

Women were generally expected to ride a horse side-saddle. But it was impossible to do this on a bicycle, since both legs were required to drive the pedals. And it was also impossible to ride a bicycle with the huge, floor-sweeping dresses and skirts of the era.  Something had to be done!

Fortunately, tailors came up with a solution!

The second half of the 1800s saw the arrival of the Rational Dress Movement, also known as the Victorian Dress Reform. Aimed mostly at women, this movement said that it was impractical for women to wear the clothes that they did, and still be expected to do all their wifely and womanly duties. The clothes were too bulky, too restricting and far too uncomfortable! Especially for such activities as sports, riding, walking and bicycling! Something had to be done! And fortunately, something was.

It came about in the 1850s, when Elizabeth Smith Miller of New York State invented a sort of pair of baggy trousers for women. When the wearer’s legs were together, the garment looked like a full skirt, but it allowed far greater comfort and freedom of movement.

Women’s Rights advocate Amelia Bloomer, a strong supporter of more sensible women’s attire, liked the idea of these newfangled trousers, and they were eventually named after her: ‘bloomers’.

With bloomers, a woman could ride a bicycle safely and comfortably. But even if she didn’t have bloomers, a woman could still ride a bicycle in a skirt. She simply had to buy a woman’s bicycle!

Instead of a regular bicycle with a diamond-shaped frame, a woman could buy a step-through bicycle, like this one:

A step-through was identical to a regular bicycle in every way, except one. Figured it out yet?

Without a central bar between the handles and the seat, it was possible for a woman wearing a skirt to ‘step through’ the frame, so that she could get her feet either side of the pedals. Then, she simply hopped onto the seat, put her feet onto the pedals, and cycled away!

If that wasn’t handy enough, a woman could also purchase bicycle-clips, or ‘skirt-lifters’, which attached at the waist of her dress or skirt and trailed down its sides. These clipped onto the fabric to keep the hem of the skirt or dress off the road, and also away from the pedals, where the fabric might get caught and tangled in the drive-chain!

The Safety Bicycle was ideal for women. Even with bloomers, bicycle-clips or skirt-lifters, it was almost impossible for a lady dressed in Victorian or Edwardian garb to operate a Penny Farthing! The machines were too big, too cumbersome, far too unstable, and considered generally unladylike to ride!

With the safety bicycle, a woman was able to ride with much greater comfort and security. The risk of accidents was smaller, they were easier to mount and dismount, and much easier to operate and control.

The Social Impact of the Bicycle

From the mid-1880s onwards, the bicycle became more and more popular, as safer, easier-to-ride models were invented, produced, and put on sale to the general public around the world. Bicycles caught on quickly, and were popular then, as they are now, for the very same reasons.

They provided free, motorless, quiet, smooth, quick transport, without the need of a horse. They were relatively easy to ride and control, and with a little practice, you could use one to get almost anywhere, and so much faster than walking!

A bicycle also had load-bearing capabilities, and could be used to transport and carry all kinds of things, provided that they could either fit in the front basket, or were strapped securely enough to the rear luggage-rack. Some bicycles even had side-satchels which hung over the back wheel for even greater storage.

Bicycles allowed people who previously couldn’t travel very far the chance to explore much further afield. Women and children were no longer restricted to riding in carriages, on railways, or on horseback – they could climb onto a bicycle and ride around the village, go to the park, cycle through town, or ride along the canal-paths. They did not need men, or older people, around to operate a horse and carriage, a railroad train, or a steam-powered canal-boat. They simply needed two functional legs and a decent sense of balance.

This ease of use and versatility allowed the bicycle to be used for almost anything. It was a commuting vehicle for office-workers and labourers. It was a cargo vehicle for anything from the weekly trip to the high street, to a day on the town. With the spread of bicycles came the rise of home-delivery and advertising. Butcher’s boys and apprentice bakers, shop-boys and telegraph-delivery boys could now provide swift and effective home-delivery of everything from bread and meat to parcels, mail, telegrams, and any pre-ordered goods small enough to be carried safely on a bicycle.

Their open, light frames meant that it was possible to hang signs from the horizontal connecting-bars between the seat and the handlebars. Local businesses could paint advertisements on these signs, or on the mudguards of their store-owned bicycles. In this way, a business could deliver merchandise or produce, and at the same time tell strangers where these things could be purchased.

Cycling clubs became incredibly popular. Friends and relations would gather and ride around the countryside for a day’s outing. They might go picnicking, or they might ride from town to town, visiting new shops, restaurants and public houses. This kind of freedom of movement had never been possible before. Not with a horse, which had to be fed, rested and saddled; not with a carriage, which was slow and cumbersome; not even with a steam train, which was restricted to the railway lines. Before the rise of the automobile, only a bicycle allowed this level of freedom. No waiting, no fuss. Jump on, kick off, and pedal down the road.

Bicycles in Literature

The impact of the bicycle can be seen in its inclusion in literature of the late Victorian and Edwardian age. In ‘The Adventure of the Solitary Cyclist’, by Sir Arthur Conan Doyle, Sherlock Holmes’ client is a piano-teacher who uses her bicycle as her main mode of transport, and who is shadowed everywhere by another cyclist.

In the mid-1890s in Australia, Andrew Barton ‘Banjo’ Paterson wrote the famous comic poem, “Mulga Bill’s Bicycle”. The cocky Mulga Bill declares that he can control absolutely any form of transport, even this newfangled ‘safety bicycle machine’. He purchases one from the local store and cycles off down the street, before losing control of the machine and spectacularly crashing it into a creek, deciding thereafter to stick to riding a horse!

The Bicycle in Wartime

During times of war, the bicycle proved to be a very popular mode of transport. Driving off-road was almost impossible, and at any rate, petrol was often in short supply and severely rationed. On the home-front and on the battlefront, civilians and soldiers often left motor-vehicles behind and fell back on the old-fashioned, reliable bicycle to get themselves around. During the First World War, the British Army even formed bicycle infantry units! Bicycles didn’t need to be fed like horses, they were quieter, and they could get troops moving a lot faster!

During the Second World War, bicycles were used extensively by both sides. The Allies developed folding bicycles which soldiers could strap to their backs and jump out of airplanes with. Once they landed, they threw away their parachutes, unfolded their bicycles, braced them up, and cycled off to their rendezvous points.

The soldiers of the Japanese Imperial Army, perhaps to mock the British and their severe lack of preparation, invaded the Malay Peninsula and Singapore…on bicycles! It was impossible to drive tanks through the thick jungle, but a bicycle on a dirt track could go almost anywhere!

As well as being used for military transport, bicycles were also highly popular on the home front. With petrol-rationing strictly enforced, driving became almost impossible. Unless you were in a reserved occupation (you had a job which was essential to the war-effort), or had some other important status which allowed you a larger petrol-ration, chances were that your car was going to be up on blocks for the duration of the war.

Bicycles needed no petrol. They needed only whatever strength you could muster from your new diet of rationed food. At any rate, it was easier to cycle through the bomb-shattered streets of London, Coventry, Singapore and Shanghai than to drive a car! Most roads were so covered in craters, downed powerlines or the rubble from collapsed buildings that even if your car had fuel, it wouldn’t have made it down the road for all the obstructions!

Bells and Whistles

As bicycles became more and more popular during the Edwardian era, more and more features were added to them. One of the most famous additions is the bicycle-bell!

The idea of some variety of warning-device on a bicycle goes back to the 1870s, when the safety bicycle was in its infancy. The modern, thumb-operated bicycle-bell, which you clamp onto the handlebars of your machine, was invented in 1877 by John Richard Dedicoat, an inventor and eventual bicycle-manufacturer in his own right.

The bicycle bell works on a very simple spring-and-lever system. Pressing the lever on the side of the bell spins gears inside, which vibrate a pair of discs that jangle and ring as they move, a bit like a tiny pair of cymbals. This dingling noise is amplified by the bell-housing. The spring then pushes the lever back, ready for the next ring.

Dedicoat also invented a sort of spring-loaded step for helping people mount their bicycles. When Penny Farthings were still the rage, the step was designed to give the rider a boost into his seat. It worked rather well, but if the spring was more powerful than the rider was heavy, it might accidentally shoot him over the handlebars, instead of giving him a helping leg up onto his bicycle-seat!

The popularity of the safety bicycle meant that it was ridden at all times of the day, and night! To make it safer to ride at night, bicycle lamps were clipped to the front shaft, underneath the handlebars.

As with automobiles of the Edwardian era, bicycle headlamps were gas-fired calcium-carbide acetylene lamps. The reaction of water and calcium-carbide produced a flammable gas which, when ignited, gave a bright, sustained glow. These lamps and their reaction-chambers were small enough to clamp onto the front of early safety bicycles.

Calcium-carbide, in pellets, chunks, or even powdered form, was stored in the lower reservoir of a two-chamber reaction-canister. Water was poured into the upper chamber, and a valve between the two chambers allowed water to drip from the top canister onto the calcium-carbide stored below. The reaction produced acetylene gas, which escaped through a valve into the headlamp, where it could be ignited, producing light.
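
For the chemically-minded, this is the standard carbide-lamp reaction: calcium carbide and water combine to give acetylene gas and slaked lime:

CaC₂ + 2H₂O → C₂H₂ + Ca(OH)₂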

Increasing or decreasing the amount of light coming from your bicycle lamp was a simple matter of adjusting the opening of the water-valve on the reaction-canister. More water meant a faster reaction and more gas, which made the flame burn brighter. Less water meant a slower reaction, which reduced the overall supply of gas to the headlamp.

At the dawn of the 20th century, bicycles could also be fitted with dry-cell battery-powered headlamps, or with alternating-current dynamo-systems. A dynamo works very simply: the headlamp clips to the front of the bicycle, and the dynamo mounts near one of the wheels, usually on the mudguard, or on the frame if there isn’t a guard. Engaging the dynamo presses a small roller against the bicycle wheel. As the wheel spins, it rotates the dynamo’s generator, which produces the electricity necessary to power the lamp.
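
A bit of arithmetic, with purely illustrative figures for the wheel and roller sizes, shows why such a small friction-driven roller spins fast enough to generate useful power:

```python
# Why a tiny dynamo roller spins fast enough to generate useful power.
# All figures here are illustrative assumptions, not specifications.

wheel_diameter_mm = 660    # assumed ~26-inch bicycle wheel
roller_diameter_mm = 25    # assumed dynamo roller diameter
wheel_rpm = 200            # roughly 25 km/h on a wheel this size

# The roller is friction-driven, so the surface speeds of wheel and
# roller match; the much smaller roller therefore spins much faster.
roller_rpm = wheel_rpm * (wheel_diameter_mm / roller_diameter_mm)
print(f"roller spins at ~{roller_rpm:,.0f} rpm")   # ~5,280 rpm
```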

The Bicycle Today

Whether it be a racing-machine, a means of commuting, an A-to-B mode of transport, a delivery-wagon, a cargo-carrier or a method of exercising, the humble 1885 safety bicycle remains essentially unchanged since its entrance onto the transport stage back in the closing decades of the Victorian era. The bicycle remains popular because of its simplicity, ease of use, and its seemingly endless practical advantages over various other forms of transport.

The Bicycle World Record


‘Flying Pigeon’ bicycle manufactured in China

Based in Tianjin, in northeast China, the Flying Pigeon is the most popular make of bicycle in the WORLD. In fact, it’s the most popular VEHICLE in the world. That includes motor-cars. The Flying Pigeon company was established in Tianjin in 1936. The Flying Pigeon model, after which the company was renamed, came out in 1950. The communist government in China demanded that the company produce a strong, practical, easy-to-use, and aesthetically pleasing bicycle. It had to ride well, and look good. And it’s been doing just that for the past sixty-odd years. Cars were expensive in China, and bicycles were far cheaper and more practical for the average Chinese worker. So much so that the Flying Pigeon was seen as a sign of prosperity in China.

Echoing Herbert Hoover’s famous promise of ‘a car in every garage’, Chinese leader Deng Xiaoping said that prosperity in China would mean every household owning its own Flying Pigeon bicycle.

Most popular car in the world: Toyota Corolla
Units made: 35,000,000+

Most popular bike in the world: Flying Pigeon
Units made: 500,000,000+

I think we have a winner.

More Information?

I found the documentary “Thoroughly Modern: Bicycles” to be very helpful. I wonder why… At any rate, it’s fascinating viewing.

World’s Top Five Most Successful Cars

Singer Sewing Machine – Bed-Extension Table

 

It’s taken months and years of searching, but my grandmother’s vintage Singer 99k sewing-machine is at last complete! It has reached this level of completion thanks to the procurement of the last, and most hard-to-find, Singer sewing-machine accessory…the bed-extension table. The extension-table may be seen here, hooked onto the needle-bar end of the sewing-machine:

It’s the thing with the three spare vintage lightbulbs on top. The lightbulbs are spares for the one which goes into the light-socket at the back of the sewing-machine. They came as part of the package.

The extension-table came as standard with some models of vintage Singer sewing machines, such as the Singer Model 99 and its variants. However, not all of Singer’s sewing-machines were sold with this very handy feature included, which I think is a pity. The table measures roughly eight inches by eight inches, and the steel hook at the end simply slots into the lock-plate of the machine-bed. It extends the sewing-machine bed. That’s why it’s called a bed-extension table. Duh!

Sadly, these handy little extension-tables are not easy to find these days, and I had almost given up hope of ever getting one. I had even considered fabricating a homemade one! But fortunately, I found this, instead.

Their handiness lies in the fact that they give you a larger work-area when sewing, to stop your pieces of fabric from flopping off the end of the sewing-machine (and possibly pulling out of alignment). They also give you somewhere to rest your left hand and arm as you feed the fabric through the machine.

This is what the extension-table looks like, when it’s housed inside the case:

You can see it in this picture from a 1930s Singer 99k user-manual. It’s at the bottom of the picture, labeled ‘D’.

It’s rather amazing how much those innovative Singer chaps could cram into such a restricted space as the lid of a sewing-machine! This is what the same arrangement looks like in real life; again, using my grandmother’s 99k as the example:

In all the same positions, you can see the green SINGER accessories box (on the left), the ‘?’-shaped knee-lever at the back, the oval-based green SINGER oil-can on the right, and at the bottom, the extension-table. Amazingly, even with all this stuff in-place, you can still put the lid comfortably over the top of the sewing-machine and lock it down tight!

Bed-extension tables. If you have a vintage Singer sewing machine and you don’t have one of these…start looking for one. They’re getting harder and harder to find, so don’t waste time!

The Bombing of Darwin – Australia’s First Taste of War

 

The United States, the Dominion of Canada, New Zealand, and the Commonwealth of Australia are often considered to have been virtually untouched by the ravages of the Second World War, even though this was not entirely true. About all that most people know about the bombing of Darwin is what’s featured in the film “Australia“, starring Hugh Jackman.

The United States naval-base at Pearl Harbor in Hawaii was hit hard in 1941 by a surprise Japanese air-raid which killed thousands of American servicemen and destroyed scores of planes and ships. But while the attack on Pearl Harbor has gone down in history as one of the most famous surprise-attacks of all time, most people have completely forgotten about another, similar, and even more devastating attack, which took place in northern Australia in the early months of 1942.

This posting will look at the famous Darwin air-raids, the two Japanese airborne attacks on the town of Darwin in Australia’s Northern Territory during the Second World War, and the effects that these raids had on the city and its inhabitants, and the rest of Australia.

Darwin, 1942

Darwin, named after the famous naturalist Charles Darwin, is the capital city of the Northern Territory of Australia. It was founded in 1869, and was originally named “Palmerston”; it gained the name “Darwin” in 1911. Darwin was a small fry among Australia’s bigger and more prominent cities. Population-centers such as Melbourne and Sydney were famous around the world; they were major ports and trading-centers. Darwin, by contrast, was a sleepy backwater town that most people had never even heard of!

In the 1940s, Darwin was little more than an isolated country town at the top end of Australia. Its population in 1940 was a minuscule 5,800 people. By comparison, Melbourne at the same time had a population of over a million. This, when the population of Australia numbered some 6,900,000 people in 1939.

Darwin and the Second World War

Darwin in 1939 was an isolated country town, at the top of the nation, but at the bottom of the population-ladder. War seemed far away, and any notion that Australia might be threatened by enemy action was laughable. Germany was on the other side of the world! Who cared what happened? If anything did happen, it wasn’t going to happen in Australia, anyway! Apart from the blackout, rationing and military service, life went on more or less as it had always done.

It’s widely believed that Australia was largely untouched by the War, which is more or less true. Air-raid sirens never wailed across the city center of Melbourne, and Sydney was never rocked by Japanese bomb-blasts, but the threat, real or not, hung in the air.

In the early years of the war, any idea that Australia might be threatened was passed off as sensational and unfounded. The main aggressor, Germany, was on the other side of the world. And Japan was more interested in China than Alice Springs. But in 1941, everything changed.

With the attack against Pearl Harbor, Australia realised that its safety was threatened…probably. The Japanese were never going to reach this far south! They’d be stopped at Singapore, and blasted into the sea! End of story. Roll over and go back to sleep.

Posters like this one from 1942 were widely believed to exaggerate the Japanese threat to Australia. However, they were probably closer to the truth than most people knew, or were willing to admit

However, the swiftness of the Japanese advances struck terror into the hearts of Darwinians. Since 1937, Japan had taken Peking, Shanghai and Nanking. It had bombed Hawaii, and in less than a month it had invaded and captured British Hong Kong. It had invaded American possessions in the South Pacific, and was making sweeping advances down the Malay Peninsula.

In February 1942, the island fortress of Singapore, the “Gibraltar of the East”, Australia’s first, last, and only line of defense against Japanese aggression, collapsed and surrendered after a battle lasting just one week!

Suddenly, Darwin felt very exposed.

The Threat Against Darwin

To protect against Japanese aggression, Darwin was to be Australia’s first mainland line of defense. To this end, it had been equipped with anti-aircraft guns, an airbase with fighter-planes of the Royal Australian Air Force, and even a small naval-base run by the Royal Australian Navy. These were to be the two main fighting forces which would meet the Japanese threat if it ever came south to Australia.

With the attack on Pearl Harbor and the swift and brutal Japanese advances through Southeast Asia, Australia began to feel increasingly threatened. In the days and weeks after the Japanese December-1941 offensive in the South Pacific, the vast majority of Darwin’s civilian population was evacuated, and the town’s already small population shrank from 5,800 in 1939 to just 2,000 people in 1942. Most of these 2,000 were essential civilians, government and military officials, and servicemen. The majority of the women and children had been evacuated from town by railway, or else had boarded specially-chartered evacuation-ships, which steamed them south to Brisbane, Sydney or Melbourne, well out of harm’s way.

Darwin’s location at the top of Australia, its harbour, and its proximity to Japan made it a natural target for the Japanese. But as with many defense-plans in the South Pacific at this time, Darwin was not prepared for any kind of substantial and sustained attack.

British colonial bastions such as Hong Kong and Singapore had been overrun in days and weeks. The very might of the United States Navy had been challenged! What chance did a tiny, sparrow-fart town in the middle of nowhere have, against such a superior enemy?

Why the Japanese Attacked Darwin

If Darwin was such a tiny, insignificant town, with barely any armed forces or defenses to speak of, why did the Japanese see it as such a threat and target?

As with any real-estate…location, location, location.

Darwin’s location and its large harbour made it a natural base for the Allies. Any British, American or Australian forces in the area would surely gather there. They would use the harbour for their warships, and the flat ground around the town for their flak-guns and air-bases. At the time, the Japanese wanted to destroy any and ALL competition in the area, no matter how large or small. Their next target, after China, Hong Kong, Singapore, Malaya and the islands of the South Pacific, was the Dutch East Indies (what is today Indonesia).

To take the Dutch East Indies without opposition, the Japanese had to attack Darwin, to knock out any chance of the Allies mounting some sort of counterattack. And this is why Darwin became a target.

Darwin’s Defenses

Despite the threat against Darwin, the town’s defenses were ridiculously small. Darwin Harbour held 45 ships, and the surrounding airfields had only 30 airplanes. Of the 45 vessels in the harbour, 21 were merchant-ships. Among the other 24 ships, five were destroyers (one of these was the U.S.S. Peary), and another was the U.S.S. Langley, a vessel launched back in 1912 and converted in the 1920s into the U.S. Navy’s first aircraft-carrier!

To protect against the threat of a Japanese air-attack, Darwin was more than capably defended by 18 anti-aircraft cannons, and a smattering of WWI-era Lewis-style machine-guns.

But they had hardly any ammunition between them. And hadn’t for weeks. As a result, the guns had never been fired, and the crews who were to operate them had never been trained!

On top of everything else, Darwin had almost no air-raid precautions. It had only one operational air-raid siren, barely any shelters, no radar, and barely any lookout posts.

At any rate, even if everything had been working, they still wouldn’t have been able to mount any serious defense. It was estimated that to defend Darwin effectively, the town would require at least three dozen anti-aircraft guns, and at least 250 aircraft.

Instead, it had barely twenty guns, and only thirty aircraft.

In the event of an enemy air-attack on Darwin, civilian aircraft-spotters on nearby Bathurst Island (namely the local priest, Father John McGrath) were to sight the aircraft, identify them, count their numbers, and then relay this information via radio to the authorities in Darwin. Radio-operators in Darwin would then sound Red Danger over the air-raid sirens (the famous, classic high-low wail of an air-raid siren), signalling for the population to seek cover.

The warning would only give people a few minutes to duck and cover, but it gave them a fighting chance to seek shelter before the Japanese reached Darwin. At the sound of the sirens, the flak-cannons would be manned and loaded, and the aircraft on the ground would be readied for take-off, to engage the incoming enemy.

That was how it was supposed to happen.

The Darwin Raid: 19th February, 1942

Less than a week after the fall of Singapore on the 15th of February, Australia was about to find out how vulnerable it really was. With a flimsy northern defense, nearly all its soldiers fighting in Africa or the Middle East, or captured in the South Pacific, hardly any air-power, and hardly two ships to rub together, Australia was ripe for the taking.

On the 19th of February, Japanese aircraft-carriers sailed south towards Australia. They stationed themselves in the Timor Sea, off the northern coast, and sent in over 200 fighter and bomber aircraft. 242, to be precise.

242 Japanese aircraft, against just 30 aircraft belonging to the Royal Australian Air Force.

As the planes flew south towards Australia, they passed over Bathurst Island. Father McGrath, the mission priest on the island, spotted the aircraft and radioed a warning to land-stations near Darwin that a large concentration of aircraft was headed their way. Another aircraft-spotter on Melville Island also saw the planes, and he, too, sent a radio-warning to Darwin.

However, much like at Pearl Harbor, the authorities believed the aircraft to be returning American fighter-planes, which had been out on practice-runs and recon-missions. So no heed was paid to these radio-warnings. The sirens remained silent and no guns were manned in preparation. Darwin was a sitting duck.

The First Raid

The first raid against Darwin came at 10:00am that morning. Even though the town had been warned well in advance by its aircraft-spotters, no action was taken between about 9:15, when the first radio-warning was sent out, and 10:00am, a period of forty-five minutes. Then, the bombs began to fall.

With no siren warning at all, the remaining civilian population of Darwin was bombed relentlessly by the Japanese. After the first explosions, the town’s single operational air-raid siren went off, sounding the alarm, but it was already too late.

The ships in the harbour were bombed and strafed; among the casualties was the U.S.S. Peary, which was hit and sunk, just one of eight ships destroyed. In the town, bombs rained down, destroying vital structures such as the docks (where 21 longshoremen were killed when the quays received a direct hit) and Government House. The Darwin Post Office was obliterated in a direct hit, and the postmaster and his family, sheltering in the nearby air-raid shelter, were killed instantly.


The town post office after the raid

The anti-aircraft defenses of Darwin were woefully unprepared for the raid. For nearly all the soldiers there, this was the first time they’d fired any sort of gun at all! Most of the ground units had no rifles. And if they had rifles, they had no ammunition. And if they had ammunition, they had no training, so most of the shots went wild. Nevertheless, of the 188 aircraft that struck Darwin in the first raid of the day, seven were shot down by Allied flak-guns. A paltry number. The 188 planes of the first wave devastated much of the town and destroyed the two airbases nearby, as well as wreaking havoc on the harbour and the ships therein.

The Second Raid

At 10:40am, the first raid ended. But another came a few minutes before midday. This raid, consisting of the remaining 54 of the full force of 242 Japanese airplanes, attacked the airbases and town yet again, in a smaller raid lasting just 20 minutes.

At the end of the second raid, the All Clear sounded and the damage was examined. 23 of Darwin’s 30 airplanes had been destroyed; in all, 10 ships had been sunk and another 25 damaged. 320 people had been killed, whether by drowning, burns or bombing, and another 400 people had been injured.

The Aftermath

The air-raids on Darwin were devastating on many levels. Although the majority of the population had been evacuated before the raids, poor preparations and management meant that even with a reduced population, the town suffered high casualty-rates and significant damage. Electrical power was cut, water- and gas-mains were destroyed, and telecommunications were disrupted.

The town post-office was blown to oblivion, along with the town postmaster and his family.

What followed the raids was a complete breakdown of civil and military leadership. Soldiers looted empty houses, and evacuation-marches were bungled, with the result that soldiers and airmen were scattered all over the Northern Territory with no definite rallying-point.

The damage and disaster was on such a huge scale that for days, weeks, months, years and even decades after the bombings, the full extent of the catastrophe was hidden from the public.

A Dog Named Gunner

Out of the raids on Darwin came one remarkable story about a dog: an Australian Kelpie puppy called ‘Gunner’. Gunner’s claim to fame was serving as canine radar for Allied military forces in the Darwin area during the Second World War.

Gunner possessed remarkably sharp hearing, and was able to detect the sound of incoming aircraft from miles away. Furthermore, he was able to differentiate between friendly Australian and American airplanes, and enemy airplanes flown by the Japanese, based on the sounds of their engines.

Gunner was injured during the raid on Darwin and was taken to the nearby hospital for treatment. The doctor on duty insisted that he couldn’t treat the dog without knowing its name, rank and serial-number! Gunner’s owner, Percy Westcott, shot back that the dog’s name and rank was ‘Gunner’, and that he held serial No. 0000 in the Royal Australian Air Force!

Gunner’s remarkable ability to accurately alert ground-crews to incoming enemy attacks was soon noticed. His success-rate at picking up on enemy aircraft was so high that Westcott’s commanding officer gave Westcott permission to operate a portable air-raid siren whenever Gunner started whining and whimpering, to alert his comrades of an incoming Japanese raid.

Gunner’s extremely sharp hearing meant that he was better than radar; on more than one occasion, he accurately picked up on an incoming raid up to twenty minutes in advance, far beyond the capabilities of radar-equipment at the time!

During the later stages of the war, Gunner’s owner, Westcott, was posted to Melbourne, and had to leave Gunner behind in Darwin. What happened to the dog remains unknown.

The Effect of the Raids

Australia had previously considered itself untouchable by the hand of war. The war was happening in Europe, anyway! And in Asia, the might of the British Empire would protect Australia from harm.

After these first raids, Australia realised its own vulnerability, and made moves towards securing its own defence. One of the most significant moves was to recall thousands of Australian troops (then fighting in the Middle East and Africa) back to their homeland, a decision made by prime minister John Curtin.

Curtin’s decision was a popular one…but only with Australians. He encountered fierce resistance from both the American and British governments, especially from Winston Churchill, who wanted to send the Australian troops to Burma to fight the Japanese. However, Curtin was so worried about Australia’s position in the war that he insisted on overruling Churchill and having the troops steamed home as soon as possible, which did eventually happen, after many lengthy exchanges of letters and telegrams.

Future Raids on Darwin

Darwin, along with other cities and towns in northern Australia, was bombed repeatedly throughout 1942-43. By the time the war ended, the Australian mainland had been hit by no fewer than 62 separate air-raids in the space of two years.

More Information?

Looking for more information? I strongly suggest watching the documentary: “The Bombing of Darwin: An Awkward Truth”, about the air-raids, and the cover-up which followed.

Anzacday.org website-entry.

Tales of Robin Hood – The History Around an Outlaw

 

Whether or not Robin Hood, the legendary outlaw of English folklore, ever really existed…is entirely up in the air. At best, Robin Hood can be said to be an amalgamation of a variety of actual outlaws of the period; at worst, a romanticised figure of the age. But while Robin Hood may not have been a real person, his world and everything about it still fascinates us to this day. Just a few years back, in 2010, we watched Russell Crowe in “Robin Hood”. Centuries on, we remain enthralled by this fantastical figure who may never even have lived.

Robin Hood was an outlaw who lived in Sherwood Forest, in the English midlands county of Nottinghamshire. So famous is his legend that the flag of Nottinghamshire even has a picture of Hood on it! Hood was known as an archer, a swordsman, and a crusader of sorts, who stole from the rich to give to the poor. Here, we’ll look at the various parts of his legend and just how romantic and brave they really were.

Robin Hood: Outlaw at Large

Before Robin Hood was anything else, an archer, a swordsman, an all-round good-guy, he was an outlaw, living in Sherwood Forest in Nottinghamshire. Gee, it must be nice, living in the midst of nature with your band of merry men and Maid Marian, holding up stagecoaches, and giving money and food to the needy.

…Not really.

In Medieval times, being an outlaw was a real problem. To become an outlaw, you had to have committed a crime, of course. And if the prosecuting party (the king, the local sheriff or landlord) did not want you executed, he could simply declare you to be an outlaw. Or, in the Latin legalese: Caput Lupinum.

To be an outlaw meant that the law no longer applied to you. You were literally ‘outside’ the law. You had no obligation to follow it. However, it also meant that the law had no obligation, thereafter, to protect you! Enter ‘Caput Lupinum’.

It literally means ‘Head of the Wolf’, or ‘Wolf’s Head’. To be branded a wolf’s-head outlaw meant not only that you were outside the law and its protection, but that you would forever be hunted…like a wolf. And, like a wolf, anyone who killed you, no matter how or where it was done, automatically received the king’s royal pardon. There was no price or penalty to be paid by anyone for the death of a wolf. Or an outlaw. They were considered scum, and anyone who successfully killed an outlaw was seen as doing the king (and his subjects) a favour.

Robin Hood: The Archer

In the days of Robin Hood, the main long-range weapon was the bow and arrow. Known since antiquity, bows and arrows were simple but lethal weapons, able to bring death to their targets from well over a hundred yards away. Robin Hood was supposed to be an excellent archer, able to hit targets from impossible distances with remarkable accuracy.

But what was the reality of medieval archery?

To be an archer took great skill, skill and experience gained over years of practice. It took skill to aim and shoot reliably. But it also took great strength. No weakling could simply pick up a bow, load an arrow and fire it. Considerable arm-strength was required to draw the bowstring back, to produce the energy needed to send an arrow over a hundred yards and have it strike with enough force to kill, or at least injure, your enemy or quarry.

Before the age of firearms, archers were essential to any army, able to stand well back from the field of battle and rain down volley after volley of lethal fire from above, from the relative safety of a hilltop or from behind a castle wall. Since archers were so important, the practice of archery in England was made compulsory by law. Anyone desirous of becoming an archer had to train from the age of seven (coincidentally, the same age at which a boy training to be a knight had to start!), to build up the speed, strength and accuracy required to shoot reliably. In villages and towns, archery-practice was mandatory: at least two hours, at least once a week. Usually this meant two hours on Sundays, since that was the one time that people in the community gathered together, for church. After religious services, the men would go out for target-practice.

Although bows came in several shapes and sizes, for a full-grown man the weapon of choice was usually the military longbow. Made from the wood of the yew tree, the longbow was not so named for nothing. Standing up to five or six feet tall, a longbow was generally designed to fire an arrow-shaft up to nearly three feet long!

The first book written in English on the subject of the longbow, and on archery in general, was produced in the mid-1540s by Roger Ascham (1515-1568). An educated man of letters, Ascham was a private tutor and a university lecturer. He also happened to be Princess Elizabeth’s Latin tutor; so when he wrote his book, titled “Toxophilus”, he dedicated it to King Henry VIII, Elizabeth’s father.

The Sheriff of Nottingham

We don’t generally associate sheriffs with England, do we? They’re something you find in the United States, along with their cohorts, the sheriff’s deputies. But the sheriff actually originated in England.

Originally, areas of land in England were governed by Ealdormen. Literally ‘Elder Man’ or ‘Older man’, meaning a man of age, and therefore, experience. These men were royal officials and were in charge of keeping law and order within their allotments of land. The position survives today in the word ‘alderman’.

Eventually, the alderman died out in that capacity, and his duties were taken over by another man: The Sheriff.

The original title was “Shire Reeve”. A shire is a stretch of land, synonymous with the word ‘county’. A shire reeve was the administrative official responsible for the preservation of law and order within that shire. Eventually, the two words were melded into one: “Sheriff”.

Much like a modern sheriff, the sheriff of Robin Hood’s day was responsible for the upholding of the law, such as the capture of outlaws like Robin Hood.

Rule Britannia: A History of the British Empire

 

From the close of the 1500s until the end of the Second World War, the British Empire grew, spread and eventually dominated the world; for two centuries, from the 1700s until the mid-20th century, it was the largest empire in the world.

By the 1920s, the British Empire covered some 22.6% of the world’s land area, around 13.1 million square miles (about 33.9 million square kilometers), and its subjects and citizens numbered some 458 MILLION PEOPLE. At its height, 20% of the people on earth were British, or British subjects, living in one of its dozens of colonies, dependencies or protectorates.

The British Empire was, is, and will forever be (until there’s another one), the biggest and arguably, the most famous, of all the Empires that the world has ever seen.

But how was it that the British Empire grew so large? Why was it so big? What was the purpose? What was to be gained from it? Why and how did it collapse? And what became of it? Join me on a journey across the centuries and oceans, to find out what caused the Empire to take root, grow, prosper, dwindle and decline. As this posting progresses, I’ll show the changing flags, and explain the developments behind each one.

The Need for Conquest

The British Empire was born in the late 1500s. During the reign of Henry VIII, England was a struggling country, largely on its own. Ever since the king’s break from Rome, and the foundation of the Church of England, the Kingdom of England had stood apart: most countries saw it as radical and nonconformist. It had dared to break away from Catholicism, the main religion of Europe at the time. England was seen as weak, and other countries, such as Spain, were eager to invade it, and either claim it for themselves or seat a Catholic monarch on the English throne.

It was to protect against threats like these, that Henry VIII improved on what would become England’s most famous fighting force, and the tool which would build an empire:

The British Royal Navy.

The Royal Navy had existed ever since the 1400s, mostly as a hodge-podge of ships and boats. There was no REAL navy to speak of. Ships were simply requisitioned as needed during times of war, and then returned to their owners when war was over. Even in Elizabethan times, British fishing-boats doubled up as the navy.

It was Henry VIII, and his daughter, Elizabeth I, who began to build up the Navy as a serious fighting force, to protect against Spanish threats to their kingdom.

But having a navy was not quite enough. What if the Spanish, or the French, tried to establish colonies elsewhere, where they could grow in strength and strike the British? There is talk of a new world, somewhere far to the West across the seas. If the British could grab a slice of the action, then they would surely be more secure?

It was originally for reasons of security, and eventually for trade and commerce, that the idea of a British Empire was conceived. And it was for these same reasons that the British Empire grew, flourished, and lasted for as long as it did.


The English flag. St. George’s Cross (red) on a white background

British America

In 1707, Great Britain emerges. No longer is it an ‘English Empire’, but a British one. Great Britain was formed by the Act of Union between Scotland and England (the latter already incorporating the Principality of Wales).

By the time of the Union, England and Scotland had already been ruled by the same family (the Stuarts) for a hundred years, ever since Elizabeth I died in 1603 and her cousin, King James VI of Scotland, inherited her kingdom as her closest eligible relative.

The flag of the Kingdom of Great Britain. The red St. George’s Cross with the white background, over the white St. Andrew’s Cross and blue background, of Scotland. This would remain the flag for nearly 100 years, until the addition of Ireland

It seemed only sensible, therefore, that since England and Scotland were ruled by the same family, they may as well be the same kingdom: the Kingdom of Great Britain.

By this time, British holdings had grown to include Newfoundland, and more and more holdings on the North American mainland. At the time, America was being carved up by the great European powers. France, Britain, Holland and Spain were all fighting for a slice of this new, delicious pie called the New World.

And they were, quite literally, fighting over it. Ever heard of a series of conflicts called the French and Indian Wars? From 1689 until 1763, the colonial powers fought for control over greater parcels of land on the American continent. America had valuable commodities such as furs, timber and farmland, which the European powers were eager to get their hands on.

By the end of the 1700s, Britain’s colonial ambitions and fortunes had changed greatly. It retained Newfoundland and had gained Canada from France, but had lost its American possessions to these new “United States” guys. Part of the deal with France over the Canadian lands was that the French settlers be allowed to stay. As a result, Canada in the 1790s was divided into Upper and Lower Canada (Ontario and Quebec today). Even in the 21st century, we have French-speaking Canadians.

British colonies in the Americas weren’t just limited to the top end, either. From the mid-1600s, the British also controlled Jamaica (a colony taken, this time, not from the French, but from the Spanish). British rule of Jamaica lasted from 1655 all the way until Jamaican independence in 1962!

Just as its former American colonies had provided Britain with fur pelts and cotton, Jamaica was also colonised so that it could provide the growing empire with a valuable commodity – in this case, sugar. In the 1500s, sugar was incredibly rare, and the few countries which grew sugarcane were far from England. Extracting and transporting this sweet, white powder was labour-intensive and dangerous. But now, England had its own sugar-factory, in the middle of the Caribbean.

British India

It was during the 1700s that the British got their hands on one of the most famous colonies in their growing empire. They might have lost America and gained Canada, but in the 1750s, they gained something much more interesting, thanks to an entity called the East India Company, a corporation which effectively colonised lands on behalf of the British.

In 1800, another Act of Union formed the United Kingdom of Great Britain (England, Scotland and Wales) and Ireland. The flag now depicts the diagonal red cross of St. Patrick, over that of St. Andrew, but with both below the cross of St. George. This has remained the British flag for over 200 years, up to the present day

Formed as a trading company to handle imports and exports out of the Far East, the East India Company (founded in 1600) got its hands on the Indian Subcontinent. And for a hundred years, between 1757 and 1858, it more or less controlled India on behalf of the British Government.

Indians were not happy about being controlled by a company. True, it had brought such things as trade, wealth, transport, communications and education to the Indian Subcontinent, but the company’s presence was not welcomed.

The beginning of the end of Company Rule in India came in 1857, a hundred years after the Company had established itself there. The Indian Rebellion of 1857 broke out when the Indian soldiers who worked for the Company rebelled over the Company’s religious insensitivity. Offended by the liberties which the Company took, and the insults it dished out, Indian soldiers in Company pay revolted against their masters.

The rebellion spread around India, and fighting was fierce on both sides. The fighting finally ended in 1859, with the rebellion’s defeat, but it also spelled the end of Company Rule: the Government of India Act of 1858 transferred control of India to the British Crown.

However, the British were not willing to let go of India. It had too many cool things. Like spices and ivory, exotic foods and fine cloth. Oh, and a little drug called opium.

In the end, the British formed British India (also called the British Raj), in the late 1850s.

To appease the local Indian population and prevent another uprising, a system of suzerainty was established. Never heard of it? Neither had I.

Suzerainty is a system whereby a major controlling power (in this case, Britain), rules a country (India), and handles its foreign policy as well as other controlling interests. In return, the controlling power allows native peoples (in this case, the Indians) to have their own, self-governing states within their own country.

When applied to India, this allowed for a multitude of “Princely States” (more than 500 of them by the time of independence). The princely states were ruled by Indian princes, or minor monarchs (the maharajahs), while the other states within India were ruled by the British. As such, India was thereafter divided into “British India”, and the “Princely States”.

British India was ruled by the Viceroy of India, and its legal system was determined by the big-wigs in London. The Princely States were allowed to have their own Indian rulers, and were allowed to govern themselves according to their own laws. Not entirely ideal, but much better than being ruled over by a trading company!

The Indians largely accepted this way of life. It was, in a way, similar to their lives under the Mughal Empire before. It was a way of life with which they were familiar and comfortable. In return for various concessions, changes and improvements, the Indians would allow the British control of their land.

The number of princely states rose and fell over the years, but this system remained in place until Indian independence was granted by Britain in the years after the Second World War.

The Viceroy of India was the head British representative in India. He ruled over British India, and was the person to whom Indian princes went if they had concerns about British involvement within India.

Pacific Britain

Entering the 1800s, Britain became more and more interested in the Far East. Britain realised that establishing trading-posts and military bases in Asia could bring it the riches of the Orient and a greater say in world affairs. To this end, it colonised Malaya, Singapore, Australia, New Zealand, Hong Kong, Fiji, Penang, Kowloon, Malacca, Ceylon (modern-day Sri Lanka) and Burma. It even tried to colonise mainland China, but only succeeded in grabbing a few small concessions from the Qing Government, such as in Shanghai.

The Pacific portion of the British Empire was involved heavily in trade and commerce, and a great many port cities sprang up in the area. Singapore, Hong Kong, Rangoon, Calcutta, Bombay, Melbourne and Sydney all became major trading-stops for ocean-liners, cargo-ships and tramp-steamers sailing the world. From these exotic locales, Britain could get gold, wool, rubber, tin, oil, tea and other essential, exotic and rare materials.

The British were not alone in the Pacific, so the need for military strength was important. The Dutch, the Germans and the French were also present, in the Indies, New Guinea, and Indochina, respectively.

Britain and the Scramble for Africa

The Industrial Revolution brought all kinds of new technology to the world. Railways, steamships, mills, factories, mass-production, telecommunications and improved medical care, to name but a few. And Britain, like other colonial powers, was eager to see that its colonial holdings got the best of these new technologies.

However, these improvements also spurred on the desire for greater control of the world. And so the second half of the 1800s saw the “Scramble for Africa”.

The ‘Scramble’ or ‘Race’ for Africa was a series of conquests by the colonial powers, each snatching up as much of the African continent as it could. The Germans, French, British, and the Dutch-descended Boers of southern Africa all duked it out to carve up hunks of Africa.

The French got most of northwest Africa, including the famous city of Casablanca, in Morocco. They also controlled Algeria. The British got their hands on Egypt, and a collection of southern African holdings (including the formerly Dutch Cape Colony, and the Boer republics won in the Boer Wars), which they united as the Union of South Africa. The British also got their hands on Nigeria, British East Africa (modern Kenya) and the Sudan. Egypt was never officially a British colony, but remained a British protectorate (a country to which Britain swore to provide military assistance, or ‘protection’). It was a crafty way of adding Egypt to the British Empire without actually colonising it.

British interest in Egypt and southern Africa was related less to what Egypt could provide the empire, and more to what it would allow the empire to do. Egypt was the location of the Suez Canal, an important shipping-channel between Europe and the Far East. Control of Egypt was seen as essential by the British for quick access to their colonies in the Far East, such as India, Singapore and Australia.

A map of the world in 1897.
The British Empire comprised the countries marked in pink

Justification for Empire

As the British Empire grew during the Victorian era and the early 20th century, through wars of conquest and rivalry with other European powers, some sort of justification seemed to be needed. Why should Britain control so much of the world? What gave it this right? How did it explain itself to the other European powers, or to the rising power of the United States? How did it justify the colonisation of countries to the peoples of the countries which it colonised?

Leave it to a writer to find the right choice of words.

Rudyard Kipling, author of “The Jungle Book“, was the man who came up with the phrase, “The White Man’s Burden“, in a poem he wrote in 1899.

Put simply, the burden of the white man, the white European man, was to bring civilisation, culture, refinement and proper breeding and upbringing to the wild and uncouth ‘savages’ of the world. Such as those likely to be found in Africa, the Middle East and the isolated isles of the South Pacific.

Britain, being naturally the most civilised, cultured, refined and most well-bred country on earth, producing only the most civilised, cultured, refined and most well-bred of citizens, was of course, the best country on earth, with the best people on earth, to introduce these wonderful virtues to the ‘savages’ of the world. And to bring them up to date with technology, science, architecture, engineering, and to imbue them with good Christian virtues. Britain, after all, had the best schools and universities: Eton, Harrow, Oxford, Cambridge, St. Peter’s; the list goes on. They were naturally God’s choice for teaching refinement, culture and all that went with it, to the rest of the world.

This was one of the main ways in which Britain justified its empire. By colonising other nations, it was making them better, more modern, and more cultured, in line with the West. It brought them out of the Dark Ages and into the light of modernity.

The British colonised certain countries (such as Australia) under the justification of the Ancient Roman law of “Terra Nullius”. Translated, it means “No Man’s Land”, or “Nobody’s Land” (“Terra” = land, as in ‘terra-firma’; “Nullius” = of no one, as in ‘null and void’).

By the British definition of Terra Nullius, a native people only had a right to sovereignty over its land if it changed the landscape in some manner, such as through construction, industry, craft, agriculture, or manufacturing. It had to show some degree of outward intelligence beyond hunter-gatherer society and starting a fire with two sticks.

They did not recognise these traits in the local Aboriginal peoples, and saw no evidence of such activities. Therefore, they claimed that the land was untouched, and that the people had minimal intelligence. Otherwise, they would’ve done something with their land! And since they hadn’t, they had forfeited their claim to sovereignty over it. Under the British definition of Terra Nullius, this meant that the land was theirs for the taking. Up for grabs! Up for sale! And they jumped on it like a kangaroo stomping on a rabbit.

The Peak of Empire

British control of the world, and the fullest extent of its imperial holdings, came during the period after the First World War. One of the perks of defeating Germany was that Britain got to snap up a whole heap of German ex-colonies. A lot of them were in Africa, but there were also some in the Far East, most notably, German New Guinea, thereafter simply named ‘New Guinea’ (today, part of Papua New Guinea).

It was during the interwar period of the early 20th century that the British Empire was at its peak. By the early 1920s, Britain had added such notables as the Mandate of Palestine (modern Israel), and Weihai, in China, to its list of trophies (although its Chinese colony did not last very long).

The extent of the British Empire by 1937. Again, anything marked in pink is a colony, dominion, or protectorate of the British Empire

The Colony of Weihai

For a brief period (32 years), Great Britain counted a small portion of China as part of its empire. Britain already had Hong Kong, but in 1898, it added the northern city of Weihai to its Oriental possessions. Originally, it was a deal between the Russians and the British. So long as the Russians held onto Port Arthur (a city in Liaoning Province in northern China), the British could have control of Weihai.

In 1905, the Russians suffered a humiliating defeat at the hands of the Japanese, in the Russo-Japanese War. Part of the Russian defeat was the Japanese occupation of Port Arthur. Britain then made a deal with Japan that it could remain in Weihai so long as the Japanese held onto Port Arthur. To appease the Chinese, the British signed a lease with the Chinese for twenty-five years, agreeing to return Weihai to the Chinese Imperial Government when the lease expired, which in 1905, meant that Weihai would be returned in 1930.

The Glory of the British Empire

The early 20th century was the Golden Age of the British Empire. From the period after the Great War, to the onset of the Second World War, Britain was powerful, far-reaching and dominant. British culture, customs, legal systems, education, dress, and language were spread far around the world.

Children in school learnt about the Empire, and the role it played in making Britain great. People in countries like New Zealand and Australia saw themselves as being British, rather than being Australian or New Zealanders. After the First World War, monuments and memorials were erected to those who had died for the “Empire”, rather than for Australia, New Zealand, or Canada. Strong colonial and cultural ties held the empire together and drew soldiers to fight for Britain as their ‘mother country’, who had brought modernisation, culture and civility to their lands.

The Definitions of Empire

If you read any old documents about the British Empire, such as maps, letters, newspapers and so-forth, you’ll notice that each country within the empire is generally called something different. Some are labeled ‘colonies’, others are ‘mandates’, some are ‘protectorates’, and a rare few are named as ‘dominions’. And yet, they were all considered part of the Empire. What is the difference between all these terms?

The Colonies

Also sometimes called a Crown Colony, a colony, such as the Colony of Hong Kong, was a country, or landmass, or part of a landmass, which was ruled by the British Government. The government’s representative in said colony was the local governor. He reported to the Colonial Office in London.

The Protectorates

A protectorate is pretty much what it sounds like. And it can be a rather cushy arrangement, if you can get it. As the name implies, a country becomes a protectorate of the British Empire when it allows the Empire to control certain parts of its government policies, such as foreign policy, and its policies concerning the country’s defence from foreign aggression. One example of this is the Protectorate of Egypt.

In return for allowing the British to control such things as foreign relations and trade, and in return for having British military protection against their enemies, a country’s ruler, or government, could continue running their country as they did, with certain things lifted off their shoulders. But with other things added on. For example, the British weren’t interested in Egypt for the free tours of the Valley of the Kings. They were interested in it because of the Suez Canal, the water-highway to their jewel in the Far East, known as India! In return for use and control of the Canal, the British allowed the Egyptians to run their own country as they had always done.

The Mandates

The most famous British mandate was the Mandate of Palestine (modern Israel).

In the 1920s, the newly-formed League of Nations (the direct predecessor to the U.N.) confiscated former German and Turkish colonies, and distributed them among the main victors of the Great War, chiefly Britain and France. Basically, Britain and France got Turkish and German colonies as ‘prizes’, or ‘compensation’, for the war.

Legally, these mandates were under the control of the League of Nations. But the League of Nations was, well…a League! A body. Not a country. And the League couldn’t govern a mandate directly. So it passed control of these mandates to the victors of the Great War.

The Dominions

On a lot of old maps, you’ll see things like the Dominion of Canada, the Dominion of Australia, and the Dominion of New Zealand. What are Dominions?

Dominions were colonies of the Empire which had ‘grown up’, basically. They were seen as highly-developed, modern countries, well-capable of self-governance and self-preservation, without the aid of Mother England. They were like the responsible teenagers in an imperial family, to whom old John Bull had given the keys to the family car, on the condition that they didn’t trash it, or crash it, and that they returned it in good working order.

The Dominions were therefore allowed to be more-or-less self-governing. After the Statute of Westminster in 1931, the Dominions became countries in their own right, no longer colonies in law. But they were still seen as being part of the Empire, and bound to give imperial service if war came. Indeed, when war did come, all the Dominions pledged troops to help defend the Empire.

There was talk of making a ‘Dominion of India’. India wanted independence, but Britain was not willing to let go of its little jewel. It saw making India a Dominion as a happy compromise between the two polar options of remaining a colony, or becoming totally independent.  However, a little incident called the Second World War interrupted these plans before they could be fully carried out.

War and the Empire

The British Empire was constantly at war. In one way, or another, with one country, or another, it was at war. The French and Indian Wars, the American War of Independence, the War of 1812, the French Revolutionary Wars, the Napoleonic Wars,  the Opium Wars, the Crimean War of the 1850s, the First and Second Afghan Wars. The Mahdist War of 1881 lasted nearly twenty years! Then you had the Boer War, the Great War, and the Second World War.

That Britain managed to engage in so many wars, survive them, and in some cases, prosper from them, was due in large part to its empire. In very few of these wars did Britain ever fight alone. Even when it didn’t have allies fighting alongside it, Britain’s fighting force comprised not only home-born Britons, but also a large number of imperial troops: Indians, Australians, Canadians, New Zealanders, and Africans. They all signed up for war! When Australia became a nation in 1901, instead of a gaggle of colonies, Australian colonial soldiers, freshly returned from the Boer War in Africa, marched through the streets of Australian cities as part of the federation celebrations.

“The Sun Never Sets on the British Empire”

The cohesion of the British Empire began to crumble during the Second World War.

Extensive demilitarisation during the interwar years had greatly weakened Britain’s ability to wage war. Britons and their colonial counterparts became complacent in their faith in the might of the British Navy, which for two hundred years, had been the preeminent naval force in the world.

Defending the Empire became increasingly difficult as it grew in size. In the years after WWI, Britain believed that the “Singapore Strategy”, its imperial defence-plan based around Singapore, would protect its holdings in the Far East.

The strategy involved building up Singapore as a military stronghold for the army, the navy, and the Royal Air Force. In the event of Japanese aggression, the Navy could intercept Japanese warships, and the air-force and army could protect Singapore from land- or air-based invasions. The Navy would be able to protect Singapore, Hong Kong and Malaya from Japanese invasion, or would be able to drive out the Japanese, if they did invade.

Under ideal circumstances, such a plan would have been wonderful. But in practice, it fell flat. The Royal Navy simply did not have the seapower that it once had. It had neither the ships and sailors, nor the airmen and aircraft, required to protect both England and Singapore. Great Britain was having enough trouble defending its own waters against German U-boats, let alone fighting Japanese battleships and aircraft carriers in the Pacific.

On top of that, Singapore simply wasn’t equipped to hold off a Japanese advance from any direction, by any means. Even though the British and colonial forces in Singapore vastly outnumbered those of the Japanese, the British lacked food, ammunition, firearms, artillery, aircraft, and naval firepower, resources already stretched thin by British disarmament during the 1920s and 30s.

The fall of Singapore after just one week of fighting the Japanese was a great shock to the Empire, especially to Australia and New Zealand, who had relied on Singapore to hold back the Japanese. In Australia, the fall of Singapore showed the government that Britain could not be trusted to protect its empire.

When Darwin was bombed, Australian prime minister John Curtin, defying Churchill’s wishes, ordered all Australian troops serving overseas (in the Middle East and Africa) to be returned home at once. For once, protection of the homeland was to take precedence over protection of the Empire. Since the Empire wasn’t able to provide protection, Australia would have to provide its own, even if it came at the expense of keeping the Empire together.

Even Winston Churchill, an ardent Imperialist, realised the futility of protecting the entire empire, and realised that certain sections would be impossible to defend without seriously compromising the defence of Great Britain. British colonies in Hong Kong, Malaya, Singapore and in the Pacific islands gradually fell to Japanese invasion and occupation.

By the 1930s, the Empire was already beginning to fall apart. Independence movements in countries like Iraq (a British mandate from 1920), Palestine (a mandate since 1920) and India were forcing Britain to let go of its imperial ambitions.

For the most part, independence from Britain for these countries came relatively peacefully, and in the years after the Second World War, many of the British colonies gained independence.

Independence was desired for a number of reasons: the simple desire of a country’s people to rule themselves, the lack of contact and cohesion felt with Great Britain, or in some cases, the realisation that the British Empire could not protect them in times of war, as had been the case with Singapore and Hong Kong.

The British Commonwealth

Also called the Commonwealth of Nations, the British Commonwealth took its modern shape in the years during and after the Second World War. The Commonwealth is not an empire, but rather a collection of countries tied together by cultural, historical and economic similarities. Almost all of them are former British colonies.

The Commonwealth recognises that no one country within this exclusive ‘club’ is below another, and that each shares equal status within the Commonwealth. It was formed when the British realised that some of their colonies (such as Australia, New Zealand and Canada) had become developed, cultured, civilised and powerful in their own right, and should not be made to feel like Britain’s underlings. Now the Dominions would not be above the Colonies, and the Colonies would not be below Britain; they would all be on the same level, part of the same ‘club’: the British Commonwealth.

The Empire Today

The sun has long since set on the British Empire, as it has on nearly all great empires. But even after the end of the empire, it still makes appearances in the news and in films, documentaries and works of fiction, as a look back at an age that was.

During the 1982 Falklands War, a conflict that lasted barely three months, the famous New York magazine “Newsweek” printed this cover:

An imperial war without the Empire…

In four simple words (“The Empire Strikes Back”), this title commented on the then-recent Star Wars film of the same name, on the former British Empire, and on the fabled might of the Royal Navy, which had made the formation of that empire possible, so many hundreds of years before.

Imperial Reading and Watching

The British Empire lasted for hundreds of years. It grew, it shrank, it grew again, until it came to dominate the world, spreading British customs, ideals, education, government, culture, food and language around the globe. This posting only covers the major elements of the history of the British Empire. But if you want to find out more, there’s plenty of information to be found in the following locations and publications. They’ll cover the empire in much greater detail.

The British Empire

“The Victorian School” on the British Empire.

Documentary: “The Fall of the British Empire”

Documentary Series: “Empire” (presented by Jeremy Paxman).

A Random History of Popular Foodstuffs – #2

 

This is a continuation of a previous posting, which I wrote a couple of years back. And it will cover the histories behind more popular foods which we take for granted today.

Jelly!

Mmm, jelly. Cold, jiggly, wobbly, sweet, wiggly, wriggly jelly! Or, as the Americans call it…Jell-O, which is actually a brand-name, not the name of the foodstuff itself. But jelly it certainly is.

These days, we associate jelly with dessert, with children, with ice-cream, and with catchy little TV jingles (“I like Aeroplane Jelly, Aeroplane Jelly for me…“). But for centuries, jelly was a luxury food. Incredibly laborious and time-consuming to produce, it could only be eaten by the richest of people, during only the most special of special feasts, dinners, parties, holidays or other significant occasions in a history that dates back to medieval times.

We’re familiar with jelly as that stuff that you buy in a packet. You pour the powder into a bowl, you mix it with water, you pour the sloshy, syrupy mixture into a mold, and then chuck it in the fridge or freezer to cool and set, into pretty, jiggly shapes which are red, and green, and yellow and purple, and which look like everything from flowers to pyramids.

That’s what jelly is today. But in older times, jelly was obtained only after hours and hours and hours of extremely labour-intensive work. Jelly wasn’t simply mixed with water and chucked in a cold spot. It was boiled, and strained, and purified, in a process that would eat up almost all the hours of the day. This is why it was eaten by only the wealthiest people, who could afford the servants and the time to make it.

So how do you make jelly the old fashioned way?

To make jelly as they might’ve done back in the Middle Ages, you first required gelatin. Gelatin comes from collagen, a type of protein. And you get collagen from…

…pigs.

For centuries, well up to the Georgian era, the only way to make jelly was to boil the feet of pigs or cattle. In an incredibly time-consuming process, the pigs’ feet would be placed in a pot of boiling water, and left to boil for the better part of eight or ten hours. This intense boiling extracted the gelatin from within the feet, and mixed it with the water. Once the gelatin had been boiled out, the entire mixture had to be strained. First, it was strained to remove the pigs’ feet. Then it was strained to remove any debris. Then it was strained to remove any fat. Then it was strained to remove any impurities. And then it was strained again. And again. And again.

The repeated straining and purification removed all the impurities from the mixture so that in the end, you were left with nothing but water, and gelatin.

Left on its own in a suitably cool spot, the gelatin would eventually solidify. If you wanted flavoured jelly, then it was simply a matter of mixing in the required fruit-juices, such as lemon, lime, orange, strawberries and so-forth. These extra ingredients being added, the entire mixture was stirred up, poured into a mold, and then dunked in the cellar (or other suitably cold room) to solidify and set.

It seems easy, but when making jelly could take the better part of the entire day, and could require the efforts of at least two people (there’s a lot of water to strain!), you can understand why, for centuries, it remained a food for the wealthy. Poor people simply did not have the time, the money, or the space to dedicate, or waste, on such a frivolous dessert.

It was not until the mid-1800s, when it was discovered that you could dry out the mixture and create gelatin powder, that it was possible to sell gelatin in a convenient packet for the average consumer. All the buyer had to do was mix it with water to help the powder congeal, flavour it to his or her taste, pour the mixture into a mold, and set it. Before that was possible, hardcore boiling and tiresome straining and purifying was the only way to make jelly.

Sausages

Oooh, we all love sausages. Beef, pork, chicken, lamb…delicious!

These days, most sausages are made with synthetic casings, although there are still a significant number of sausage-makers and butchers who manufacture sausages the old-fashioned way.

We love sausages. Convenient, easy to cook, easy to hold, easy to store and easy to hang up on a peg. We even have gourmet sausages stuffed with herbs and spices and cheese. But the origin of the sausage is far from gourmet.

Imagine a cow, or sheep, or chicken, or a pig. You’ve gutted it, you’ve taken off the ham, the bacon, the ribs, the cutlets, the various cuts of steak, the wings, the legs, the breasts, and everything really worth eating. What did you do with the rest? The carcass that’s left over?

Bones might be used to boil up for soup. Feathers, wool or fur might be removed for clothing. But there’s still the leftover carcass and the organs and innards that nobody wants. Now what?

If you lived in older times, you certainly did not throw it out. Catching and killing animals was hard work, and cooks were encouraged to cook and eat every single part of an animal which was worth eating…even the organs. Or the feet (if they weren’t being saved for jelly…). Or the head. The cheeks. Anything that wasn’t already removed. The offal, basically.

But how to dress the dregs of animals so that they looked appealing?

One way to do this was to take the intestines of the animal, pump water through them, wash them clean, and then fill the intestines with ground up animal leftovers, twist them into convenient lengths…and sell that, if you were a butcher, to your unsuspecting customers, or serve it to your diners, if you were a cook. It was still meat. It was still beef. Or pork. Or chicken. Or lamb. It was just…um…’modified’.

And that’s all a sausage is.

…did I put you off of your dinner yet?

In older times, all the leftovers from a dead animal were diced, sliced and minced up. Then, these animal unmentionables were pumped into the cleaned out intestines of the animal in question. The big long sausage was twisted around, every few inches, to make sausages of convenient lengths, and then the whole thing was cooked up.

Some butchers still make sausages like that today, although most cheaper sausages use synthetic casings (typically made of collagen or cellulose) instead. But it is, nonetheless, how it was done.

…Hotdog, anyone?

Pies


Pies…Cake is the lie

Mmm. We like pies. Chicken pie, beef pie, steak and kidney, apple, blueberry, custard-cream…sweet, savory, spicy, simple, splendid. We love pies!

One of the reasons we love pies is that they’re fun to make. We love creating pretty, patterned crusts, with criss-crossing strips, vents held open by pie-birds, pastry-leaves, and pretty, rippling, wavering sides.


A pie-bird. These painted clay birdies are stuck into the middle of pies to stop the pie-crusts from sagging during baking, and to provide a vent for steam to escape

But for all the effort, we know that before long, the crust and sides will be broken up, carved up, and devoured. And all our efforts will be dashed in a flurry of gravy, cream, sugar and crumbs.

But our love-affair with pies is only the end of a very long journey.

With pies, cakes and tarts, comes an interesting history.

Takeout Pies

For a long time, pies were not even baked at home. We have a romantic image of pies cooling on the window-sill after they’ve been baked, the wonderful smells wafting around the neighbourhood. Which they may well have done; but it’s a rather modern thing.

For centuries, pies were never baked at home. Until the introduction of the range stove in the 1700s, it was well outside the ability of the ordinary man or woman to do their own baking at home. Most homes did not have ovens. They had fireplaces. Fireplaces are great for roasting meat, cooking stews, boiling soup and providing heat and warmth, but they’re hopeless for baking. The smoke, flames and soot from the fire would ruin the pie, and the constantly wavering heat from the flames meant that the pie wouldn’t bake properly, anyway.

For a long time, pies were actually sent out to the nearest bakery to be baked. Here, the village baker would bake your pies for you. You dropped them off, and he marked the top of your pie in a manner that made it stand out (so you knew which one was yours, to differentiate it from the dozens of other pies in town!). He baked it, and then you came back later and picked it up.

The nursery-rhyme ‘Pat-a-Cake‘ recalls this era of history:

“Pat-a-cake, pat-a-cake,
Baker’s man, 
Bake me a cake as fast as you can,
Prick it, and poke it, and mark it with a B,
And put it in the oven, for baby and me!”

In the rhyme, just like in real life, a cake or pie was marked (‘with a B’, in this case), to differentiate Baby’s cake, or pie, from all the others in town, which were being baked at the same time, in the communal oven.

But before you even baked the pie, you had to put something into it. Filling! Back in medieval times, pie-fillings were a little more creative than what they are today. Two of the most common filling-choices gave us two of the most lasting, pie-related nuggets from history.

Before people got the idea of grinding up animal-guts and turning them into sausages, animal entrails were chopped up, boiled, and stuffed into a pie-casing. This pie was baked in an oven, and then served to the peasantry, low-ranking servants, and paupers. Entrails and guts and organs were called “umbles”. Serving “umble pie” to the poor gave the peasantry a constant reminder (as if they ever needed one!) that they were on the lowest rung of the social ladder, because all they could eat was “umble pie”, or “humble pie”, as it eventually came to be called.

These days, we’re used to separating sweet from savoury. You’d hardly have a beef and custard pie, would you?

…would you?

Sweet’n’Savory

Believe it or not, in medieval times, pies that mixed sweet with meat were pretty common! Beef would be mixed with raisins and dates and prunes, and baked together in a pie. This wasn’t necessarily because people liked it…but rather because it was one of the few ways that people had to stop food going bad!

The natural sugars found in fruit were used as a preservative to prevent the meat from going rotten. And often, fruit and meat were baked together, for this purpose.

These days, we don’t bake our meat and fruit together in a pie anymore. But we do have a leftover from that period – the Christmas “mince pie”. There isn’t any beef mince in these pies, but they’re called mince pies because they were originally made with meat, with the fruit acting as a preservative. Over time, the beef was removed, giving us a simple fruit ‘mince’ pie, the kind we know today.

Empty Shells

No, not shotgun-shells or bullet-casings…pie-casings!

The tradition of eating a pie, sweet, savoury, or a mix of the two, together with the crunchy pastry crust and casing, is actually a pretty modern development.

For much of history, when a pie was eaten, the pastry lid was removed, the contents (today, the fillings) eaten, and then the pie-casing (and the lid) was put back in the kitchen to be reused!…Again…and again…and again! For as long as the crust and casing remained fresh.

Why bother using the crust and casing when you have a pie-dish, though?

You have to understand a couple of things here…

This is a time before widespread refrigeration. Meat had to be cooked and eaten within 48 hours of being purchased fresh from the local market. There was nowhere to store it for longer than overnight without it going bad (unless you froze it, smoked it, or salted it).

To prevent meat going bad, cooks would bake it into a pie. Cooking the meat meant that it lasted longer, and that you could eat it, of course!

But why save the pie-crusts?

Until relatively recently, flour, the main ingredient in pie-casings, was an expensive commodity. Very expensive. In medieval times, the only way most people could get flour was to grow their own wheat, thresh their own grain, winnow their own wheatgrain, and then grind it by hand, or grind it at the mill owned by the local landlord (which the peasantry had to pay to use!). Even in later times, flour was expensive, and only the wealthy could afford to eat the fine, sifted, refined white flour which we love so much today. This was because the extra effort required to refine it made it more expensive.

The result? Most people couldn’t afford enough flour to bake a pie for every day of the week. You’d use up your flour to bake your pie and the meat inside. Then you’d use the same pie-crust over and over and over again until it started going bad, before eating it on the last night of the week. This was to make your flour last for as long as possible.

And the pie-crusts of older times were a lot different to the ones made today. These days, most people would complain…loudly…if you served them a pie with a crust that was too thick, since it would be impossible to crunch into, or get a fork or knife through. But in the days of serfdom and lords, pie-crusts could be upwards of an inch or two in thickness! This was so that they would last through the repeated bakings without burning and charring in the oven.

Bread

Not for nothing is bread nicknamed “the staff of life”. For centuries, millennia even, all over the world, mankind has survived on bread of some variety. White bread, wholemeal, mixed-grain, sourdough, rice-bread, cornbread, pita-bread…the list is almost endless. But what is the history of bread?

The origins of bread go back to the dawn of civilization. And its importance is just as up-there as its history. Hell, the Romans even created a whole ROOM just for bread. Ever wondered why your kitchen has a ‘pantry’? It comes from the Latin word ‘panis’, or…bread. A ‘pantry’ was the room in which bread, a staple of life, was stored. But here are a few things you may not know about bread…

The Upper Crust

The “upper crust” is a common expression meaning those of a higher social status, up there in the upper-class economic group. But have you ever wondered where the term ‘upper crust’ came from?

Yep. Bread.

Before the first modern stoves were invented in the Western world (Ca. 1700), baking bread was a hot, dangerous and ashy affair. Here’s how it was done…

The dome-shaped bread oven was filled with wood, which was then set on fire. The oven door was left open and the huge fire inside the oven was allowed to burn for hours, until it finally burnt out. Once the fire was out, the baker had the unenviable task of raking out the hot ash, charcoal and cinders, and shoving in dozens of loaves of bread at a time, using those big, wooden baking-paddles (so he didn’t burn his hands).

Burning a fire in the oven, and letting it burn down to ash, made the brick (or stone) inside the oven extremely hot. And it’s this stored heat, not the heat from the flames, which actually baked the bread. Once the bread was shoved in as quickly as possible, the oven door was shoved on, and extra bread-dough was stuffed around the edges of the door. This did the double job of sealing in the heat, and acting as an oven-timer: you could tell when the bread inside was baked by checking whether or not the dough on the door was also baked.

When the bread was baked, the door was ripped off, and the bread hauled out on paddles again.

Everything about baking bread relied on speed. It took so long to build, light and burn down the fire that bakers wanted to get the ash out of there, and the bread in there, as fast as possible. The result was that there was always a thin layer of ash on the oven-base. And during baking, this ash and soot would stick to the bottom of the loaves of bread. Eugh!

Picky rich people who wanted the best bread would slice their loaves horizontally instead of vertically, so that the burnt, sooty bottom crust of the bread could be given to the poor (the paupers, beggars and lepers), while they…the rich…kept the soft, soot-free…upper crust…for themselves!

Don’t Sit Under the Apple Tree…

Anyone who’s ever baked bread at home will know that one of the most frustrating things is the wait while the dough rises. After the bread-dough has been mixed and kneaded, it’s necessary to leave it alone so that the yeast inside the dough can ferment and give off gas, which allows the dough to rise, before it can be put into the oven.

But what if you didn’t have yeast, one of the key ingredients in breadmaking?

If you didn’t have yeast, you could do what medieval bakers used to do: take the dough out into the back yard, or the nearest available orchard, find a suitable apple tree, and stick the yet-to-rise bread-dough underneath it! And let nature take its course, as they say.

Yes, this actually works. And it works because apples, which grow on apple trees (see, you learn something reading this blog…), are covered in wild yeasts. Apples rotting on the ground release these yeasts into the air around them, which will help your flaccid loaf fluff into life before its date with destiny. The yeast on apples is also the reason why it’s possible to make alcoholic apple cider; and yeast is a key ingredient in beer, too!

Hungry for More?

The “If Walls Could Talk” documentary episode “The Kitchen”, and the documentary “Tudor Feast” will supply you with some tasty information.

What’s that Tune? The stories behind famous pieces of music

 

You hear them all the time on television, in kids’ cartoons, in movies, in advertisements on the radio and in the ad-breaks between your favourite TV shows. But what are the stories behind these iconic pieces of music? Here’s a selection of some of the most famous pieces of music you might not know anything about, and the stories behind them.

Title? Ride of the Valkyries
Who? Richard Wagner
When? 1870
What? From the opera “Die Walküre”

Commonly used in cartoons and TV shows to symbolise impending doom, destruction, or the coming of some great conflict, the Ride of the Valkyries dates back to 1870. It was written for the German opera “Die Walküre” (“The Valkyrie”), by Richard Wagner, one of a cycle of four operas (the “Ring” cycle) which Wagner composed.

“Ride” plays at the start of the third act of the opera. Its dramatic and triumphal melody is designed to accompany the arrival of the valkyries, characters of Norse mythology whose task it is to choose which warriors will die in battle, and to deliver the souls of the slain to the god Odin in Valhalla, the great hall where warriors who have died in battle are honoured for their bravery and skill.

Today, the “Ride” is most famously remembered from the film “Apocalypse Now”, but its fame dates back over a hundred years to the grand opera-houses of Germany and Austria.

Title? Toccata & Fugue in D Minor, BWV 565
Who? Johann Sebastian Bach
When? Ca. 1704
What? Organ piece

Although most people have never listened to the whole thing, the first eight notes of Bach’s Toccata & Fugue in D Minor are recognised around the world for their haunting, eerie creepiness. Used in cartoons and other TV shows for setting the scene in scary old Victorian houses, isolated haunted mansions, spooky abandoned castles, and grandmother’s dusty basement, where the lightbulb never seems to work properly, the Toccata & Fugue in D Minor has remained famous for over 300 years!

Exactly WHEN Bach wrote the Toccata & Fugue is unknown. The closest date that anyone can figure is ca. 1704/5. In fact, it’s not even firmly established that he wrote it. The problem is that no signed, original manuscript of the piece survives, so it’s almost impossible to prove the attribution outright. The only ways of determining whether the work can genuinely be attributed to him are studying the surviving copies, and reading the diaries, letters and other accounts left by his contemporaries.

Indeed, no copy of the Toccata & Fugue penned by Bach’s own hand survives. The oldest copy which we have today was made by Johannes Ringk. Ringk (1717-1778) was a German composer, organist and music-teacher, and it is his copies of Bach’s works which are among the oldest known to survive. Ringk’s copy of Bach’s famous organ-piece was likely taken from another copy, by Ringk’s fellow organist Johann Peter Kellner (1705-1772), who copied the original Bach composition ca. 1725. The original composition, which Kellner likely copied, has been lost to history.

Title? In the Hall of the Mountain King
Who? Edvard Grieg
When? 1876
What? From the incidental music to the Norwegian play “Peer Gynt”

One of the most famous theatrical orchestral pieces in history, ‘In the Hall of the Mountain King’ is recognised instantly from its tiptoeing start, its gradual increase in volume and tempo, and its eardrum-busting crescendo! But what purpose does this piece of music serve?

“Mountain King” was written for the 1876 premiere of “Peer Gynt”, a fantastical theater play, or…fairytale!

Originally, the story was called “Per Gynt” (“Peter Gynt”), and was a traditional Norwegian fairy-tale. Norwegian dramatist Henrik Ibsen used the fairy-tale as the basis for his grand masterpiece theater-production, and Edvard Grieg’s famous piece of music was written for one of the scenes.

In the play, the main character, Peter Gynt, is disgraced. After dashing the hopes that his mother had held, for him to marry the daughter of a wealthy local farmer, Peter is banished from his community.

During his travels, Peter meets a wide range of people, and finds himself inside an enormous mountain, ruled by a troll-king, hence the title of the piece.

Peter meets a girl who is the daughter of the troll-king. When the courtiers find out, and realise that Peter might have made her pregnant, everything goes awry, a turn shown by the dramatic change in the music during its later stages.

Title? Overture – The Barber of Seville
Who? Gioachino Rossini
When? 1816
What? From the Italian comic opera “The Barber of Seville”

The overture to the Barber of Seville is one of the most famous pieces of music in the world. To most people, it’s the soundtrack to a certain Bugs Bunny cartoon that came out in 1950…

The overture (‘opening piece’) to the Barber of Seville has remained one of the most famous and iconic pieces of music ever written, and its various elements have been used in TV, movies and commercials for years.

Title? Overture – 1812
Who? Pyotr Ilyich Tchaikovsky 
When? 181…no. 1880
What? Commemoration

Jangling church-bells and the reports of cannon-blasts are the most famous parts of the 1812 Overture, one of Pyotr Tchaikovsky’s most famous works. But what is it actually about?

Contrary to popular belief, the 1812 Overture has NOTHING to do with the War of 1812. That’s a sheer coincidence.

The 1812 Overture, written in 1880, commemorates the Russian defeat of Napoleon’s forces in 1812, driving back the French emperor from the Russian homeland. The cannonfire for which the piece is so famous, commemorates the Patriotic War of 1812, the Russian name for the failed French invasion of Russia, from June to December of that year.

Title? The Typewriter
Who? Leroy Anderson
When? 1950
What? Novelty orchestral piece

‘The Typewriter’, from 1950, is one of the most famous pieces of novelty orchestral music ever written. It is unique because of the one instrument that it uses which isn’t an instrument: a typewriter.

Anderson wrote this quirky little piece to immortalise one of the most important inventions in the history of mankind, the humble typewriter. It is the typewriter’s clacking keys, the famous ring of the warning-bell, and the grating sound of the carriage being pushed back at the end of each line that people remember in this piece of music. However, there’s more to this piece than that.

To actually perform this piece, you require the orchestra, a functional typewriter, and a call-bell. The call-bell is there to provide the extra bell-chimes which the typewriter itself cannot. And when the piece was first performed and recorded, a modified typewriter, with only two functional keys, was used to provide the sound-effects!

Title? Galop – Orpheus in the Underworld
Who? Jacques Offenbach
When? 1858
What? Dance

Today, most people just know this piece as the Can-Can. Written in 1858, the Galop from the opera ‘Orpheus in the Underworld‘, by Jacques Offenbach, is one of the most famous pieces of dance-music ever produced.

A ‘galop’ is a French term, and the title of a type of dance. It comes from the word ‘gallop’, as in a galloping horse. The title reflects the lively, quick pace of a style of dance which became popular in the 1820s, which was full of speed and activity.

Title? “Music for Royal Fireworks”; ‘La Rejouissance’
Who? George Frideric Handel
When? 1749
What? Fireworks Accompaniment

George Handel’s ‘Music for Royal Fireworks‘, was written in 1749, to accompany a fireworks display being put on by King George II of England. This huge public spectacle was to celebrate the end of the War of the Austrian Succession, the year before. Sadly, the fireworks were not as spectacular as the music, which remains popular even to this day, nearly 300 years later. It’s well-known for its use in triumphal, royal scenes depicting splendor, pomp and ceremony.

Title? Symphony No. 40., in G Minor (1st Mvt)
Who? Wolfgang Amadeus Mozart
When? 1788
What? Nokia ringtone…?

Anyone who’s ever had to answer their mobile or cellphone will probably be familiar with this tune. Written by child prodigy, Wolfgang Amadeus Mozart, in 1788 (by which time he was in his 30s), this tune remains one of Mozart’s most famous compositions. It was also one of his last; Mozart died in 1791, at the age of 35.

Sweet, Cold and Delicious: The History of Ice-Cream

 

As I write this, the second-southernmost state of the Commonwealth of Australia is steadily being slow-roasted into hellish oblivion. For the third week in a row, we’re having temperatures over 30°C. And that is what has inspired this posting about the history of ice-cream.

Heaven, I’m in Heaven, and my heart beats so, that I can hardly speak.
And I seem to find the happiness I seek… 

Where Does Ice-Cream Come From?

Variations of ice-cream have existed for centuries. Cold, sweet foods which contained ice as a main ingredient date back to ancient times, in cultures as far apart as China and Ancient Persia (Iran, today), all the way to the Roman Empire. But how did ancient man produce these sweet, cooling treats, without freezers or refrigerators?

The First ‘Ice-Cream’

The first versions of ice-cream, which emerged in these ancient cultures, used crushed snow as the main ingredient. To the snow (stored in caves during hot weather, or harvested from mountains which remained cold all-year-round), various ingredients were added, depending on the tastes of the consumers, and the country of manufacture.

The first ice-creams of a sort were fruit-based, and one of the main ingredients was fruit-juice, or puree. Of course, you could add anything you wanted to the ice; other ingredients included rosewater, saffron, or the crushed pulp of berries.

Living in the boiling climates that they do, it was the Arabs who developed ice-cream as we might know it today. Originally, the fruit that they added to crushed ice was there not only to give it flavour, but also to sweeten it.

Eventually, Arab innovators changed the recipe to improve taste and texture. To do this, sweetened milk was added to the ice instead of fruit, to create bulk and substance. And they used pure sugar, rather than the sugars found in fruit, to provide the sweetness. For the first time in history (around 900 A.D.), we had our first ‘iced cream’, which literally combined ice, and cream (okay, milk…), to form a dessert that would remain popular for the next millennium and beyond.

The Spread of Ice-Cream

It took a while, but by the early 1700s, ice-cream was becoming popular all over the world. Recipes varied from country to country, but it was catching on fast. There were a few false starts and mistakes during the early years, but even these apparent failures gave us desserts which have survived the test of time, and became regional varieties of ice-cream; Italian gelato is one example of this.

Ice-cream became very popular in Europe. In France and Italy, and then eventually in England, too. By the late 1600s and early 1700s, ice-cream recipes had appeared, printed in a number of languages, including French and English. One of the earliest recipes for ice-cream in English dates to 1718! “Ice Cream” first appears as a dictionary-entry in 1744!

During the 1790s and the early 1800s, French aggression (remember a little chap named Bonaparte?) on the European mainland was driving Italians away from their homes. Italian refugees fled across the Channel to England, bringing their ice-creaming technology and skills with them.

Even before then, however, the popularity of ice-cream was spreading even further, and this sweet, cool dessert reached the Americas in the mid-and-late 1700s. The first ice-cream parlour in the ‘States opened in New York City in 1776. Ice-cream had been introduced to the colonials by Quaker migrants from Europe. Thomas Jefferson’s favourite flavour was supposedly vanilla.

How Do you Make Ice-Cream?

I hear you. How do you make ice-cream? They didn’t have freezers back then. They didn’t have fridges. And surely you can’t get ice and snow all year ’round? How did they make it in the summer, when ice-cream would’ve been most popular? What, and more importantly, how, did you do it, when all the ice and snow was gone!?

Come to our aid, O great science of chemistry.

As far back as the early 1700s, housewives and professional ice-cream sellers had cracked the code of making ice-cream without all the fancy freezing and chilling apparatus which we take for granted today. Here’s how it’s done.

First, you need a pot or a can made of metal. Into this can, you put the ingredients of your ice-cream. The cream or milk, the flavorings and so-forth.

Find a larger pot. Line the bottom of the pot with ice. Lots of it. Put the smaller pot inside the larger pot, and pack in the space on the sides with even more ice. Now, just add salt.

A LOT of salt.

One particular recipe calls for a whole pound of salt.

What happens here, you ask?

The salt mixes with the ice, and the ice begins to melt.

The salty water is kept cold by the ice that hasn’t melted yet. And since salty water has a lower freezing temperature than pure water, the remaining ice keeps on melting, absorbing heat as it does so. This drives the temperature of the salt-water-ice mix down even further, well below the normal freezing point of water.
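If you want to put rough numbers to it (the old recipe-books certainly didn’t), chemists describe this effect with the freezing-point depression formula:

$$\Delta T_f = i \cdot K_f \cdot m$$

Here $\Delta T_f$ is the drop in the freezing point, $i$ is the number of particles each unit of solute splits into (about 2 for table salt, which dissolves into sodium and chloride ions), $K_f$ is water’s cryoscopic constant (about 1.86°C per mole of solute per kilogram of water), and $m$ is the concentration of dissolved salt. As a back-of-the-envelope example (the two kilograms of meltwater here is purely an assumption, for the sake of the arithmetic): that one pound of salt from the recipe above is roughly 450g, or about 7.8 moles, so $\Delta T_f \approx 2 \times 1.86 \times 3.9 \approx 14.5$°C, giving a slurry some fourteen degrees below freezing. Pile in enough salt, and an ice-and-salt mixture can get down to about -21°C, the coldest an ordinary brine can reach. Plenty cold enough to freeze cream.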

This whole process is aided by putting the entire concoction of ice, salt, water and ice-cream into the basement or cellar. The cold air slows down the melting of the remaining ice, prolonging the whole process. The result is that the ice-and-saltwater slurry chills the sides of the interior pot or canister inside the main ice-pot. This, in turn, freezes the ice-cream mix inside the inner pot. Once the process is complete…you have ice-cream!

Simple.

Okay, not so simple.

The problem with this method is that, while it worked, it took a very long time. Up to four hours. When’s the last time you waited four hours to eat ice-cream?

A faster method of making ice-cream was needed. And in the early 1800s, that method arrived, in the United States…

Machine-Made Ice-Cream!

Since the early 1700s, ice-cream had been made the slow way. You filled a can with ice-cream mix, you sat it in a basin of ice and salt, and you let the basic laws of science do the rest. It produced a great result, at the expense of a lot of time. Something better had to be found, to produce ice-cream in greater quantities, or at least, in smaller quantities at a faster pace!

Enter…this:

Believe it or not, but this is the world’s first-ever purpose-built ice-cream maker.

Yes. That.

It was invented in 1843 by Nancy Johnson, a lady from New Jersey, in the United States.

How does it work, you ask? It works more or less the same as the previous method mentioned above, except this one takes more muscle. It produces ice-cream in the following way:

1. Put your ice-cream mixture into the interior canister.

2. Fill the bucket with ice, and salt.

3. Turn the crank.

And how exactly does this produce ice-cream?

Constantly turning the crank moved the interior can around in the slurry of saltwater and ice. This nonstop agitation mixed up the ice and water, and also churned the ice-cream. The result is that more of the ice-cream mixture comes into contact with the freezing-cold sides of its metal container, which means that the temperature of the ice-cream batch as a whole decreases much faster. The faster you crank, the faster this happens, and the sooner you get ice-cream!
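In modern terms (and certainly not in terms Nancy Johnson would have used), this is just convective heat transfer. The rate at which heat leaves the mix can be sketched as:

$$\dot{Q} = h \cdot A \cdot \Delta T$$

where $A$ is the canister’s surface area in contact with the brine, $\Delta T$ is the temperature difference between the ice-cream mix and the brine, and $h$ is the heat-transfer coefficient. Cranking doesn’t change the area or the temperatures much; what it does is keep the liquid moving on both sides of the metal wall, which raises $h$ considerably, so the same pot sheds its heat far faster than it would sitting still.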

A bonus of the Johnson method of ice-cream making was that you also got ice-cream with a much better texture. The previous method, of simply freezing the cream in a bucket of icy saltwater, produced a sort of solid ice-cream lump, rather like a giant ice-cube. With the hand-cranked freezer, the constant mixing of the ice-cream around inside its receptacle prevented it from clumping together into chunks and blocks, and aerated it at the same time. The result was smoother, creamier ice-cream!

The result of all this was that in 1843, you had the Johnson Patent Ice Cream Freezer. There are conflicting reports about whether or not Ms. Johnson ever patented her machine. Some say she did, in September of 1843, while others say it was never patented at all. A Mr. William Young patented a similar machine in May, 1848, and named it after her. Whichever version of events is true, we have Nancy Johnson to thank for the first machine-made ice-cream in the world!

I Scream, You Scream, We All Scream for Ice-Cream!

From its crude beginnings in the Middle East, up until the mid-1800s, ice-cream was a delicacy and a treat. Phenomenally expensive and extremely fiddly, labour-intensive and tricky to make in any decent quantity, ice-cream was originally available only to the super-rich.

But it’s so easy. You get the cream, the sugar, the flavourings, you put it in a pot, you put the pot in the ice-water and the salt and…

It’s not so easy.

First, you need the ice. To get that, you had to carve it out of frozen lakes in winter, or haul it down from the mountains, and store it in an ice-house for the warmer months. And you needed to have an ice-house to begin with! And the labourers or slaves to cut, dig and haul the ice.

Then, you needed the salt. Salt was so tricky for most people to get that for centuries, it was traded as currency. It’s where we get the word ‘salary’ from: the story goes that people were once paid in salt, or paid money with which they could then go and buy salt for themselves. Salt could only be obtained at great expense of time and labour, by evaporating great quantities of seawater to obtain the salt-crystals, which then had to be washed, dried and purified. Or else it had to be dug out of salt-flats, crushed, and purified all over again. This made salt extremely expensive, and out of the reach of mere mortals like you and me.

The relative scarcity of the ice required to chill the cream, and of the salt needed to make the freezing mixture work, meant that large quantities of ice-cream were very difficult to make, and were thus only available to the richest of people, who could afford the expense of both. Most ordinary people wouldn’t have dreamed of wasting precious salt (needed for preserving fish and meat) on something as extravagant as ice-cream! The damn thing melted if you left it on the kitchen table. What use was all that fuss over something that didn’t last?

It wasn’t until large quantities of ice and salt could be produced, harvested and sold cheaply enough for anyone to buy them that making ice-cream for everyone really became a going concern. Before then, it was simply too expensive.

Nancy Johnson’s ice-cream machine from the 1840s made efficient manufacture of ice-cream possible for the first time. Granted, these early hand-cranked machines could only freeze a small amount of ice-cream at a time, but they were a big improvement on waiting for hours and hours and hours for the same thing from a can sitting in a pot of salty slush!

Building on inventions such as the Johnson ice-cream freezer, by the mid-1800s, it was possible to produce ice-cream in commercial quantities, and the first company to do so was based in Maryland, in the United States.

The man responsible for the birth of commercial ice-cream manufacture was named Jacob Fussell. Fussell was a dairy-produce seller. He made pretty good money out of it, but he struggled constantly to sell his containers of cream. Frustrated that this surplus cream would otherwise go to waste, Fussell opened his first ice-cream factory in 1851.

Fussell spread the gospel of ice-cream, and as more ice-cream manufacturers sprang up around the ‘States, you had ice-cream for the common man.

Ice-Cream in Modern Times

By the early 1900s, ice-cream was becoming popular everywhere. In the 1920s, the first electric refrigerators, and by extension, the first electric freezers, made producing, selling, buying, storing and, of course, eating ice-cream much easier. It was during this time that companies and distributors like Good Humor (1920), Streets (1930s) and Baskin-Robbins (1945) began making names for themselves…names which they still trade on today.

Thanks to the invention of the Johnson ice-cream freezer in the 1840s, ice-cream could now be made faster and more cheaply. Refrigeration technology, and the technology to manufacture enormous, commercial quantities of ice, also helped to make ice-cream available to everyone. And it led to ice-cream being served in different ways for the first time in history.

Ice-Cream on a Stick!

If, as a child, or even as an adult, you ever went to the corner milk-bar, drugstore or convenience-shop, opened the ice-cream bin, and pulled out an ice-cream bar on a little wooden paddle or stick, then you have two little boys to thank:

Frank Epperson, and Harry Burt Jnr.

Ice-cream bars, and frozen popsicles or icy-poles, were invented in the early 20th century by two boys living in the United States.

The first popsicle was invented in 1905, by little Frank Epperson. Epperson was eleven years old when he tried to make his own homemade soft-drink. He poured the necessary ingredients into a cup, and stuck a wooden paddle-stick into it to stir the contents around. Epperson left the mix outside in the garden overnight, and went to sleep.

During the night, the temperature plunged below freezing. When little Frankie woke up the next day, he found that his mixture had frozen solid inside the cup! Undaunted, as all little boys are, he simply turned the cup upside down, knocked out the frozen soda-pop, grabbed his new invention by its stirrer-cum-handle, and started sucking on it. The world’s first-ever popsicle!

The invention of the world’s first ice-cream bar can be attributed to young Harry Burt.

Okay, so Burt wasn’t so young. But he did invent the ice-cream bar on a stick.

Burt’s father, Harry Burt senior, was experimenting with a way to serve ice-cream on the go. To make the ice-cream easier to sell, he set the cream into blocks. To keep his customers’ hands clean, he dipped the blocks in chocolate and froze them, so that nobody’s fingers need be soiled by contact with melting ice-cream.

The problem was that…what happens when the chocolate melts?

This was the point brought up by Harry’s daughter, Ruth Burt. Harry wasn’t sure what to do about it. That was when Ruth’s younger brother, 20-year-old Harry Junior, came up with the idea of freezing the ice-cream blocks with little wooden sticks already inside them, to give the customer something to hold onto, and to minimise the chances of ice-cream going all over the customer’s hands.

Daddy liked the idea so much that he gave it a shot, and success ensued! Between them, the three Burts had invented the ice-cream bar on a stick!

Sundaes on Sundays?

Ah. The joys of having a dish made almost entirely out of ice-cream. Sinful, isn’t it?

Apparently, someone thought so, because in parts of the United States, so the story goes, it was once illegal to sell ice-cream sodas on a Sunday!

Is that true?

Honestly, nobody knows. Maybe it is. Maybe it isn’t. The legend goes that since selling ice-cream sodas was illegal on Sundays under the religious morality laws (‘blue laws’), vendors served the ice-cream and syrup without the soda, and sold these ‘sundaes’ instead, deliberately mis-spelling the name to circumvent the laws which were killing their businesses.

Something else that nobody knows is where the sundae itself was invented. The United States, certainly. But which city? And which state? Several American towns claim the honour, and nobody knows for sure.

Whoever invented the sundae, and for whatever variety of reasons, we should thank them for inventing one of the most enjoyable and most varied ways of consuming ice-cream ever thought of.

…Banana split, anybody?

Sweet, Creamy Goodness

Looking for more information? Here are some links…

http://firesidelibrarian.com/projects/s532/icecream.html

http://inventors.about.com/od/foodrelatedinventions/a/ice_cream.htm

http://www.idfa.org/news–views/media-kits/ice-cream/the-history-of-ice-cream/