The Idiot and The Odyssey: The Complete Restoration of my Grandmother’s Singer Sewing Machine

In looking back over my blog, I realise that it’s been over a year since I started the seemingly ludicrous mission of restoring my grandmother’s 1950 Singer 99k sewing-machine. I am proud to say that as of the date of this posting, the restoration is complete!

Gran was born on the 7th of May, 1914 in Singapore. She died on the 28th of November, 2011, in Melbourne, Australia. Weet-Bix are suspected to have played a role in her demise. She was 97.

Granny was a dressmaker, and from the early 1950s until the early 1980s, was in this trade professionally. When she retired, she moved to Australia, and her Singer sewing machine came with her. A battered, but trusty Singer 99k knee-lever electric sewing machine. This machine was gran’s life, and she used it to the exclusion of any other machine that might ever have become available to her.

When gran moved to the nursing-home in the early 2000s, after worsening Alzheimer’s Disease, her most treasured possession, her Singer, was placed in the basement, where for the next eight or so years, it sat in a corner at the bottom of a bookcase, gathering dust.

When gran died, I hauled the machine out of the basement and began a steady restoration process. I don’t know what possessed me to do this, other than the fact that this machine was gran’s livelihood for most of her adult life.

The majority of what happened next is covered in my earlier article. This posting is more of an addendum to what I’ve already written.

The Frankenstein Moment

MUAHAHAHAHAHAHAHAHA!!!

*Thunderclap!!*
*Flashlightning!*

…Ahem.

Actually getting the machine running and sewing for the first time really was an exhilarating experience. Second only to getting the machine-case off the base! It took a lot of oil and fiddling with a screwdriver, but I got it off eventually, and was very happy.

Getting the machine running was a considerable task. It was seized solid when I got the lid off the machine-base, and not a single thing apart from the presser-foot lever and the bobbin-winder worked. Everything else was jammed solid from a total absence of lubrication. And it’s no exaggeration to say that it took me nearly a week to lubricate the entire machine to a level where it would run as well as it did when it was brand-new.

I must admit, it was rather fun. There was the incredible thrill of a challenge, combined with the later sense of accomplishment, when it came to getting that machine running again.

I had almost given up at one point, but perseverance was the key. It was a real joy to see it running at full speed again, for the first time in at least ten years (whenever it was last used, it was certainly no more recently than that).

Duhr…Now What?

It’s working! Oh my god it’s working! It runs, it stitches, it sews, it runs at every speed, the light turns on, gets hot enough to fry breakfast on, and then turns off. Everything is excellent! But what do we do now, huh?

I really wasn’t sure. Like I said, I didn’t have any real reasons for wanting to bring this thing out of the basement other than to tinker around with it. But once I’d got it running, I started thinking about these other things that I could do. And that’s when the thought entered my head that I could bring the machine back to its former glory, by tracking down and purchasing all the necessary bits and pieces for it. I had no idea where on earth I would begin. But as luck would have it, I live very close to a large and very well-stocked flea-market. And it was from that market that I purchased nearly everything for this machine.

The Scavenger Hunt

I started with simple things, like needles and bobbins. These were pretty easy to find. And all the while, I was busy cleaning and fixing the machine. It was like a gunk-generator. Every time I thought it was clean, I’d find some other part of the machine that required my attention. Like under the bed. Or behind the balance-wheel, or inside the electric motor, or underneath the bobbin-case. On top of everything, the machine required constant lubrication! It drinks oil like Barney Gumble drinks Duff Beer.

The harder things which I had to track down were the sewing-machine accessories boxes, the attachments and accessories that went inside them, and the green oil-can that went inside the machine-case. I had no idea what these things looked like, and it took a long time to track them down. I actually ended up buying multiple boxes of attachments, pouring them all out, and shuffling them around until I had assembled one FULL box of attachments from the dribs and drabs found in the others. Those dribs and drabs would be useful for spares later on.

One big problem with this machine was finding the original square steel bobbin-plate or ‘slide-plate’. The slide-plate was a protective metal plate that shielded the spinning bobbin-mechanism from dust and tangling threads. There wasn’t anywhere local that I could buy one, and waiting for one to show up at the flea-market would take years.

The only way I could get one was to buy a replacement online. You can buy ORIGINAL Singer plates online (and there are people who sell these), but obviously, stock is limited, and as a result, prices are much higher. I had serious doubts about this. So instead, I went the reproduction route. With the help of a cousin, we bought the replacement plate from an eBay store based in the U.S.A., and had it shipped halfway around the world to…here.

Boy, did that take a long time! Something like a month or more of waiting, I think.

Finding the oil-can for the sewing-machine was rather challenging. There are all kinds of Singer oil-cans and bottles. And I had no idea which one I would need to fit the slot inside the case-lid. All I knew from what I saw, was that it had to have a flange at the bottom, and it had to have a curved base. Out of sheer luck, I found the can which I needed at the flea-market, hidden in the pre-dawn mists, amongst a bookcase full of all kinds of other cans which were for sale. I paid $5 for it and walked off.

Sentimental Attachments

Finding all the attachments for the sewing-machine was another big challenge. No one box of parts which I bought ever had the full set. So I was forced to buy four or five boxes of parts, and slowly piece them together, to form one big box of attachments. In the end, I had enough bits and pieces around to create two complete boxes!

On top of all the usual steel attachments was the challenge of finding the zigzagger and buttonholer attachments. These old Singer sewing machines performed a very basic straight lockstitch. To allow them to produce more complicated stitches, such as the zigzags and buttonholes that modern machines can sew, the manufacturers came up with all kinds of fascinating gizmos which you could bolt onto your machine.

Quality of Manufacture

One thing that I love about all these items is the quality of manufacture. The bobbins, the attachments, screws, plates…everything is made of solid steel, without exception. Nothing like that exists today. Today, bobbins are made of plastic, feet and other attachments are made of plastic. Even the screws are made of plastic. One crack or warping renders them useless. The older steel parts are nigh indestructible.

It’s stiff? Oil it. It’s rusty? Sand it. It’s dull? Polish it.

With plastic parts…it’s cracked?…Uh…I dunno. Throw it out and buy another one?

Money wasted and thrown down the toilet.

These steel pieces will last practically forever. And their simple, no-nonsense construction means that they will always do the job that they were made for, without any compromise on quality. Back in the good old days, this was standard. These days, we have to pay extra for quality that should come with the original product, but doesn’t. They literally don’t make ’em like they used to.

The Last Piece

By the start of 2013, I had finally gathered all of the main components of the sewing-machine. I had the needles, oil, feet-attachments, the two main mechanical attachments, instruction manuals and other dribs and drabs. However, one piece remained elusive. The bed-extension table.

The bed-extension table came with most Singer sewing machines and it was used to extend the bed of the machine, to give you a larger work-area. This had the advantage of stopping your sewing-piece from sliding off the end of the machine-bed, and pulling your carefully-pinned cloth out of alignment with the needle and presser-foot.

Sadly, they’re not easy to find. The bed-extension table is of very simple construction, and it wasn’t uncommon for them to be thrown out or lost due to their rather bland and simple appearance. Unless you know what you’re looking at, the extension-table looks like just another plank of wood.

I discovered one recently at an antiques shop, along with a box of other bits and pieces, and snapped it up then and there. The standard Singer bed-extension table measures 8.5 inches wide (the width of the machine-bed), and about eight inches long.

Finding that final, missing piece means that the machine is finally back in its original and complete condition, having been reunited with all the items that would’ve come with it when it was purchased brand-new from the shop.

Like New!

The pictures below show the machine looking as it would’ve done back in the 1950s, complete with the parts that would’ve come with it when purchased brand-new:

Extras such as zigzaggers and buttonholers were purchased separately on an as-needed basis. But those photos illustrate what came with the machine when it was brought home for the first time.

This model, the Singer 99 series, was manufactured from the mid-1920s up until the late 1950s, and came as a handcrank machine, or as a knee-lever machine. Knee-lever machines started coming out in the 1930s, and both hand-crank and knee-lever models were produced side by side until the model ceased production ca. 1958.

The body of the Model 99 changed significantly in the later years of its production, but the machine as it appears here would’ve been nearly identical to one from the 1920s. A 1920s machine would simply have lacked the motor and the knee-lever, and would have had a spoked balance-wheel instead of a solid one, with a crank-handle bolted to the side.

Built like a watch? More like a tank. The Model 66, the 99’s immediate predecessor, was highly popular, but extremely heavy and cumbersome.

The Singer 99 model was designed to be a 3/4 size “portable” machine, a step down from the full-size Singer 66 model, which came out in 1905. The 99 was designed to overcome the 66’s problems with regards to size and weight.

This advertisement from 1928 emphasizes the new machine’s portability! And with portability comes choice! You can now sew anywhere you want! Bedroom, living-room, parlour, guestroom, even outside if you wanted to. The one thing this advertisement does NOT publicise is the fact that this machine is DAMN HEAVY.

Keep in mind that the 99 was supposed to be a “portable” machine, a step down from the larger and highly popular 66 model. But despite the downsizing, the 99, complete with all its bits and pieces, still weighs in at 33.25lbs, or just over 15kg! I know this because I weighed it myself. Not so portable now, is it?
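For the pedantic among you, the imperial-to-metric conversion above does check out. Here’s a quick sanity check in Python (just a sketch, using the standard 0.45359237 kg-per-pound factor):

```python
# Standard conversion factor: one avoirdupois pound in kilograms
LB_TO_KG = 0.45359237

weight_lb = 33.25                    # the machine's weight, as I measured it
weight_kg = weight_lb * LB_TO_KG     # convert to kilograms

print(round(weight_kg, 2))  # 15.08 -- "just over 15 kg", as stated
```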

Nevertheless, it’s a practical, popular, stylish and robust machine, well worth restoring and using.

 

Singer Sewing Machine – Bed-Extension Table

It’s taken years, but my grandmother’s vintage Singer 99k sewing-machine is at last complete! It has reached this level of completion thanks to the procurement of the last, and most hard-to-find, Singer sewing-machine accessory…the bed-extension table. The extension-table may be seen here, hooked onto the end of the needle-bar side of the sewing-machine:

It’s the thing with the three spare vintage lightbulbs on top. The lightbulbs are spares for the one which goes into the light-socket at the back of the sewing-machine. They came as part of the package.

The extension-table came as standard with some models of vintage Singer sewing machines, such as the Singer Model 99 and its variants. However, not all of Singer’s sewing-machines were sold with this very handy feature included, which I think is a pity. The table measures roughly eight inches by eight inches, and the steel hook at the end simply slots into the lock-plate of the machine-bed. It extends the sewing-machine bed. That’s why it’s called a bed-extension table. Duh!

Sadly, these handy little extension-tables are not easy to find these days, and I had almost given up hope of ever getting one. I had even considered fabricating a homemade one! But fortunately, I found this, instead.

Their handiness lies in the fact that they give you a larger work-area when sewing, to stop your pieces of fabric from flopping off the end of the sewing-machine (and possibly pulling out of alignment). They also give you somewhere to rest your left hand and arm as you feed the fabric through the machine.

This is what the extension-table looks like, when it’s housed inside the case:

You can see it in this picture from a 1930s Singer 99k user-manual, at the bottom (labeled ‘D’).

It’s rather amazing how much those innovative Singer chaps could cram into such a restricted space as the lid of a sewing-machine! This is what the same arrangement looks like in real life; again, using my grandmother’s 99k as the example:

In all the same positions, you can see the green SINGER accessories box (on the left), the ‘?’-shaped knee-lever at the back, the oval-based green SINGER oil-can on the right, and at the bottom, the extension-table. Amazingly, even with all this stuff in-place, you can still put the lid comfortably over the top of the sewing-machine and lock it down tight!

Bed-extension tables. If you have a vintage Singer sewing machine and you don’t have one of these…start looking for one. They’re getting harder and harder to find, so don’t waste time!

 

Rule Britannia: A History of the British Empire

From the close of the 1500s until the end of the Second World War, the British Empire grew, spread and eventually dominated the world. For over two hundred years, from the 1700s until the mid-20th century, it was the largest empire in the world.

By the 1920s, the British Empire covered some 22.6% of the globe’s land area, or 13.1 million square miles (33.7 million square kilometers), and its subjects and citizens numbered some 458 MILLION PEOPLE. At its height, 20% of the people on earth were British, or British subjects, living in one of its dozens of colonies, dependencies or protectorates.

The British Empire was, is, and will forever be (until there’s another one), the biggest and arguably, the most famous, of all the Empires that the world has ever seen.

But how was it that the British Empire grew so large? Why was it so big? What was the purpose? What was to be gained from it? Why and how did it collapse? And what became of it? Join me on a journey across the centuries and oceans, to find out what caused the Empire to take root, grow, prosper, dwindle and decline. As this posting progresses, I’ll show the changing flags, and explain the developments behind each one.

The Need for Conquest

The British Empire was born in the late 1500s. During the reign of Henry VIII, England was a struggling country. Ever since the king’s break from Rome, and the foundation of the Church of England, the Kingdom of England had stood on its own. Most countries saw it as radical and nonconformist. It had dared to break away from Catholicism, the main religion of Europe at the time. England was seen as weak, and other countries, such as Spain, were eager to invade it, and either claim it for themselves, or seat a Catholic monarch on the English throne.

It was to protect against threats like these that Henry VIII began building up what would become England’s most famous fighting force, and the tool which would build an empire:

The British Royal Navy.

The Royal Navy had existed ever since the 1400s, mostly as a hodge-podge of ships and boats. There was no REAL navy to speak of. Ships were simply requisitioned as needed during times of war, and then returned to their owners when the war was over. Even in Elizabethan times, English fishing-boats doubled up as the navy.

It was Henry VIII, and his daughter, Elizabeth I, who began to build up the Navy as a serious fighting force, to protect against Spanish threats to their kingdom.

But having a navy was not quite enough. What if the Spanish, or the French, tried to establish colonies elsewhere, where they could grow in strength and strike the British? There is talk of a new world, somewhere far to the West across the seas. If the British could grab a slice of the action, then they would surely be more secure?

It was originally for reasons of security, and later for trade and commerce, that the idea of a British Empire was conceived. And it would be for these reasons that the British Empire grew, flourished, and lasted for as long as it did.


The English flag. St. George’s Cross (red) on a white background

British America

In 1707, Great Britain emerges. No longer is it an ‘English Empire’, but a British Empire. Great Britain is formed by the Act of Union between England (which already included the Principality of Wales) and Scotland.

By this time, England and Scotland had already been ruled by the same family (the Stuarts) for a hundred years, ever since Elizabeth I died in 1603, and her cousin, King James of Scotland, inherited her kingdom as her closest surviving relative.

The flag of the Kingdom of Great Britain. The red St. George’s Cross with the white background, over the white St. Andrew’s Cross and blue background, of Scotland. This would remain the flag for nearly 100 years, until the addition of Ireland

It seemed only to make sense, therefore, that since England and Scotland were ruled by the same family, they may as well be the same kingdom: the Kingdom of Great Britain.

By this time, British holdings had grown to include Newfoundland, and more and more holdings on the North American mainland. At the time, America was being carved up by the great European powers. France, Britain, Holland and Spain were all fighting for a slice of this new, delicious pie called the New World.

And they were, quite literally, fighting over it. Ever heard of a series of conflicts called the French and Indian Wars? From 1689 until 1763, the colonial powers fought for control over greater parcels of land on the American continent. America had valuable commodities such as furs, timber and farmland, which the European powers were eager to get their hands on.

By the end of the 1700s, Britain’s colonial ambitions and fortunes had changed greatly. It retained Newfoundland and had gained Canada from France, but had lost its possessions in America to these new “United States” guys. Part of the deal with France over their Canadian land was that the French settlers be allowed to stay. As a result, Canada at the time (in the 1790s) was divided into Upper and Lower Canada (Ontario and Quebec, today). Even in the 21st century, we have French-speaking Canadians.

British colonies in the Americas weren’t just limited to the top end, either. Since the mid-1600s, the British also controlled Jamaica (a colony taken, not from the French, but this time, from the Spanish). British rule of Jamaica lasted from 1655 until independence in 1962!

Just as its former American colonies had provided Britain with fur pelts and cotton, Jamaica was also colonised so that it could provide the growing empire with a valuable commodity – in this case, sugar. In the 1500s, sugar was incredibly rare, and the few countries which grew sugarcane were far from England. Extracting and transporting this sweet, white powder was labour-intensive and dangerous. But now, England had its own sugar-factory, in the middle of the Caribbean.

British India

It was during the 1700s that the British got their hands on one of the most famous colonies in their growing empire. They might have lost America and gained Canada, but in the 1750s, they gained something much more interesting, thanks to an entity called the East India Company, a corporation which effectively colonised lands on behalf of the British.

In 1800, another Act of Union formed the United Kingdom of Great Britain (England, Scotland and Wales) and Ireland. The flag now depicts the diagonal red cross of St. Patrick, over that of St. Andrew, but with both below the cross of St. George. This has remained the British flag for over 200 years, up to the present day

Formed as a trading company to handle imports and exports out of countries in the Far East, the East India Company (founded in 1600) got its hands on the Subcontinent of India, and for a hundred years, between 1757 and 1858, more or less controlled it for the British Government.

Indians were not happy about being controlled by a company. True, it had brought such things as trade, wealth, transport, communications and education to the Indian Subcontinent, but the company’s presence was not welcomed.

The beginning of the end of Company Rule in India came in 1857, a hundred years after the Company had established itself there. The Indian Rebellion of 1857 broke out when the Indian soldiers who worked for the Company rebelled over the Company’s religious insensitivity. Offended by the liberties which the Company took, and the insults it dished out, Indian soldiers in Company pay revolted against their masters.

The rebellion spread around India, and fighting was fierce on both sides. It eventually ended in 1859, with an Indian defeat, but at least it also ended Company Rule in India.

However, the British were not willing to let go of India. It had too many cool things. Like spices and ivory, exotic foods and fine cloth. Oh, and a little drug called opium.

In the end, the British formed British India (also called the British Raj), in the late 1850s.

To appease the local Indian population and prevent another uprising, a system of suzerainty was established. Never heard of it? Neither had I.

Suzerainty is a system whereby a major controlling power (in this case, Britain), rules a country (India), and handles its foreign policy as well as other controlling interests. In return, the controlling power allows native peoples (in this case, the Indians) to have their own, self-governing states within their own country.

When applied to India, this allowed for 175 “Princely States”. The princely states were ruled by Indian princes, or minor monarchs (the maharajahs), while the other states within India were ruled by the British. As such, India was thereafter divided into “British India”, and the “Princely States”.

British India was ruled by the Viceroy of India, and its legal system was determined by the big-wigs in London. The Princely States were allowed to have their own Indian rulers, and were allowed to govern themselves according to their own laws. Not entirely ideal, but much better than being ruled over by a trading company!

The Indians largely accepted this way of life. It was, in a way, similar to their lives under the Mughal Empire before. It was a way of life with which they were familiar and comfortable. In return for various concessions, changes and improvements, the Indians would allow the British control of their land.

The number of princely states rose and fell over the years, but this system remained in place until Indian independence was granted by Britain in the years after the Second World War.

The Viceroy of India was the head British representative in India. He ruled over British India, and was the person to whom Indian princes went if they had concerns about British involvement within India.

Pacific Britain

Entering the 1800s, Britain became more and more interested in the Far East. Britain realised that establishing trading-posts and military bases in Asia could bring them the riches of the Orient and a greater say in world affairs. To this end, it colonised Malaya, Singapore, Australia, New Zealand, Hong Kong, Fiji, Penang, Kowloon, Malacca, Ceylon (modern day Sri Lanka) and Burma. It even tried to colonise mainland China, but only succeeded in grabbing a few small concessions from the Qing Government, such as Shanghai.

The Pacific portion of the British Empire was involved heavily in trade and commerce, and a great many port cities sprang up in the area. Singapore, Hong Kong, Rangoon, Calcutta, Bombay, Melbourne and Sydney all became major trading-stops for ocean-liners, cargo-ships and tramp-steamers sailing the world. From these exotic locales, Britain could get gold, wool, rubber, tin, oil, tea and other essential, exotic and rare materials.

The British were not alone in the Pacific, so the need for military strength was important. The Dutch, the Germans and the French were also present, in the Indies, New Guinea, and Indochina, respectively.

Britain and the Scramble for Africa

The Industrial Revolution brought all kinds of new technology to the world. Railways, steamships, mills, factories, mass-production, telecommunications and improved medical care, to name but a few. And Britain, like other colonial powers, was eager to see that its colonial holdings got the best of these new technologies that they could.

However, these improvements also spurred on the desire for greater control of the world. And the second half of the 1800s saw the “Scramble for Africa”.

The ‘Scramble’ or ‘Race’ for Africa, was a series of conquests by the colonial powers, to snatch up as much of the African continent as they could. The Dutch, Germans, French and British all duked it out to carve up hunks of Africa.

The French got most of northwest Africa, including the famous city of Casablanca, in Morocco. They also controlled Algeria. The British got their hands on Egypt, and a collection of holdings (including the former Boer republics, won after the Boer Wars) which they called the Union of South Africa. The British also got their hands on Nigeria, British East Africa (“Kenya”) and the Sudan. Egypt was never officially a British colony, but remained a British protectorate (a country to which Britain swore to provide military assistance, or ‘protection’). It was a crafty way of adding Egypt to the British Empire without actually colonising it.

British interest in Egypt and southern Africa was related less to what Egypt could provide the empire, and more to what it would allow the empire to do. Egypt was the location of the Suez Canal, an important shipping-channel between Europe and the Far East. Control of Egypt was seen as essential by the British, for quick access to their colonies in the Far East, such as India, Singapore and Australia.

A map of the world in 1897.
The British Empire comprised every country marked in pink

Justification for Empire

As the British Empire grew during the Victorian era and the early 20th century, through wars of conquest and wars with other European powers, some sort of justification seemed to be wanting. Why should Britain control so much of the world? What gave it this right? How did it explain itself to the other European powers, or to the Arsenal of Democracy that was the rising power of the United States? How did it justify colonisation to the peoples of the countries which it colonised?

Leave it to a writer to find the right choice of words.

Rudyard Kipling, author of “The Jungle Book“, was the man who came up with the phrase, “The White Man’s Burden“, in a poem he wrote in 1899.

Put simply, the burden of the white man; the white, European man, is to bring civilisation, culture, refinement and proper breeding and upbringing to the wild and uncouth savages of the world. Such as those savages likely to be found in Africa, the Middle East and the isolated isles of the South Pacific.

Britain, being naturally the most civilised, cultured, refined and most well-bred country on earth, producing only the most civilised, cultured, refined and most well-bred of citizens, was of course the best country on earth, with the best people on earth, to introduce these wonderful virtues to the savages of the world. And to bring them up to date with technology, science, architecture and engineering, and to imbue them with good Christian virtues. Britain, after all, had the best schools and universities: Eton, Harrow, Oxford, Cambridge, St. Peter’s; the list goes on. They were naturally God’s choice for teaching refinement, culture and all that went with it, to the rest of the world.

This was one of the main ways in which Britain justified its empire. By colonising other nations, it was making them better, more modern, and more cultured, in line with the West. It brought them out of the Dark Ages and into the light of modernity.

The British colonised certain countries (such as Australia) under the justification of the Ancient Roman law of “Terra Nullius”. Translated, it means “No Man’s Land”, or “Empty Land” (“Terra” = Land, as in ‘terra-firma’; “Nullius” = No-one’s, as in ‘null and void’).

By the British definition of Terra Nullius, a native people only had a right to sovereignty over its land if it changed the landscape in some manner, such as through construction, industry, craft, agriculture, or manufacturing. It had to show some degree of outward intelligence beyond hunter-gatherer society and starting a fire with two sticks.

They did not recognise these traits in the local Aboriginal peoples, and saw no evidence of such activities. Therefore, they claimed that the land was untouched, and the people of minimal intelligence. Otherwise, they would’ve done something with their land! And since they hadn’t, they had forfeited their claim to sovereignty over it. Under the British definition of Terra Nullius, this meant that the land was theirs for the taking. Up for grabs! Up for sale! And they jumped on it like a kangaroo stomping on a rabbit.

The Peak of Empire

British control of the world, and the fullest extent of its imperial holdings came during the period after the First World War. One of the perks of defeating Germany was that Britain got to snap up a whole heap of German ex-colonies. A lot of them were in Africa, but there were also some in the Far East, most notably, German New Guinea, thereafter simply named ‘New Guinea’ (today, ‘Papua New Guinea’).

It was during the interwar period of the early 20th century that the British Empire was at its peak. By the early 1920s, Britain had added such notables as the Mandate of Palestine (modern Israel), and Weihai, in China, to its list of trophies (although its Chinese colony did not last very long).

The extent of the British Empire by 1937. Again, anything marked in pink is a colony, dominion, or protectorate of the British Empire

The Colony of Weihai

For a brief period (32 years), Great Britain counted a small portion of China as part of its empire. Britain already had Hong Kong, but in 1898, it added the northern city of Weihai to its Oriental possessions. Originally, it was a deal between the Russians and the British. So long as the Russians held onto Port Arthur (a city in Liaoning Province in northern China), the British could have control of Weihai.

In 1905, the Russians suffered a humiliating defeat to the Japanese, in the Russo-Japanese War. Part of the Russian defeat was the Japanese occupation of Port Arthur. Britain then made a deal with Japan that they could remain in Weihai so long as the Japanese held onto Port Arthur. To appease the Chinese, the British signed a twenty-five-year lease, agreeing to return Weihai to the Chinese Imperial Government when the lease expired, which in 1905 meant that Weihai would be returned in 1930.

The Glory of the British Empire

The early 20th century was the Golden Age of the British Empire. From the period after the Great War, to the onset of the Second World War, Britain was powerful, far-reaching and dominant. British culture, customs, legal systems, education, dress, and language were spread far around the world.

Children in school learnt about the Empire, and the role it played in making Britain great. People in countries like New Zealand and Australia saw themselves as being British, rather than being Australian or New Zealanders. After the First World War, monuments and memorials were erected to those who had died for the “Empire”, rather than for Australia, New Zealand, or Canada. Strong colonial and cultural ties held the empire together and drew soldiers to fight for Britain as their ‘mother country’, who had brought modernisation, culture and civility to their lands.

The Definitions of Empire

If you read any old documents about the British Empire, such as maps, letters, newspapers and so-forth, you’ll notice that each country within the empire is generally called something different. Some are labeled ‘colonies’, others are ‘mandates’, some are ‘protectorates’, and a rare few are named as ‘dominions’. And yet, they were all considered part of the Empire. What is the difference between all these terms?

The Colonies

Also sometimes called a Crown Colony, a colony, such as the Colony of Hong Kong, was a country, or landmass, or part of a landmass, which was ruled by the British Government. The government’s representative in said colony was the local governor. He reported to the Colonial Office in London.

The Protectorates

A protectorate is pretty much what it sounds like. And it could be a rather cushy arrangement, if you could get it. As the name implies, a country became a protectorate of the British Empire when it allowed the Empire to control certain parts of its government policy, such as foreign policy, and its policies concerning the country’s defence from foreign aggression. One example of this is the Protectorate of Egypt.

In return for allowing the British to control such things as foreign relations and trade, and in return for having British military protection against its enemies, a country’s ruler, or government, could continue running the country as before, with certain things lifted off their shoulders…but with other things added on. For example, the British weren’t interested in Egypt for the free tours of the Valley of the Kings. They were interested in it because of the Suez Canal, the water-highway to the jewel of their empire in the East: India! In return for use and control of the Canal, the British allowed the Egyptians to run their own country as they had always done.

The Mandates

The most famous British mandate was the Mandate of Palestine (modern Israel).

In the 1920s, the newly-formed League of Nations (the direct predecessor to the U.N.) confiscated former German and Turkish colonies, and distributed them between the two main victors of the Great War: Britain and France. Basically, Britain and France got Turkish and German colonies as ‘prizes’, or ‘compensation’, for the war.

Legally, these mandates were under the control of the League of Nations. But the League of Nations was well…a League! A body. Not a country. And the League couldn’t control a mandate directly. So they passed control of these mandates to the victors of the Great War.

The Dominions

On a lot of old maps, you’ll see things like the Dominion of Canada, the Dominion of Australia, and the Dominion of New Zealand. What are Dominions?

Dominions were colonies of the Empire which had ‘grown up’, basically. They were seen as highly-developed, modern countries, well-capable of self-governance and self-preservation, without the aid of Mother England. They were like the responsible teenagers in an imperial family, to whom old John Bull had given the keys to the family car, on the condition that they didn’t trash it, or crash it, and that they returned it in good working order.

The Dominions were therefore allowed to be more-or-less self-governing. After the Statute of Westminster in 1931, the Dominions became countries in their own right, no longer colonies in law. But they were still seen as being part of the Empire, and bound to give imperial service if war came. Indeed, when war did come, all the Dominions pledged troops to help defend the Empire.

There was talk of making a ‘Dominion of India’. India wanted independence, but Britain was not willing to let go of its little jewel. It saw making India a Dominion as a happy compromise between the two polar options of remaining a colony, or becoming totally independent.  However, a little incident called the Second World War interrupted these plans before they could be fully carried out.

War and the Empire

The British Empire was constantly at war, in one way or another, with one country or another. The French and Indian Wars, the American War of Independence, the War of 1812, the French Revolutionary Wars, the Napoleonic Wars, the Opium Wars, the Crimean War of the 1850s, the First and Second Afghan Wars. The Mahdist War of 1881 lasted nearly twenty years! Then you had the Boer War, the Great War, and the Second World War.

One reason why Britain managed to engage in so many wars, and survive, and in some cases, prosper from them, was due in large part to its empire. In very few of these wars did Britain ever fight alone. Even when it didn’t have allies fighting alongside it, Britain’s fighting force comprised not only home-born Englishmen, but also a large number of imperial troops: Indians, Australians, Canadians, New Zealanders, and Africans. They all signed up for war! When Australia became a nation in 1901, instead of a gaggle of colonies, Australian colonial soldiers, freshly returned from the Boer War in Africa, marched through the streets of Australian cities as part of the federation celebrations.

“The Sun Never Sets on the British Empire”

The cohesion of the British Empire began to crumble during the Second World War.

Extensive disarmament during the interwar years had greatly weakened Britain’s ability to wage war. Britons and their colonial counterparts had become complacent in their faith in the might of the Royal Navy, which for two hundred years had been the preeminent naval force in the world.

Defending the Empire became increasingly difficult as it grew in size. In the years after WWI, Britain believed that the “Singapore Strategy”, its imperial defence-plan based around Singapore, would protect its holdings in the Far East.

The strategy involved building up Singapore as a military stronghold for the army, the navy, and the Royal Air Force. In the event of Japanese aggression, the navy could intercept Japanese warships, and the air force and army could protect Singapore from invasion by land or air. The navy would be able to protect Singapore, Hong Kong and Malaya from Japanese invasion, or to drive out the Japanese, if they did invade.

Under ideal circumstances, such a plan would be wonderful. But in practice, it fell flat. The British Royal Navy simply did not have the seapower that it once had. It had neither the ships, sailors, airmen, nor aircraft required to protect both England and Singapore. Great Britain was having enough trouble defending its own waters against German U-boats, let alone Japanese battleships and aircraft carriers in the Pacific.

On top of that, Singapore simply wasn’t equipped to hold off a Japanese advance from any direction, by any means. Even though the British and colonial forces on Singapore vastly outnumbered those of the Japanese, the British lacked food, ammunition, firearms, artillery, aircraft, and naval firepower; those resources were already stretched thin by British disarmament during the 1920s and 30s.

The fall of Singapore after just one week of fighting the Japanese was a great shock to the Empire, especially to Australia and New Zealand, who had relied on Singapore to hold back the Japanese. In Australia, the fall of Singapore showed the government that Britain could not be trusted to protect its empire.

When Darwin was bombed, Australian prime minister John Curtin, defying orders from Churchill, ordered all Australian troops serving overseas (in the Middle East and Africa) to be returned home at once. For once, protection of the homeland was to take precedence over protection of the Empire. Since the Empire wasn’t able to provide protection, Australia would have to provide its own, even if it came at the expense of keeping the Empire together.

Even Winston Churchill, an ardent Imperialist, realised the futility of protecting the entire empire, and realised that certain sections would be impossible to defend without seriously compromising the defence of Great Britain. British colonies in Hong Kong, Malaya, Singapore and in the Pacific islands gradually fell to Japanese invasion and occupation.

By the 1930s, the Empire was already beginning to fall apart. Independence movements in countries like Iraq (a British mandate from 1920), Palestine (a mandate since 1920) and India were forcing Britain to let go of its imperial ambitions.

For the most part, independence from Britain for these countries came relatively peacefully, and in the years after the Second World War, many of the British colonies gained independence.

Independence was desired for a number of reasons: the simple desire of a country’s people to rule themselves, the lack of contact and cohesion felt with Great Britain, or, in some cases, the realisation or belief that the British Empire could not protect them in times of war, as had been the case with Singapore and Hong Kong.

The British Commonwealth

Also called the Commonwealth of Nations, the British Commonwealth was formed in the years during and after the Second World War. The Commonwealth is not an empire, but rather a collection of countries tied together by cultural, historical and economic similarities. Almost all of them are former British colonies.

The Commonwealth recognises that no one country within this exclusive ‘club’ is below another, and that each should share equal status within the Commonwealth. It was formed when the British realised that some of their colonies (such as Australia, New Zealand and Canada) were becoming developed, cultured, civilised and powerful in their own right. So that these progressive countries did not feel like Britain’s underlings, the Commonwealth was formed. Now, the Dominions would not be above the colonies, and the colonies would not be below Britain; they would all be on the same level, and part of the same ‘club’: the British Commonwealth.

The Empire Today

The sun has long since set on the British Empire, as it has on nearly all great empires. But even after the end of the empire, it still makes appearances in the news and in films, documentaries and works of fiction, as a look back at an age that was.

During the 1982 Falklands War, a conflict that lasted barely three months, the famous New York magazine “Newsweek” printed this cover:

An imperial war without the Empire…

In four simple words (‘The Empire Strikes Back’), this title comments on the recent Star Wars film of the same name, on the former British Empire, and on the fabled might of the Royal Navy, which had enabled the formation of that empire, so many hundreds of years ago.

Imperial Reading and Watching

The British Empire lasted for hundreds of years. It grew, it shrank, it grew again, until it came to dominate the world, spreading British customs, ideals, education, government, culture, food and language around the globe. This posting only covers the major elements of the history of the British Empire. But if you want to find out more, there’s plenty of information to be found in the following locations and publications. They’ll cover the empire in much greater detail.

The British Empire

“The Victorian School” on the British Empire.

Documentary: “The Fall of the British Empire”.

Documentary Series: “Empire” (presented by Jeremy Paxman).


Sweet, Cold and Delicious: The History of Ice-Cream

As I write this, the second-southernmost state of the Commonwealth of Australia is steadily being slow-roasted into hellish oblivion. For the third week in a row, we’re having temperatures over 30°C. And that is what has inspired this posting about the history of ice-cream.

Heaven, I’m in Heaven, and my heart beats so, that I can hardly speak.
And I seem to find the happiness I seek… 

Where Does Ice-Cream Come From?

Variations of ice-cream have existed for centuries. Cold, sweet foods which contained ice as a main ingredient date back to ancient times, in cultures as far apart as China and Ancient Persia (Iran, today), all the way to the Roman Empire. But how did ancient man produce these sweet, cooling treats, without freezers or refrigerators?

The First ‘Ice-Cream’

The first versions of ice-cream, which emerged in these ancient cultures, used crushed snow as the main ingredient. To the snow (stored in caves during hot weather, or harvested from mountains which remained cold all-year-round), various ingredients were added, depending on the tastes of the consumers, and the country of manufacture.

The first ice-creams of a sort were fruit-based, and among the main ingredients were fruit-juices, or purees. Of course, you could add anything you wanted to the ice; other ingredients included rosewater, saffron, or the crushed pulp of berries.

Living in the boiling climates that they did, it was the Arabs who developed ice-cream as we might know it today. Originally, the fruit that they added to crushed ice was there not only to give it flavour, but also to sweeten it.

Eventually, Arabian innovators changed the recipe to improve taste and texture. To do this, sweetened milk was added to the ice instead of fruit, to create bulk and substance. And they used pure sugar, rather than the sugars found in fruit, to provide the sweetness. For the first time in history (about 900 A.D.), we had our first ‘iced cream’, which literally combined ice and cream (okay, milk…) to form a dessert that would remain popular for millennia.

The Spread of Ice-Cream

It took a while, but by the early 1700s, ice-cream was becoming popular all over the world. Recipes varied from country to country, but it was catching on fast. There were a few false starts and mistakes during the early years, but even these apparent failures gave us desserts which have survived the test of time, and became regional varieties of ice-cream; Italian gelato is one example of this.

Ice-cream became very popular in Europe. In France and Italy, and then eventually in England, too. By the late 1600s and early 1700s, ice-cream recipes had appeared, printed in a number of languages, including French and English. One of the earliest recipes for ice-cream in English dates to 1718! “Ice Cream” first appears as a dictionary-entry in 1744!

During the 1790s and the early 1800s, French aggression (remember a little chap named Bonaparte?) on the European mainland was driving Italians away from their homes. Italian refugees fled across the Channel to England, bringing their ice-creaming technology and skills with them.

Even before then, however, the popularity of ice-cream was spreading even further, and this sweet, cool dessert reached the Americas in the mid-and-late 1700s. The first ice-cream parlour in the ‘States opened in New York City in 1776. Ice-cream had been introduced to the colonials by Quaker migrants from Europe. Thomas Jefferson’s favourite flavour was supposedly vanilla.

How Do you Make Ice-Cream?

I hear you. How do you make ice-cream? They didn’t have freezers back then. They didn’t have fridges. And surely you can’t get ice and snow all year ’round? How did they make it in the summer, for example, when ice-cream would’ve been most popular? What to do, and more importantly, how to do it, when all the ice and snow is gone!?

Come to our aid, O great science of chemistry.

As far back as the early 1700s, housewives and professional ice-cream sellers had cracked the code of making ice-cream without all the fancy freezing and chilling apparatus which we take for granted today. Here’s how it’s done.

First, you need a pot or a can made of metal. Into this can, you put the ingredients of your ice-cream: the cream or milk, the flavourings and so-forth.

Find a larger pot. Line the bottom of the pot with ice. Lots of it. Put the smaller pot inside the larger pot, and pack in the space on the sides with even more ice. Now, just add salt.

A LOT of salt.

One particular recipe calls for a whole pound of salt.

What happens here, you ask?

The salt mixes with the ice, and the ice begins to melt.

The salt dissolves into the meltwater and lowers its freezing point, so the mixture can stay liquid well below 0°C, and the remaining ice keeps on melting into it. Melting ice absorbs heat from everything around it, and this drives the temperature of the salt-water-ice mix down even further; a heavily-salted slurry can get as cold as about −21°C.

This whole process is aided by putting the entire concoction of ice, salt, water and ice-cream into the basement or cellar. The cold air slows down the melting of the ice that hasn’t already melted, and so the whole process is prolonged. The result is that the ice-and-saltwater slurry chills the sides of the canister sitting inside the main ice-pot. This, in turn, freezes the ice-cream mix inside the inner pot. Once the process is complete…you have ice-cream!
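For the curious, the chemistry above can be put into rough numbers using the idealised textbook freezing-point depression formula, ΔTf = i · Kf · m. This is only a sketch: the function name, the assumption of roughly a kilogram of meltwater, and the constants are my own illustrative choices, not anything recorded in the old recipes.

```python
# Rough estimate of how cold a salt-and-ice slurry can get, using the
# idealised freezing-point depression formula dTf = i * Kf * m.
# (Illustrative only: real brine bottoms out at the eutectic point.)

KF_WATER = 1.86          # cryoscopic constant of water, in C.kg/mol
VANT_HOFF_NACL = 2       # table salt dissociates into two ions
MOLAR_MASS_NACL = 58.44  # grams per mole
EUTECTIC_C = -21.1       # coldest any NaCl brine can actually get

def slurry_temperature_c(salt_g: float, meltwater_kg: float) -> float:
    """Estimated temperature (deg C) of an ice/salt/water slurry."""
    molality = (salt_g / MOLAR_MASS_NACL) / meltwater_kg
    depression = VANT_HOFF_NACL * KF_WATER * molality
    return max(-depression, EUTECTIC_C)

# The old recipe's "whole pound of salt" (roughly 454 g) in about a
# kilogram of meltwater is far more than enough to hit the eutectic:
print(slurry_temperature_c(454, 1.0))   # -21.1
```

In other words, a whole pound of salt was overkill on paper, but it guaranteed the coldest slurry physically possible.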

Simple.

Okay, not so simple.

The problem with this method is that, while it worked, it took a very long time. Up to four hours. When’s the last time you waited four hours to eat ice-cream?

A faster method of making ice-cream was needed. And in the early 1800s, that method arrived, in the United States…

Machine-Made Ice-Cream!

Since the early 1700s, ice-cream had been made the slow way. You filled a can with ice-cream, you sat it in a basin of ice and salt, and let basic laws of science do the rest. It produced a great result, at the expense of a lot of time. Something better had to be found to produce ice-cream in greater quantities, or at least, smaller quantities at a faster pace!

Enter…this:

Believe it or not, this is the world’s first-ever purpose-built ice-cream maker.

Yes. That.

It was invented in 1843 by Nancy Johnson, a lady from New Jersey, in the United States.

How does it work, you ask? It works more or less the same as the previous method mentioned above, except this one takes more muscle. It produces ice-cream in the following way:

1. Put your ice-cream mixture into the interior canister.

2. Fill the bucket with ice, and salt.

3. Turn the crank.

And how exactly does this produce ice-cream?

Constantly turning the crank moved the interior can around in the slurry of saltwater and ice. This nonstop agitation not only mixed up the ice and water, but also mixed up the ice-cream. The result is that more of the ice-cream mixture gets to contact the freezing-cold sides of its metal container, which means that the temperature of the batch as a whole decreases much faster. The faster you crank, the faster this happens, and the sooner you get ice-cream!
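A back-of-the-envelope way to see why cranking faster means faster freezing is Newton’s law of cooling, in which stirring raises the heat-transfer rate between the cold brine and the cream. Everything below (the function, the rate constants, the resulting times) is an illustrative assumption, not data about any real freezer:

```python
# Toy model of why agitation speeds up freezing: Newton's law of
# cooling, dT/dt = -k * (T - T_brine). Stirring the can around in the
# slurry raises the heat-transfer rate k; the numbers here are made up.

BRINE_C = -21.0   # temperature of the surrounding salt/ice slurry

def minutes_until(target_c, start_c, k_per_min, dt=0.1):
    """Step dT/dt = -k(T - BRINE_C) forward until the mix cools to target_c."""
    t, temp = 0.0, start_c
    while temp > target_c:
        temp += -k_per_min * (temp - BRINE_C) * dt
        t += dt
    return t

still = minutes_until(-3.0, 20.0, 0.01)    # can left sitting in the ice
cranked = minutes_until(-3.0, 20.0, 0.08)  # can agitated by the crank
print(round(still), round(cranked))
```

Under this toy model, an eight-fold increase in the transfer rate cuts the freezing time by roughly the same factor, which is broadly how the cranked freezer beat hours of idle waiting.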

A bonus of the Johnson method of ice-cream making was that you also got ice-cream of a much better texture. The previous method, of simply freezing the cream in a bucket of icy saltwater, produced a sort of ice-cream lump, similar to an ice-cube. The constant agitation of the hand-cranked freezer, by contrast, mixed the ice-cream around inside its receptacle, preventing it from clumping together into chunks and blocks, and aerating it at the same time. The result was smoother, creamier ice-cream!

The result of this was that in 1843, you had the Johnson Patent Ice Cream Freezer. There are conflicting reports about whether or not Ms. Johnson ever patented her machine. Some say she did, in September of 1843, while others say it was never patented at all. A Mr. William Young patented a similar machine and named it after her, in May, 1848. Whichever version of events is true, we have Nancy Johnson to thank for the first machine-made ice-cream in the world!

I Scream, You Scream, We All Scream for Ice-Cream!

From its crude beginnings in the Middle East, up until the mid-1800s, ice-cream was a delicacy and a treat. Phenomenally expensive and extremely fiddly, labour-intensive and tricky to make in any decent quantity, ice-cream was originally available only to the super-rich.

But it’s so easy. You get the cream, the sugar, the flavourings, you put it in a pot, you put the pot in the ice-water and the salt and…

It’s not so easy.

First, you needed the ice. To get that, you had to carve it out of frozen lakes, or haul it down from the mountains, and store it in ice-houses during winter. And you needed to have an ice-house to begin with! And the labourers or slaves to cut, dig and haul the ice.

Then, you needed the salt. Salt was so tricky for most people to get that for centuries, it was traded as currency. It’s where we get the word ‘salary’ from, because people used to be paid in salt, or paid money so that they could then go and buy salt for themselves. Salt could only be obtained at great expense of time, from evaporating great quantities of seawater to obtain the salt-crystals, which would then have to be washed, dried and purified. Or else it had to be dug out of salt-flats, crushed, and purified again. This made salt extremely expensive, and out of the reach of mere mortals like you and me.

The relative scarcity of the ice required to cool down the cream, and of the salt needed to provide the reaction, meant that large quantities of ice-cream were very difficult to make, and thus were only available to the richest of people, who could afford the expense of the ice and salt. Most ordinary people wouldn’t have bothered to waste precious salt (used for preserving fish and meat) on something as wasteful and extravagant as ice-cream! The damn thing melted if you left it on the kitchen table. What use was all that fuss over something that didn’t last?

It wasn’t until large quantities of ice and salt could be produced, harvested or sold cheaply enough for anyone to buy them that making ice-cream for everyone really became a going concern. Before then, it was simply too expensive.

Nancy Johnson’s ice-cream machine from the 1840s made efficient manufacture of ice-cream possible for the first time. Granted, these early hand-cranked machines could only freeze a small amount of ice-cream at a time, but they were a big improvement on waiting for hours and hours and hours for the same thing from a can sitting in a pot of salty slush!

Building on inventions such as the Johnson ice-cream freezer, by the mid-1800s, it was possible to produce ice-cream in commercial quantities, and the first company to do so was based in Maryland, in the United States.

The man responsible for the birth of commercial ice-cream manufacture was named Jacob Fussell. Fussell was a dairy-produce seller. He made pretty good money out of it, but he struggled constantly to sell his containers of cream. Frustrated about the fact that this cream would otherwise constantly go to waste, Fussell opened his first ice-cream factory in 1851.

Fussell spread the gospel of ice-cream, and as more ice-cream manufacturers sprang up around the ‘States, you had ice-cream for the common man.

Ice-Cream in Modern Times

By the 1900s, ice-cream was becoming popular everywhere. In the 1920s, the first electric refrigerators, and by extension, the first electric freezers, made ice-cream production, selling, buying, storing and of course, eating, much easier. It was during this time that companies and distributors like Good Humor (1920), Streets (1930s) and Baskin-Robbins (1945) began making names for themselves…names which they still carry today.

Since the invention of the Johnson ice-cream freezer in the 1840s, ice-cream could now be made faster and cheaper. Refrigeration technology, and the technology to manufacture enormous, commercial quantities of ice also aided in the ability to make ice-cream available for everyone. This also led to ice-cream being served in different ways for the first time in history.

Ice-Cream on a Stick!

If as a child, or even as an adult, you ever went to the corner milk-bar, drugstore or convenience-shop, and opened the ice-cream bin, and pulled out an ice-cream bar on a little wooden paddle or stick, then you have two little boys to thank:

Frank Epperson, and Harry Burt Jnr.

Ice-cream-bars, or frozen, citrus-based popsicles, or icy-poles, were invented in the early 20th century by two boys living in the United States.

The first popsicle was invented in 1905, by little Frank Epperson. Epperson was eleven years old when he tried to make his own, homemade soft-drink. He poured the necessary ingredients into a cup, and stuck a wooden paddle-stick into it to stir the contents around. Epperson left the mix outside in the garden overnight, and went to sleep.

During the night, the temperature plunged below freezing. When little Frankie woke up the next day, he found that his mixture had frozen solid inside the cup! Undaunted, as all little boys are, he simply turned the cup upside down, knocked out the frozen soda-pop, grabbed his new invention by the stirrer-cum-handle, and just started sucking on it. The world’s first-ever popsicle!

The invention of the world’s first ice-cream bar can be attributed to young Harry Burt.

Okay, so Burt wasn’t so young. But he did invent the ice-cream bar on a stick.

Burt’s father, Harry Burt senior, was experimenting with a way to serve ice-cream on the go. To make the ice-cream easier to sell, he set the cream into blocks. To keep the customer’s hands clean, he dipped the blocks in chocolate and froze them so that clean hands need not be soiled by contact with melting ice-cream.

The problem was that…what happens when the chocolate melts?

This was the point brought up by Harry’s daughter, Ruth Burt. Harry wasn’t sure what to do about it. That was when Ruth’s younger brother, 20-year-old Harry Junior, came up with the idea of freezing the ice-cream with little wooden sticks already inside them, to give the customer something to hold onto, and minimise the chances of ice-cream going all over the customer’s hands.

Daddy liked the idea so much that he gave it a shot, and success ensued! Between them, the three Burts had invented the ice-cream bar on a stick!

Sundaes on Sundays?

Ah. The joys of having a dish made almost entirely out of ice-cream. Sinful, isn’t it?

Apparently, someone thought so, because in the United States, it was illegal to eat ice-cream on Sunday!

Is that true?

Honestly, nobody knows. Maybe it is. Maybe it isn’t. The legend goes that since selling ice-cream was illegal on Sundays, ice-cream vendors would sell ‘sundaes’ instead, deliberately mis-spelling the name to circumvent the religious morality laws (‘blue laws’) which were killing their businesses.

Something else that nobody knows is where the sundae, as an entity, was invented. The United States, certainly. But which city? And which state? Nobody knows for sure.

Whoever invented sundaes, and for whatever variety of reasons, we should thank them for inventing one of the most enjoyable and most variable ways of consuming ice-cream ever thought of.

…Banana split, anybody?

Sweet, Creamy Goodness

Looking for more information? Here are some links…

http://firesidelibrarian.com/projects/s532/icecream.html

http://inventors.about.com/od/foodrelatedinventions/a/ice_cream.htm

http://www.idfa.org/news–views/media-kits/ice-cream/the-history-of-ice-cream/


Up in Smoke: A History of Firefighting

Fire. One of man’s greatest discoveries. It allowed for light, heat, and the invention of the barbeque! For millennia, fire was essential to survival in one form or another. But fire was, and remains, a constant threat. Handled properly and safely, fire provided light, heat and the ability to cook delicious meals. But an act of carelessness or a lack of foresight could turn one of the most important forces known to man into a destructive cataclysm far beyond our control.

To prevent and to manage events of the latter nature, we have firefighters, and firefighting equipment. Fire-fighters have been around ever since Ancient Rome, and they have a long and fascinating history, which this posting will explore.

Ancient Firefighting

Firemen have existed for centuries, in one form or another. There are fire-fighting teams that go back to Ancient Egypt and even Ancient Rome.

The first fire-fighting brigade of significance was established in Rome, by a man named Marcus Licinius Crassus. A wealthy businessman, Crassus employed a team of 500 men whose job it was to extinguish structural fires in the city of Rome…for a fee…to be paid…before the firefighters would even tip so much as a thimble of water…

So much for that.

The Emperor Augustus liked the idea of Rome having a firefighting force. He established the world’s first professional fire-brigade. Called the Vigiles, these men patrolled the streets at night. Upon the alarm of fire, they formed bucket-brigades and teams of laddermen and hook-men, who extinguished fires, or pulled buildings down to prevent the fire from spreading to surrounding structures. The Vigiles did double-duty as an early form of police-force as well, keeping an eye out for crime and making sure that the city was safe from both fire and thieves…generally, being vigilant. Yes, that’s where the word comes from. It’s also where we get the term “vigilante”.

Ancient Firefighting Tools

For centuries, up until the 1800s, firefighting equipment was rudimentary. Buckets of water, long fire-hooks to pull down buildings, ladders to reach high windows, primitive hand-powered water-pumps, and only moderately effective “water-squirts” (a handheld water-dispenser, a bit like a modern child’s water-gun) were the main tools of the trade. Fighting a fire was less about putting the fire out, and more about preventing its spread. Fire-hooks were used to pull down burning buildings in danger of collapse, or to destroy buildings in the fire’s path, to create a firebreak which the flames couldn’t jump, thus containing their destructive force.

During the medieval period, firefighting was largely self-organised. Various European monarchs (such as Louis IX of France), set up state-funded fire-fighters, but also encouraged regular citizens to form their own “fire-bands”. These acted like Neighbourhood Watch committees, which patrolled the streets at night, keeping an eye out for fires and crimes in progress.

The Great Fire of London and Advances in Technology

In 1666, the ancient city of London was razed to the ground by a fire started in the shop of the King’s baker, the unfortunate Mr. Thomas Farynor. The Great Fire was a disaster unprecedented in the history of London. Sure, there had been fires before, but no fire had ever burnt down four-fifths of the city!

The Great Fire of London also instituted the start of a newfangled concept in the world…insurance! For the first time ever, you could now pay for fire-insurance! An insurance-company would open an account and, in consideration of a few pounds each year, you would have fire-protection in the event of your property going up in flames. In return for your patronage, the insurance-company gave you a big, fancy metal “Fire-Mark”. This plaque was to be affixed to your residence in a prominent place (such as next to the front door), to indicate to the company’s private fire-brigade that you were a paying customer whom they were expected to help in the event of a house-fire.

And now, fighting fires was slowly getting easier, too!

By this time, the first really successful fire-pumps had been developed. They were heavy, lumbering things that needed a horse to pull them, but they did work. Their main issue, however, was that they had a very short range. You had to be right in front of the fire for the pump to be any good at all.

These early pumps were called ‘force-pumps’. This meant that water filled a piston-shaft, and the piston forced the water up a pipe and out of a nozzle on each down-stroke. On the up-stroke, the piston-shaft was again filled with water from the tank, and again, forced out by the down-stroke.

These pumps were ineffective and rather time-wasting. The man who improved them was a German inventor named Hans Hautsch, who developed a suction-and-force pump in the 1600s. With his design, pumping the handle up and down both forced water out of the piston-shaft and pulled more water in from the tank, creating a constant and more powerful flow of water.

Although it was an improvement, this new double-action pump was still of limited use until the intervention of a Dutchman, Jan van der Heyden.

Van der Heyden (1637-1712) developed the one piece of equipment so vital to firefighting that, centuries later, every fire-station on earth still has one!…in fact, every fire-station probably has dozens of these things!

The fire-hose.

Jan van der Heyden was a Dutch inventor. He developed his newfangled ‘fire-hose’ in the 1660s. His brother Nicolaes was a hydraulic engineer…a handy person to have when designing fire-fighting equipment…and together, they developed a perfected version of the fire-hose in 1672.

Affixed to the spouts of the new double-action water-pumps, the van der Heyden brothers’ new fire-hoses (made of leather, the only material strong enough to cope with the pressure) allowed people for the first time to have directional, pressurised water as a means of attacking a fire. No longer was the range of your attack limited by how far you could throw a bucket or how close you could park the fire-engine, but rather, by how fast you could pump the handle. Everything else was managed by the hose. Direction, height, distance…all you had to do was point and shoot. A great improvement on standing six feet from a blazing building holding a piddly bucket of water. Despite these advances, however, in Colonial America it was still the law in many towns that every household keep a bucket of water outside the front door at night, as a safeguard against fires. The buckets were used by the local fire-watch, and would be returned to the home-owner once the fire had been put out.

The Development of the Fire Engine

The first fire-engines, carrying the new water-pumps and leather hoses, hooks and ladders, axes and buckets, were developed in the 1700s. By the Georgian era, firefighting had advanced to the point that it was finally practical to build a mobile firefighting unit: the fire ENGINE.

The fire-engine had been in development since the 1600s, but the first really successful versions took root at the dawn of the 18th century. Horse-drawn fire-wagons could now be directed to any part of a city with their supply of water, hose, pump, men and equipment, to tackle a major conflagration.

It was around this time that the first modern firefighting brigades were developed. While there were still penny-pinching, profiteering private fire-companies around (they were particularly notorious in the United States), city and state governments were now establishing the first paid fire-fighters.

The first city to have such a fire-brigade was Paris. Created by order of King Louis XIV, the “Company of Pump Guards”, as it was called, was the first professional, state-funded, uniformed fire-brigade in the world.

To prevent the squabbling and fighting that had attended the Ancient Roman firefighters, and even the colonial and private firefighters in the United States, the French Government decreed that ALL firefighting services would be provided by the city, to the victim, FREE OF CHARGE.

As the 1700s progressed into the 1800s, more and more city-funded fire-brigades were established. Big cities such as London, Edinburgh and New York soon had city fire-services and organised firefighting had become a reality.

Fire-Trucks

Fire-trucks are famous, aren’t they? Jangling bells, wailing sirens, flashing lights, and that distinctive “fire-engine red” paint-job!

The first-ever modern fire-truck came out in the early 1900s, and belonged to the Springfield Fire Department in Springfield, Massachusetts, USA. Here’s a photograph of it:

This fire-truck was made ca. 1905, in an age when most motorised vehicles were still the handmade, and extremely expensive, preserve of the upper classes. But it is, nonetheless, the world’s first modern fire-truck.

Victorian-era Firefighting

The 1800s saw a huge rise in urban populations, industry, and…fire. By now, most big cities had their own, state-funded fire-services. But technology was still rather primitive. To improve firefighting, a number of changes had to be made.

Fire-wagons were still horse-drawn, but to improve efficiency, the first coal-fired, steam-powered water-pumps were installed on fire-engines in the 1800s. These allowed for longer fire-fighting times, and for more men to be used fighting the fire, rather than manning the pump.

It was around this time that the fire-dog became famous.

Dalmatian dogs are a common symbol of fire-stations in the United States. They’re famous for being white with black spots, for wearing classic red fire-helmets and for rescuing people from burning buildings!

But why are they there in the first place?

Fire-dogs, the Dalmatian dogs which are so strongly associated with firehouses, are descended from 18th- and 19th-century “carriage dogs”. Carriage dogs (an ancestor of the modern Dalmatian) were the canine companions of coachmen back in the days of horse-drawn carriages. They were a sort of car-alarm with fur.

In the 1700s and 1800s, when nearly all transport was horse-drawn, the welfare of the horse that did the drawing was extremely important. Especially when the transport happened to be a fire-engine. To protect the horses from threats such as horse-thieves, it was common for stable-boys, grooms and coachmen to keep dogs near the horses, to drive away people intending the animals harm.

When a fire-engine went out on a call, the dogs went along with it, again, to guard the horses against people who might want to steal or harm the horses, which in the 1800s, were valuable assets.

The 1900s saw the end of the horse-drawn carriage, but the Dalmatian dog remained. They don’t run alongside, or guard the wheels of modern fire-trucks anymore, but they have stayed a symbol of firefighting ever since.

Fire Extinguishers

For most of history, the most widespread fire-extinguisher of any kind was a bucket of water stored next to the stove, or on the front porch.

The first modern fire extinguishers were developed in the 1800s.

Capt. George William Manby, a writer and inventor from England, created the first modern fire extinguisher in 1813.

Designed to be portable, it was made of copper and weighed about 12kg! But it was, nonetheless, a fire-extinguisher.

It was filled with a water-and-potassium-carbonate solution, contained under pressure. In the event of a fire, the pressurized solution could be sprayed out of the nozzle to extinguish the blaze.

In the second half of the 1800s, numerous inventors came up with extinguishers which did more than just spray ordinary water onto a fire. Starting in the 1860s, inventors created the “soda-acid” fire extinguisher, which was particularly useful for fires where there might be poisonous chemicals around.

The soda-acid extinguisher worked by having the main canister of the extinguisher filled with a mix of water, and sodium bicarbonate…baking-soda!…and a separate phial filled with sulphuric acid, sealed inside the main canister, along with the water-soda mixture.

In the event of a fire, the extinguisher (depending on the design) was either tipped upside-down, or a plunger was pushed or pulled. The idea was that this motion would break the glass phial inside the extinguisher, releasing the acid into the water-soda mixture. The resultant reaction produced a large volume of carbon-dioxide gas, and with it, high pressure inside the canister. This pressure forced the contents out of the nozzle to put out the fire.
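For the chemically inclined, the reaction at work here is the standard textbook acid-and-carbonate reaction (sketched here in its idealised proportions; exact formulations varied between manufacturers):

H₂SO₄ + 2 NaHCO₃ → Na₂SO₄ + 2 H₂O + 2 CO₂

It’s the carbon dioxide on the right-hand side that does the work, rapidly building up the pressure that drives the water out of the nozzle.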

One of the more interesting types of fire-extinguishers developed during the 19th and early 20th centuries was the so-called “fire-grenade”.

An antique glass ‘grenade’ fire-extinguisher

The fire “grenade” was a sphere of glass filled with either salt-water, or the chemical carbon tetrachloride (“CTC”).

Fire-grenades could be used by firemen or people in distress, to put out a fire from a distance. One simply lined up the fire in one’s sight, and threw the grenade at its base. The glass shattered and the spreading water (or chemicals, as the case might be) put out the fire, with minimal risk to the firefighter or person in distress.

In some places, fire-grenades were placed on special hair-trigger harnesses above doorways in big, public buildings. This way, if there was a fire, the heat would trip the harnesses and the grenades would drop into the doorways below. This kept the doorway clear of flames, allowing people a safe escape-route (so long as you were fine with running over broken glass!).

Fire-Helmet

Ah, the classic fire-helmet. Originally made of leather or brass, and today more commonly made of special plastics, the fire-helmet was developed during the Victorian era as a way of protecting firemen from two of the biggest dangers of fighting a fire: collapsing buildings, and getting soaked.

Fire-helmets are famous for their long, sloping rear brims. These are designed to protect the neck and the back of the head, and to deflect falling water away from a fireman’s neck and stop it going down the back of his shirt. Meanwhile, the iconic shape is designed to protect against falling objects, such as collapsing scaffolding, bricks and other debris that might come crashing down out of a fire-weakened building.

Brass helmets were popular during the Victorian era, but they started being replaced by safer helmets in the 1900s because of the risk of electrocution from electrical fires. As a result, fire-helmets today are made of special, heat-resistant plastics and composite materials.

Fire Hydrants

The first ‘fire hydrants’ of a sort, were developed in the 1600s. Cities lucky enough to have running water had it transported around town in wooden mains-pipes, which were buried under the streets. In the event of a fire, firemen would dig a hole in the street to expose the water-mains below. A hand-drill was used to bore a hole into the pipe. As the water rushed out and filled the hole around the pipe, a bucket-line could be formed around the hole, filling buckets with water and sending them, hand-over-hand, to the blaze.

When the fire was extinguished, the hole in the mains pipe was plugged with a wooden bung. The hole in the street was filled in, and a marker was placed on the spot. This was so that any future firefighters would be alerted to the presence of a previous bore-hole in the area, if they ever needed to fight a fire in that street again. This is why some people still call fire-hydrants ‘plugs’ to this day; because they literally plugged the water-mains.

The modern fire-hydrant, the kind we see on street-corners, painted bright red, came around in the 1800s. It was invented in 1801 by Frederick Graff, then chief engineer of the Philadelphia Water Works. Ironically, the patent-papers for Graff’s invention were lost when the United States Patent Office in Washington D.C. burnt down in 1836!

The Firepole

It’s a scene played out in old movies, cartoons, T.V. shows, and in almost every episode of “Fireman Sam“; the call comes in, the alarm-bells start ringing, and firemen leap into action, jumping for the fireman’s pole, swinging around and sliding down the shaft to the ground floor of the firehouse, to jump into their uniforms, put on their helmets, start up the fire-truck and charge off to the scene of some catastrophe, red lights and sirens glaring and blaring.

The fire-pole was invented in Chicago in the 1870s. As with many inventions, necessity, and a certain level of ingenuity, gave birth to one of the most iconic pieces of firefighting equipment ever.

Firehouse No. 21 in Chicago was an all-black firehouse, and the resident captain, David Kenyon, was stupefied when he saw one of his firemen, George Reid, slide down a pole from the second storey of their three-storey firehouse to the ground floor to respond to an emergency.

At the time, fire-poles did not exist; Reid had actually used the lashing-pole which the firehouse used in transporting bales of hay for the fire-wagon horses. The pole was used as a securing-point when hay-bales were loaded onto the hay-wagon, to stop them rolling off during their deliveries.

Kenyon was so impressed that he pestered the Chicago fire-chief over and over and over again to give him permission to install a similar, purpose-built pole in his firehouse. Eventually, the chief gave in, and agreed – provided that the funds needed for the installation and maintenance of the pole came entirely out of the pockets of the firemen who used it.

And so it came to be, that in 1878, the world’s first fireman’s pole was installed at Chicago Firehouse No. 21.

At first, nobody paid any attention to the pole, and other firemen thought it was stupid and ludicrous. It was some ridiculous toy to play around with when the boys at firehouse 21 had nothing else to do!

But other firehouses began to sit up and take notice when they realised that Firehouse 21 was responding to emergencies much faster, especially at night.

Not having to deal with doors, staircases, landings and overcrowded corridors meant that the firemen could literally slide into action and be ready to go in moments, compared with having to run down stairs, hold doors open, and risk tripping and falling over, especially in the dark.

With the benefits of fire-poles established, firehouses around the world began to be fitted with them. To make them stronger and longer-lasting, the world’s first metal fire-pole (made of brass) was installed in Boston, in 1880.

An old fire-pole with important safety-features:
Double trap-doors, and a safety-cage

These days, fire-poles are sometimes considered more of a hindrance than a help, because of the dangers of sprained or broken legs and ankles, and risks of losing one’s grip, and falling. Some countries have outlawed them altogether, but other countries continue to use them, albeit, with better safety-measures in-place, such as protective railings, and padded landing-mats. These prevent accidental falls, and cushion any hard landings.

 

The “Idiot Box” and the History of Television

The television, the T.V., the idiot-box, the electronic babysitter. That magical screen in our living-rooms which has brought us news, sports, weather, education, entertainment, excitement, bemusement and rage, has come a long way since its inception nearly 100 years ago.

This posting will have a look at the history of television, from its beginnings to the commencement of regular programming.

The Television and Us

For most of us in the 21st century, life without television is inconceivable. There are those, of course, who were born before it, but with it or without it, chances are that if you watch it regularly today, you would be hard-pressed to imagine your current and future existence without this magical device in your living-room. How many incredible events have been brought to us through the television? How many amazing films have we seen? Famous and memorable TV serials, and even advertisements. Everything from “Happy Days” to “Brylcreem” (just remember, only use a LITTLE dab), to “Are You Being Served?”

Mankind’s love-affair with the TV is so inseparable and unstoppable that it’s unthinkable it should ever go away. But where does TV come from?

A World Before Television

In a dark and soul-less time, before computers and fax-machines and mobile telephones, when eggs were 5c a dozen and penny-candy was really a penny, mankind tuned into the radio.

From the early 1920s until the late 1950s, we enjoyed a roughly 30-year period where radio was king. When we literally had to tune in and warm up, to enjoy a program over the air. This was the Golden Age of Radio. It brought us such memorable events as the Hindenburg Crash of 1937, the Declaration of War in 1939 and the Attack on Pearl Harbor in 1941, along with countless famous old-time radio programs, from “Gang Busters” to “Dragnet”, to “Richard Diamond” and “Abbott & Costello”.

If you want to read more about that, have a look here.

Back then, the family radio-set was an important piece of household equipment. But even by the 1930s, its dominance in our living-rooms was being threatened by a new kid on the block called television.

The Invention of Television

The word ‘television’ comes from the Greek ‘tele’, meaning ‘afar’. Just as telephone and telegraph mean sound, and writing, or messages, from afar, television means pictures from afar.

So, who invented television?

As with many great inventions, from airplanes to motor-cars, telephones, the fountain pen and the typewriter, television cannot be wholly attributed to one man.

Experiments in transmitting images over a distance date back as far as the late 1800s; however, television as we would recognise it today, that is, moving images transmitted to a screen, did not emerge until the mid-1920s. The man chiefly responsible for its creation was Mr. John Logie Baird (1888-1946), a Scotsman. To this day, the Australian TV industry still holds the “Logie Awards” every year in his honour.

Mr. Baird had been experimenting with transmitting images over the air since the early 1920s. However, it was not until the early 1930s that the first TV sets that we might know today appeared in shop windows.

Early Television

Named after its inventor, this is the Baird Televisor, ca. 1933, one of the first ever residential TV sets! It’s hardly widescreen, but it is a television.

Back in the 20s and 30s, radio was the dominant force for entertainment, education and news, and T.V. programming was often limited to a few hours, or even a few minutes a day, and nothing more than black-and-white film with no sound, or sound with no pictures! T.V. during the interwar period was little more than a fairground attraction, or a toy for the rich.

By the second half of the 1930s, TV started becoming more accessible and more advanced, although it still had a limited market. Picture-quality was not all it might have been, but now, TV sets had sound! Sets were still expensive, but those who could afford them bought them from famous department-stores like Selfridges in London. In the United States, T.V. broadcasting started in the 1930s, and Franklin D. Roosevelt was the first American president to appear on television, at the 1939 New York World’s Fair.

Nazi-Vision

That’s right! Nazi-Vision!

Believe it or not, it was the Nazis who created one of the world’s first national television networks. German factories started producing early TV sets in 1934, and the Nazis were among the very first people on earth to realise the potential for television to reach mass audiences at once, and spread the glorious Nazi ideologies of Strength through Joy, racial purity and an abundance of bratwurst for all!

Based in the German capital of Berlin, the Nazi-controlled broadcasting station and studio produced everything from propaganda movies to Nazi rallies, speeches and other material, which was transmitted to the screens of loyal Germans fortunate enough to own the first generation of home television-sets. While most of the programming was broadcast live and was never recorded, some 250-odd reels of ancient film remain, giving us a tantalizing look at television under the Nazis, from 1935-1944.

Although the Nazis could see that TV could be a great technology for spreading their ideologies and propaganda, they also realised that the technology would have to be greatly improved before it would work properly. The limitations of early cameras meant that picture-quality was mediocre at best. Their solution was to record their broadcasts onto film, and play it back later, like they did with any other movie. This not only improved quality, but it also had the unintended side-effect of giving us a record of Nazi television that has survived to this day.

Despite the Nazis’ grand vision, the relative expense of television sets meant that the audience for their programming was always rather small. Few people owned sets. Those who did were usually party-members with the money to spend, people in positions of power or authority, and a lucky few chosen private citizens. The rest of the sets were set up in public “Television Parlors” scattered around Berlin. These were little more than simple movie-theaters, where the big screen had been replaced by the small one.

Another opportunity for the Nazis came in 1936. That’s right, the Berlin Olympics of 1936, where Jesse Owens beat the Aryans and humiliated Hitler, were the first Olympics to be publicly televised!

However, the fact remained that, despite the Nazis’ best efforts, early television remained impractical on a large scale. They had improved some things, such as picture-quality and sound, but a limited audience meant that until the medium was more widely adopted and accepted, and better recording, broadcasting and receiving equipment had been devised, TV would be little more than a toy. Indeed, even by the outbreak of the Second World War, the entire nation of Germany had only about 500 television sets, scattered around the country.

Television and the War

By the early 1940s, some semblance of regular TV broadcasts had begun. In 1941, CBS in the United States was broadcasting televised news in 15-minute bulletins, twice a day. Regular programming began to introduce the TV shows that we would recognise today, although the limitations of the studio-cameras and lights of the period left much to be desired when it came to picture-quality. The war itself played a big role in holding back the development of TV. Rationing and shortages of almost everything needed to make TV sets, from wood to metal to glass, made them expensive luxury-items. And at any rate, the companies that made TV sets were more interested in making radios and other electronics for the war effort.

These shortcomings and interruptions severely affected the widespread use of televisions, and it wasn’t until after the war, in 1947, that regular T.V. broadcasting really took off in the United States.

In Germany, where television was being exploited for propaganda purposes, advances in technology had been made, but even then, programming was brief. Usually only a few hours a day, if at all. By autumn of 1944, with constant, heavy bombing-raids on German cities, and the war going badly for the Nazis, the national broadcasting company in Germany ceased transmissions.

Please Check your Local Paper for the Times

The war is over! Yay!

In the late 1940s, TV programming really started taking off. With the war over, more money and research could be profitably spent developing and improving the emerging medium of television. For the growing number of television-owners, there were now more frequent telecasts and a greater variety of options: everything from news programs to sitcoms to early kids’ shows like the famous “Howdy Doody” program, starting in 1947!

There was stiff competition from radio during this time, but one by one, popular radio programs of the 1930s and 40s slowly shifted from the old, to the new, setting up regular TV spots for themselves on the weekly schedule. For a while, some actors and performers ran concurrent TV and radio programs; “Dragnet” used to do it for nearly a whole decade!

By the early 1950s, TV was becoming more and more accepted, and popular shows such as “Amos & Andy” (1951) and the Jack Benny Program (1950), were big hits on TV. Radio-writers and musicians who found themselves suddenly unemployed, began scriptwriting for these newfangled television-series, and writing and recording music for TV shows.

The Shape of the Box

Early televisions of the 1930s and 40s closely followed the styles of furniture and radios of the period. A typical 1930s radio-set was large, with a handsome wood case, cloth-covered speakers and handsome bakelite knobs. Television sets were made in the same style. Here’s an RCA 360, from 1947, one of the first postwar televisions to be mass-produced and available to the public:

By the 1950s, as with many other things, from typewriters to radios to kitchen gadgets, sleeker lines, newer materials and different colour-palettes were all the rage. Boxy old wood-case televisions were out. Simpler, more uncluttered looks were in…

In the 50s, televisions were the latest and greatest thing around. Some people who couldn’t actually afford a set would just buy an aerial and stick it on their roofs, so that they could pretend they owned one, and keep up with the Joneses.

Remote Television

Almost as soon as TV started taking off, people started looking for ways to make the technology more appealing to the everyday user. Why should you have to get up and flip a dial and knob whenever you wanted to change the channel? That arduous, six, seven, or nine-foot trek to the set, and back again, is such an inconvenience! Surely there’s a better way?

I See the Light!

As early as 1950, the first TV remote-controls had been invented. Originally connected to the set itself by long cables, the first wireless TV-remotes, of the kind we recognise today, came out in the mid 1950s. One of the first wireless remotes was the Flashmatic, from 1955. It worked quite simply: You pressed the buttons on the controller and aimed it at the television. A beam of light from the remote hit a photoelectric panel on the TV set, which changed the channel.

Brilliant, but problematic. See, the light-sensitive electric cell on the television-set did not differentiate between the beam shot from the remote and any other source of light. If you turned on an electric lamp near the television, or even if you opened the curtains and let in the sunlight, the channel could change automatically, even without the remote!

A Click and a Switch!

Early TV remotes worked on light-beams affecting light-sensitive electric panels on the television set. They worked well enough, so long as you had a decent aim and there weren’t any interfering light-sources, but the drawbacks of their over-sensitivity and fiddly operation made them somewhat impractical. A better type of TV remote was invented shortly after, which relied not on light, but on sound. Pressing the remote-buttons set off clicks of different frequencies, which could be picked up by the TV set. Each frequency related to a specific command: changing the channel, or the volume, as the case may have been. But even this could be problematic, as some people with sensitive hearing could hear the pulses of sound (which were designed to be outside the normal human hearing-range).

Slice and Dice!

Don’tcha just hate it that, just when the show gets to the interesting bit, it suddenly breaks for a commercial?

You can thank TV remotes for that.

After the invention of the remote, it was discovered by studio bigwigs that airing commercials between shows was ineffective. Once a show was over, you could just turn the set off, or flip to another channel. And you didn’t have to watch the stupid commercial for Remington typewriters, or Brylcreem, or Pepsodent, or whatever other boring junk those commercial schmucks were trying to peddle in your own living room! How dare they invade your privacy like this!?

To remedy this, the modern format of television was created, where shows were split into segments or acts, just like a play at the theater. This allowed for advertising, but it also meant that people were less likely to flip away from the channel, in case they missed the return of their favourite TV episode, thereby increasing the viewer-numbers of TV commercials.

Crafty bastards…

The Golden Age of Television

The Golden Age of Television is defined as the period from the early 1950s up to the 1970s. It was during this period that many of the classic and famous TV shows that we know and love and remember, were broadcast. But more importantly, it was during this time (especially in the 50s and 60s), that TV gained dominance over radio for the first time in history. Also, it was during the 50s and 60s that TV developed its own style, format and language.

Previously, TV shows were modeled after radio-programs, but not everything used in radio was possible on television. This necessitated various changes, which led to the evolution of modern television. Shows produced on TV during and after this changeover are considered classics of television.

What shows, you might ask? Well, how about Dragnet? The Jack Benny Show? Amos & Andy? Leave It To Beaver? Life with Luigi, and numerous other programs.

Good Night, and Good Luck

Along with regular programming, the television revolutionized the broadcasting of news. Previously, you had the radio and the newspaper. But now, the nightly six o’clock bulletin became the mainstay of news, sports and weather. The news anchor and reporters became staples of nightly broadcasts. Programs like the 1950s “See It Now” began to replace radio broadcasts as the method for spreading news to the public. The line “Good night, and good luck” was the sign-off used by the famous reporter Edward R. Murrow, notable for reporting on the Blitz in London, and on McCarthyism during the 1950s.

We Return You to Your Regularly Scheduled Program…

By the 60s and 70s, TV had become the mainstay of most well-to-do households in the developed world, and had finally replaced radio as the main medium for electronic entertainment, music and news. It had by now, reached the format which we’re most familiar with.

The 60s and 70s saw many of the most famous TV shows in history take to the air, like Gilligan’s Island, the Addams Family, Are You Being Served?, Dad’s Army, Dragnet (which transferred from radio in the 1950s), and the Dick Van Dyke Show.

It was in the early 1970s that the first TV-recording equipment arrived on the scene. These days, we have DVD recorders and other technology that allow us to pause, rewind, record and watch multiple shows at once. But we wouldn’t have gotten anywhere if the VCR and the video-cassette hadn’t got there first. The first video-cassette recorders entered the market in the early 1970s, and the VHS tape, introduced in 1976, remained the standard method for recording TV-programs until the end of the 20th century. Tricks like putting sticky-tape over the slots in the tape-cassette to disable the anti-recording feature on some cassettes enabled people to use almost any cassette to record movies, TV shows and almost anything else that they wanted, right off their TV sets. VCDs, and eventually DVDs, and their accompanying recorders, would of course replace them starting in the late 1990s, but VHS tapes paved the way.

That brings us more or less to the modern day, so far as TVs are concerned. Some things have changed, such as the switch from old cathode-ray-tube (CRT) sets to digital TVs, and the disappearance of the need for a pair of rabbit-ear antennae, but in the past few years, not much else has changed about the basics of television as we know it today.

Want to Know More?

“Television under the Swastika – The History of Nazi Television”

A History of Television from the Grolier Encyclopedia

 

The Elements of a Vintage Study or Office

It occurs to me that there’s a lot of blogs and forums out there these days, dedicated to the proposition that all men are not created equal. There are those who sail merrily on their way, oblivious to everything, and there are those who have thrown out the anchors at the top of the falls, holding back with all their might, mankind’s devilish attempts to hurl them into the abyss of blandness, cookie-cutterism and lack of personality and style.

Some Sort of Introduction

Websites and blogs such as the famous Art of Manliness, and The Gentleman, and forums such as the Fedora Lounge, were created to educate people about what life, mostly for men, but also for women, used to be like, before we all got tangled up in what Hollywood and the men from marketing and advertising wanted us to look like.

Some people have seen the older ways and, in one form or another, have decided that they would like to return to them or imitate their style, in everything from behaviour and dress, to grooming and home decor.

In the 21st-century world, the ‘man cave’ has made its appearance, both in people’s homes, and as a term on the internet. It is an odious term. Yes. I have said it, and it is said.

We already have ‘study’, ‘office’, ‘den’, ‘loft’, ‘workshop’, ‘games-room’ and ‘garage’ as sanctuaries of masculinity, and as places for men and their friends to hide themselves away from others, and enjoy themselves in their own privacy, or enjoy their privacy with their chosen circle of friends.

But apparently, none of these terms sufficiently captured the essence of the ‘man-cave’, which is in itself a rather fluid term that at times seems to defy definition altogether. A man-cave can be anything from a games-room, a home-theater, a library, an office, a study, a private bar, a model-making workshop, a tinkering-room or a gym. Perhaps this is why the older terminology has been replaced by something better suited to capturing the diverse space that the man-cave has become.

But I’m kinda digressing here. Like…a LOT. I apologise…

The Actual Point of this Posting

One of the most common and popular rooms in the house, and one which may well become a person’s man-cave, is the room which in older times was marked as a study, office, or den. In an attempt to inject these traditionally masculine rooms with masculinity once more, some men have chosen to go the oldschool-route, and redecorate and redesign their studies so that they might look like the great chambers of thought and knowledge that they once were, full of books, wood, leather, whiskey and tobacco smoke.

This posting will cover the details that you’ll need if you want to try and pull off that classic, old-world man-cave study/office look from yesteryear. Those big, classy executive-style offices that you see in old houses, in period movies, and old photographs, with all the lashings of wood and leather and steel and brass, glass and soft, fluffy rugs. The traditional man’s office of yesteryear.

The Stuff You Will Need

The Desk

Every good study…has a desk. It goes without saying. But if you’re going for that old-world look, what kinds of desks should you be looking for? There are several to choose from.

The Rolltop

The rolltop desk is a traditional desk-form from the Georgian era, characterised by its curved rolling lid made of linked wooden slats. The desk typically comes in one of two styles: either with a quarter-circle curved frontage and side-panels, or with a more bendy “S”-shaped roll, such as the one pictured above. One is not necessarily better than the other; which one you want is a matter of personal taste.

The rolltop desk has plenty of space for storing little nicknacks, files, stationery and so forth, and enough space on it to keep a typewriter or a computer. Provided the machine is of the portable, laptop variety, the rolltop lid can, in most cases, be pulled down over it at the end of the day, without the top of the computer or the typewriter getting in the way.

The rolltop also has lots of little cubbyholes and pigeon-holes. These are extremely useful for things like stamps, bottles of ink, pens, paperclips, staplers, hole-punchers and other desktop equipment that you would need on an infrequent basis, but would need to access in a hurry when you did.

The Slant-Top or Bureau

The slant-top or bureau desk is characterised by its famous drop-down work-surface, which is usually supported by a pair of pull-out supports, either side of the top drawers. Much like the rolltop, this desk-form dates back to the 1700s, but remains popular with those people who like to keep things neat and tidy. Its rather small size forces you to keep clutter to a minimum, and like the rolltop, a simple flip of the lid hides everything neatly away from the sight of others.

The Secretary Desk

The secretary desk is instantly recognisable from its distinctive shape. It’s basically a bureau with a bookcase stacked on top. This is a handy desk-form if you find yourself constantly needing to flip through reference-books during your work, and you’re sick of having to trek across the study to your bookcases and back, to find the information you need. Simply stack your most-used reference-books in the case above your desk!

One of the great things about desks of this type is that the shelf at the top of the desk is the perfect place to put a desk-lamp where it will provide light, but not get in the way of your work. The upper part of the pigeonholes is also great for storing pencil-mugs, drinks and other things that you might want to access when the desk itself is closed and/or locked at the end of the day.

Rolltop and slant-top desks are almost strictly wall-desks. The backs of the desks are up against the wall, literally. Some people don’t like this. They like having a desk which they can access from all sides. What should you look for?

In this category, there are two common forms.

The Pedestal Desk

The pedestal desk is a desk-form so common that its creation goes back probably to the beginning of desk-building. It’s called a “pedestal” desk because it holds the desktop above two “pedestals” which house the drawers and storage-cupboards within. In its numerous guises and variations, the pedestal desk is the one desk-form that has survived well into the modern day.

The one small issue with pedestal-desks, and other all-round desks like this, is that there isn’t any back panel behind which you could hide wires and cables, so they can sometimes present a more messy appearance.

Particularly small pedestal desks with a narrow space between the two pedestals are often called “kneehole” desks, because the space under the desktop is just wide enough for the writer to slide in and put his knees in there. Compare the kneehole desk below to the larger pedestal desk further up, and you’ll immediately see the difference in size.

The Partners’ Desk

The Partners’ desk is without doubt, the granddaddy of all desks. They’re called partners’ desks because they’re designed to be used by two business-partners, working face-to-face, sharing one big desk, which is essentially two pedestal-desks placed back-to-back.

Partners’ desks are MASSIVE. They’re about the size of a small car and have enough surface-area to double as an airfield during times of war. I’m pretty sure that during the Battle of Britain, Churchill allowed the RAF to use his desk as a runway for Spitfires when his majesty’s airfields were bombed out of action. Yes. Their finest hour was won thanks to desk-space.

Yes, I made that up. But these desks are so big that during the Second World War, those daring R.A.F. chaps used to refer to partners’ desks as “Mahogany Bombers”. And that’s the truth!

These desks also weigh about as much as a whale after it’s gone through the krill buffet. If you’re looking for a power-desk, you must buy one of these. But be warned, they weigh a lot, and they take up a lot of real-estate. You need a BIG study, office, or man-cave, to fit this in!

Unless IKEA has invented a flat-pack version of this, you’ll never get one home in the boot of your car. You might succeed if you have a truck. Best bet is a trailer of some variety, a moving-van, or a pair of teleport-booths.

Classic Desk Accessories

Now that you’ve picked your desk, you need something to put on it. What kinds of things were common on desks 50, 70, 100 years ago? For the accessories and items that make up that classic desktop look of times gone by, read on.

The Lamp

Unless your awesomeness, sophistication and coolness is such that it generates its own, blinding glow of smug superiority, you’ll need a lamp on your desk. If you want something that will match your beautiful antique or solid-wood desk, and not some smunky piece of junk that you bought at IKEA, then you couldn’t go past a traditional Emeralite desk-lamp…

Commonly called “bankers’ lamps” because of their association with banks and their tellers, Emeralite desktop lamps have been manufactured since 1909! Talk about endurance of design! They were originally produced by the company of H.G. McFaddin & Co., in New York, U.S.A. To this day, the classic brass base and stem, and the swiveling green glass lampshade, have remained a popular choice for those seeking old-world lighting charm. The brass is shiny and reflective, increasing the amount of light, and the green lamp-shade provides a nice dash of colour!

But why is it green?

Although you can get these lamps with their shades in almost any colour, from frosty white to lemon yellow, its most common colour, and the colour which everyone associates with these lamps, is green. Why?

Emeralite lamps (note the name: “Emerald Light”) were made with green glass shades because early electric lightbulbs could be a tad overpowering (some bulbs made in the Edwardian era are still burning brightly to this day, a testament to their quality and longevity!). Placing a green shade between the light and the user softened the glare, making the light easy on the eyes while still providing enough illumination to be useful.

As bankers and accountants often had to update and check ledgers and balance-sheets, usually written in tiny script, having soft lighting that wouldn’t burn out their eyes was important. This is why the shades are green.

It’s also why those old-fashioned visors (such as those worn by bank-tellers and accountants) are green: to diffuse the light and make it less intense.

Enough with the history, where do I get one? You can find them easily at antiques shops, second-hand shops, lighting-shops and office-supply chains. The design is so iconic that there are still people manufacturing the exact same style of lamp today, over a century later. You can pick one up, brand new, for not very much money at all.

A Leather Desktop

You can’t go past the feel of real leather. Soft, cool, relaxing and smooth. And also an essential on any old-fashioned desk.

In the old days, leather-topped desks (such as the ones seen above), were considered the height of quality. The reason is not always obvious. Some people think that the leather is there purely because it’s there, and it’s there because it’s leather, and leather is expensive and if it’s expensive it’s gotta be quality and…yawn.

No.

Leather is found on old desks because it provides a smooth, soft, cushioned surface for writing. Don’t forget that until the 1950s, most people wrote with fountain pens, or dip-pens. Ever pricked yourself with the tip of a steel pen-nib? I can assure you that it hurts. A LOT.

A pen-nib is sharp enough in some cases, to literally draw blood. Since scraping such a needle-sharp pen-point on a wooden desktop would gouge marks and troughs into it, and make writing a very uncomfortable job, desks were lined with leather to give the nibs a smoother journey across the playing-field. These days, leather-topped desks are mostly purchased for their aesthetics, but if you intend to do a lot of handwriting at your desk (with a fountain pen or a dip-pen), then you should certainly buy a desk with a leather top.

Desk Blotters

What’s that, I hear you say? You can’t find a desk with a leather surface? Or they’re too expensive? Or they’ve been ripped up from years of poor use?

Fear not, intrepid study re-decorator, your grandparents already thought of a solution. They’re called desk-blotters.

Desk-blotters are those big leather pads that you see on executive desks, with the sheets of blotting-paper (yes, that’s what it is, blotting-paper) slotted into their corners. You can buy these things second-hand at antiques shops and places like that, or on eBay. Or you can buy them brand-new from homewares shops and large stationery-chains. Blotting-paper can be purchased in huge A1 sheets from places like arts-and-crafts shops, and big stationery-shops. You may need to cut the paper down to size for it to fit into your blotter, though.

Desk-blotters are handy for a number of reasons. Just like the leather desk-surface, they protect the nibs of your pens from hard, friction-producing surfaces. They also arrest any drips or spills from ink, drinks, or food (provided they land on the blotting-paper, which may be removed and changed as necessary). The blotter also protects the leather surface of the desk underneath, if you don’t want to damage it, and it muffles sound and provides stability, which is necessary for the next item on our list.

The Typewriter

You can’t possibly have a nice, classic desktop setup like what you see in the movies, without a pretty, mechanical typewriter.

Remington Standard No. 16 desktop typewriter, ca. 1933

For a machine that really pops and stands out for all the right reasons, and to match the traditional decor of the room, you’ll probably want a typewriter from the first half of the 20th century. A real vintage or antique machine with chrome and steel, and which has all those classic round glass keys with the chrome rings. Such machines ooze class and style.

However, be warned that typewriters of this style are getting harder and harder to find in working condition these days. All-steel typewriters with the flashy glass keys died out after WWII, and are almost unheard of after 1950. But if you’re looking for one (even a non-functioning one to act as a display-piece), then typewriter models likely to be found in old, pre-war offices and households include the Underwood Standard range, (Nos. 1-6), the Royal No. 10 model, the Remington Standard range (Nos. 10-16), and the L.C. Smith & Bros. Standard No. 8 model.

Be warned: A desktop typewriter of this size and vintage is EXTREMELY HEAVY. A Royal 10 weighs roughly 30 pounds. A Remington of a similar vintage weighs about twice as much. Make sure you have a STRONG desk that can take the weight, but more importantly, can handle the bone-jarring vibrations produced by the machine when it operates.

If a huge chunky desktop typewriter is too much to have on your desk, then you could get a nice vintage portable. You can choose from those made by companies such as Corona, Remington, Royal, Imperial, Continental, Olivetti and Underwood. Portables have the benefits of style, convenience, portability, compactness and smaller price-tags.

To find out more about how to buy your typewriter, read this. 

Having a typewriter in your study has many pluses. Apart from the fact that they’re extremely stylish and photogenic, a typewriter can save your ass if, for any reason, you have a computer failure: anything from a crash, to a blackout, to your printer packing up. Provided your machine’s in working order, a ribbon and a couple of sheets of fresh paper will, in a pinch, have your letter, essay, business-report or other important document done in a few minutes.

Typewriters are also handy for things like typecasting on your blog, for keeping a diary or a journal, and for running off one-off documents that you really don’t want to have to save on your computer and waste disk-space with.

To muffle any undesirable clanking from your typewriter, and to stop it from shifting around on your desk, you may like to place a typewriter-pad underneath it. In the old days, you could buy these things from any stationery-shop. They’re just thick, square pads of foam or felt that you stick underneath your machine.

If you’re using a portable typewriter, a large mouse-pad, suitably orientated, can be an excellent substitute. A larger desktop typewriter will need something that covers more surface-area, and which is much thicker, to cope with the significantly higher weight. To prevent irritating rattling, clinking or clanking while typing, remove any glass objects (jars, sets of drinking-glasses, etc.) from your desk. Even the smallest portable typewriter can produce significant vibrations.

Fountain Pens

A man who loves to write should always have a good fountain pen. Not only are they infinitely classy, they are also much smoother and lighter writers than the modern ballpoint pen. For more information about these classic writing instruments, how to buy them, how to use them, care for them and other information, there is an entire category dedicated to them, which may be found on the menus back at the top of this page, on the left side of the screen.

Inkwell or Inkstand

You couldn’t have a classic desktop setup without one of these, could you? An inkwell, or an inkstand (a pair of inkwells on a stand, with slots and spaces for pens, nibs, and other bits and pieces) was a common desktop accessory, which remained popular long after dip-pens were obsolete. Some inkstands were given away as presentation-pieces or gifts.

The traditional inkstand or inkwell that might be found on a traditional desk would’ve been made of glass, silver, or brass.

Rocker Blotter

If you have a fountain pen, then you need a rocker-blotter. Rocker-blotters, in their various sizes and styles, have been desktop accessories since the Victorian era. They can be made of almost anything, from steel to silver, pewter, brass, leather, and a dizzying array of wood-types.

Rocker-blotters come apart into two or three pieces. A strip of blotting-paper (or, in a pinch, paper-towel) is slipped over the blotter’s base and held in place by the top-plate, which in turn is held in place by the knob at the top, which simply screws down. The paper is changed as often as the blotter’s use requires.

Magnifying Glass

Every household, study, or desk should have some sort of magnifying device. For things like reading maps and small print, a standard desktop magnifying glass is often sufficient. For a magnifier that won’t look out of place in your new study’s oldschool theme, look for a glass with a silver or brass frame, possibly with a cut-glass handle, like the one pictured above. Glasses like that are heavy and solid in the hand, unlikely to slide off the desk, and provide good magnification.

Their extra weight means that they can also double as extra-classy paperweights, if need be.

A Good Drinking-Vessel

You should always have a nice drinking-vessel, either stored at the corner of your desk, or on a separate surface such as a sideboard. What it is depends on what you like to drink: fine glassware for top-quality alcoholic beverages, or, if you don’t drink alcohol, a good glass can look just as fine filled with water. If you dislike having to constantly refill your glass, look for something larger, like a traditional one-pint pewter tankard.

Relax, modern pewter doesn’t contain any lead, so it’s perfectly safe to drink out of. But if you are the suspicious type, buy a traditional-style tankard with a see-through base. Traditionally these bases were made of glass; most modern tankards have see-through bases made of plastic (although some makers do still produce tankards with traditional glass bases).

This was an innovation from Georgian times, created so that bar-patrons would notice if a Royal Navy pressman had dropped a silver shilling into their beer. Press-gangs would enter a bar and look for drinkers; accepting a shilling from a pressman was taken as your agreement to enter the Royal Navy. To trick drinkers, pressmen would drop a shilling into their tankards of beer. The drinkers would never see the shilling until the beer was all gone, by which time they were too drunk to notice it; they’d find the coin at the bottom of their mugs, and were thereby hoodwinked into joining the navy.

To beat this crooked system of recruitment, people started making tankards with see-through bottoms so that drinkers could make sure there was nothing hiding at the bottom of their booze.

If you’re really worried about people slipping stuff into your drink, get yourself one of those German beer-steins with the lids on top.

Ash-Tray

Fewer people smoke today than they did back in the 30s, 40s and 50s, but an ash-tray is a nice thing to have on your desk, even if you don’t smoke. They’re handy as receptacles for things like loose-change, keys, business-cards and other important, but small, fiddly things that you don’t want to lose accidentally. The classic man’s ashtray is typically made of either brass, steel, or cut glass.

Bill-Spike

Anyone who is in the habit of writing down dozens of little post-it notes, phone-numbers, phone-messages, and other little details on small pieces of paper on a regular basis (like me!) will certainly appreciate a bill-spike.

Commonly found on shopfront counters, reception desks and other places where receipts are wont to gather, these painfully sharp steel spikes on their metal bases are handy for keeping tabs on little bits of paper which are important enough to keep around, but not large or detailed enough to put in a folder, a book, or a drawer somewhere (where they’d probably get lost, anyway). You can pick these things up at places like stationery-chains and nick-nack shops for just a couple of dollars.

I have one on my desk, and without it, I’d forget where I put a person’s phone-number, or the address of someplace, within an hour of writing it down. Having a bill-spike is great for just poking down those flittery bits of paper that some people just have all over their bedrooms, offices and studies. Just write down your note, and poke it on down, and it won’t move anywhere until you want it to.

If your spike has a little coin-catcher, like the one in the photo (mine does), so much the better. Handy for keeping your loose change in. If it doesn’t, well, that’s what the ash-tray on your desk is for.

Letter Holder

For some people, having a steel bill-spike on their desk can be a safety hazard (if you have kids, for example). An alternative is the traditional letter-holder. Typically made of wood, brass or steel, these things range from simple one-slot holders to entire caddies that will hold letters, envelopes, incoming mail, outgoing mail, pens, pencils, scissors, stamps, paperclips, staples and oodles of other things. Handy for storing loose bits of paper.

Inbox

No, not one of those electronic things. I mean a proper inbox! Remember when they used to be made of wood? Handy for keeping documents that you’re working on, spare copy-paper and other things. If you need extra help with organisation, get a matching “outbox” too.

Stapler

You couldn’t possibly have a vintage office man-cave, without a stapler. And you couldn’t possibly have a stapler more vintage than the El Casco M5, from 1934.

Established in Spain in 1920, El Casco was originally a firearms manufacturer, producing revolvers. But the Depression hit the company like a kick in the nuts. Desperate not to keel over and die, the company turned its precision machining of firearms into precision machining of exquisite desktop accessories…which it still manufactures today. And the M5 is one of its most iconic designs: the stapler to have in any vintage office.

Other Oldschool Office Fixtures

Oldschool Storage Solutions

Pigeon-holes and filing-cabinets kinda rule the roost here. I don’t believe in things really doing double-duty. An object should have a use, and it should be used for that purpose. Having things that double up as something else can be fiddly and frustrating to some people, just as much as it can be space-saving and time-saving for others. Keep a nice old-fashioned filing-cabinet in your office or study. Two or three drawers, possibly four, depending on how much filing you need to do.

And while you’re at it, invest in some of those old beige/custard/buff-coloured manila folders, the ones made of cardboard. I find these handy because you can just write whatever you need to, on the front of the file, in big letters, to save you having to fiddle around with tags and stickers. And some more modern files don’t have surfaces or colour-selections that lend themselves well to this function. Especially handy if you have poor eyesight.

Sound System

For most men, music is a must. Whether you enjoy rock, jazz, classical, pop, Latin/South-American, or any other genre, music sounds so much nicer when it’s coming out of something that looks pretty. The same goes for listening to your favourite radio-station, whether talkback, music, or otherwise. So what can you put in your new, revamped man-space that will look nice and sound nice?

For those of us who enjoy variety, you probably couldn’t go past a Crosley-brand radio-gramophone. Records are becoming more and more popular these days, and people young and old are collecting records, buying new records, resurrecting old records, and dusting off their old collections. The Crosley record-player shown above is one of many reproduction units evoking the radio-styles of the 30s and 40s. It can tune into AM and FM radio, it can play all your records, from 33⅓ and 45, up to 78rpm, and it even has audio-cassette capabilities. Some units of this style even have slots for CDs (keep an eye out for those, if that’s what you’re after).

Some people find themselves listening to the radio more than they listen to their CD, record, cassette or even MP3-collections. Good, old-fashioned tube or transistor-radios are ideal for this. Some people say that vacuum-tube radios, of the kind popular from the 20s-40s, are the ones that produce the very best sound.

Old-fashioned tube-radios came in a number of styles. The two most common are cathedral…

…and tombstone…

…named for their curved, and rectangular/square profiles.

You can buy an antique one that’s been restored, or you can buy a modern reproduction, which will look the part, sound the part, but cost a fraction of the price.

If you have an extensive collection of CDs or records, you might want to buy an old jukebox from the 1940s or 50s…

You can buy original vintage ones, or you can buy modern reproduction jukeboxes, which are designed to play a stack of CDs, instead of a stack of records!

Seating Solutions

Don’t be a Victorian, believing that ultra-comfortable seating is immoral and rude. Every office man-cave should have a comfortable office-chair. The modern office-chair was invented in the mid-1800s, and was typified by the Centripetal Armchair:

In many ways, this was the first modern office-chair. It came with a swivel seat, rolling caster-wheels, and had models which came with additional features such as headrests and arm-rests. In fact, when it was unveiled in 1851, it was considered so modern and revolutionary that the uptight Victorians were completely horrified by it! Victorian morality dictated that such comfort and pleasure, derived from a piece of furniture, suggested relaxed, loose morals, quite shocking and improper in those days! As a result, despite its revolutionary design, the chair was a poor seller.

Fortunately, such starched, straitlaced attitudes are not so prevalent today, and you can easily go out and buy a comfortable chair without fear of immorality.

You don’t have to buy a chair as fancy as that, but any desk-chair should be comfortable and fully adjustable. If you’re going for that vintage look, older chairs were typically made of wood and/or leather. Not plastic or other materials. Chairs like these (particularly ones made of wood) are often pretty cheap and can be bought almost anywhere.

If your room is large enough, then you might also consider the inclusion of armchairs and/or a couch. Handy for visitors, or just as a place to kick back, relax, and have a nap. Or read. Or write.

A Safe Place

What better place to keep things safe than…a safe?

Of course, there are other alternatives, but not all of them are particularly effective. Take those pesky “personal” safes that you can buy: if it’s small enough to carry home, it’s small enough for someone to steal. And therefore…useless.

What kind of strongbox you buy depends on what you want to keep safe. Some desks come with lockable drawers. If you have a vintage desk with the keys intact, you could use that as your safe. Nobody’s going to try and carry away an entire desk. Some filing-cabinets also have the same feature, for storing important documents.

But if these two options aren’t suitable, and having a floor or a wall-safe isn’t an option, then your best bet is to get an actual, honest-to-goodness safe. Those old-fashioned steel ones that Wile E. Coyote loves to drop on the Road Runner. A safe like that in working condition, with a known combination, will keep your valuables of all kinds…well…safe!

Of course, these safes come with a few strings attached: they take up quite a bit of space, and they are extremely heavy! Be glad that some of them come with stands and wheels! But they are handy for storing stuff that you want protected. Now nobody is going to be running off with your precious collection of ‘gentleman’s literature’.

Coat-Tree

A classic, bentwood tree is always handy. This one belongs to me. Traditionally, hats were placed on the top branches, coats on the lower branches, and things like umbrellas, walking-sticks and canes were placed in the ring around the base. Even if you don’t own a stick or a hat, these things can still be handy as a place to dump your coat when you come in out of the cold. Better than chucking them on the couch, anyway.

Open-Grille Fan

Back in the old days, when health and safety regulations were not what they are today, almost every office or study would have one of these perched somewhere around the room, either on the desk (if there was space…unlikely), or on a stand, pedestal or side-table. Old-style open-grille fans are stylish, easy to clean, and keep you cool the old-fashioned way. Just don’t put your fingers anywhere near one when it’s running, and keep the kids away from it. Or better yet, you could install ceiling-fans. A good collection of paperweights (or paperweight stand-ins) is important when you have a fan like this in your room.

Rotary Telephone

The old, rotary-dial telephones of the 20s and 30s are iconic, and no vintage office, if you’re trying to recreate one, would be found without one. You can still buy original telephones in working order. Simply plug it into the wall, and let it ring! Some of these old phones have bases and bodies made of steel, so they can be surprisingly heavy. But the good news with such solid construction is that after a heated conversation, you can literally slam down the handset without damaging the unit.

Some Concluding Remarks… 

These are more or less the bare-bones essentials that you’ll need to buy to pull off the look of a vintage office or study, if that’s the angle for your man-cave or home-office redecoration. You can vary them around a bit and mix them up, but in combination, they’ll turn almost any room into a replica office or home study, straight from 1935.

Any other elements you add in are personal touches to add your own little spin to things. This is my vintage desktop at home:

As you can see, most of the things listed in this posting can be found there. It’s an ongoing project, inspired by my recent purchase of the banker’s lamp in the corner, which in-turn, inspired this posting, for any guy looking to dress up his study or office in a more interesting, vintage style.

 

The History of the Modern Toilet

The toilet. The latrine. The commode. The privy. The water-closet. The closed-stool.

Whatever you call it, for centuries, mankind has always needed a place to get away from it all. Since the dawn of time, man has required the use of a place or contraption for the peaceful, if not always quiet, ejection of bodily waste. These days, that place is the modern flushing, sit-down toilet. But where did it come from?

Before the Toilet

Damn near every house on earth…has a toilet. It’s that one indispensable invention that none of us could do without. We could live without fridges, TVs, computers, telephones…even electric lighting…but not the toilet.

But the toilet is an amazingly modern invention. What happened before then?

Primitive Toilets

For centuries, a toilet was little more than a hole that you dug in the ground. Toilet-paper was whatever you could lay your hands on…usually leaves. But to give the ancients some degree of credit on hygiene, various ancient societies had their own lavatorial inventions over the centuries. Civilisations such as the Ancient Egyptians and the Ancient Romans had toilets that worked with running water and which served to keep the populace clean, satisfied and healthy. In Ancient Rome, public bathhouses usually had toilet-chambers available for public use. Ejected matter would end up in the channel beneath the communal toilets, which would then be flushed away periodically by large volumes of water expelled from the public bathhouse nearby.

Medieval Toilets

Societies such as the Ancient Greeks, Egyptians and Romans all had rather sophisticated ideas and inventions to deal with the issue of human waste. However, with the fall of the Roman Empire, the Western World went back to shitting in a hole.

In the Medieval Era, concepts about personal hygiene were virtually nonexistent. A total lack of understanding about how disease was spread, and the dangers of untreated sewage, caused sanitation nightmares that would send all those Health-and-Safety officers running for cover…sometimes literally!

In the great cities of Europe, such as London, Paris, Prague and Rome, toilets took several steps backwards from the Roman era several hundred years before. In a typical medieval town or city, a toilet was a seat with a hole cut into it that projected out of the side of the building. Any released feculence would just drop into the streets below…bad luck if you were out for a walk. The streets of medieval cities were often filled with several feet of compacted, foot-trodden sewage, weeks in the making. The smells were naturally abominable…but at the time, no link was made between this and any sort of danger to public health.

If your toilet didn’t eject out the side of the building into the street, then it might eject out into a river or stream. Or the muck in the streets would end up being dumped into the nearest river anyway. This led to incredible and unspeakable pollution…and poisoning! Because people used to drink that water, too! And wash their clothes in it. And bathe in it…and cook with it…Water in medieval times was so polluted and foul-tasting that almost everyone drank wine or beer instead. Even kids! In fact, kids would drink ‘Small Beer’ (with a lower alcoholic content), while their parents would drink ‘Big Beer’ (which naturally, had a stronger taste and higher alcohol content).

Cesspits

But what if you couldn’t get your toilet to jut out over the street? Or over a river or stream? Or down into one of the few sewers that would have existed in the medieval world? What then?

Well, then you would make use of the cesspit.

A cesspit is basically a medieval septic tank. It’s the huge chamber or room underneath your toilet into which all your bodily waste would be dumped. Every few weeks…or months…you had to get the thing emptied, just like with septic-tanks today. And to get it emptied, you had to go out and find the chap with quite possibly the worst job in history.

The gong-scourer.

‘Cess’ and ‘gong’ are old English words for sewage and dung. The gong-scourer was the poor bastard who emptied out your cesspit.

Although in all honesty, he wasn’t that poor. Being a gong-scourer was a job that was literally swimming in shit. It was a filthy, hazardous, backbreaking job. You would have to shovel out tons of excrement from all the toilets and cesspits all over town, and you had to do this every single night. Because the work was so obviously revolting, not many people would do it. So wise city authorities would pay gong-scourers a pretty princely wage in return for their vital and revolting work. How much?

18d for every 1 ton of waste removed.

That’s 18 pence (A shilling and a half) for every ton of waste.

This in an era when the average wage of a working man in London was sixpence a day.

Of course, for some gong-scourers, even money wasn’t enough. A chap named Samson, royal gong-scourer to Her Majesty, Queen Elizabeth I of England, was paid half in money, half in rum!

Privies and Closed Stools

In the medieval world, there were two toilets available to you. The most common one was the privy. Coming from the Latin word for ‘privacy’, the privy was a removable seat over a cesspit. Any business-transactions done in a privy would end up in the cesspit below. To clear out the pit, the gong-scourer would remove the seat and climb down into the muck to shovel or bucket it out. Not a fun job.

The other, slightly more comfortable and dignified toilet was the Closed Stool. If you’ve ever wondered where the word ‘stool’, meaning feces, comes from…well…take a guess.

The Closed Stool was similar to a modern toilet-chair. It was a box with a hole in it. Inside the box was a large bucket. After the daily interaction with the stool, the bucket was removed, emptied, washed and replaced inside the stool. An altogether cleaner and more comfortable toilet experience…what you did with the waste when the bucket was full was another matter.

Inventing the Modern Toilet

As you may have guessed, after the downward spiral from the Ancient world, mankind was living in a world of muck and filth. From the Dark Ages up to the 1800s, almost all toilets were of the kind described above. And they only provided temporary relief from one of our oldest problems.

Where do you put it?

The big problem was that sewers…really effective sewers…simply did not exist. Even into the 1800s, big rivers such as the Seine and the Thames were little more than huge open drains! What few public sewers there were would be choked, blocked, overflowing and completely unable to handle the waste of the millions of people who flooded into the cities and towns of Europe during the Industrial Revolution. Indeed, the idea of modern sewers wasn’t given any serious, practical thought until the 1860s, in London. It was then that Sir Joseph Bazalgette designed and helped to build the world’s first modern sewerage system underneath the city of London.

So, where did that leave the modern toilet?

Ancestors to the Modern Toilet

The first truly modern toilet, of a kind that we might possibly recognise today, was actually invented in the 16th century – in 1596, to be precise.

The chap who invented it was a man named Sir John Harington. Being godson to Queen Elizabeth I, he probably had the time and money to invent what was effectively the world’s first modern toilet. He called it the ‘Ajax’. He installed such a toilet in his house and then built another one for the Queen. The Ajax wasn’t perfect, but it did work…kinda.

The toilet was installed over a sewer-drain. The cistern behind the seat was filled with water from buckets. At the end of the episode, a plug was pulled. Water flooded from the cistern into the bowl. Then, another plug was pulled and the entire contents of the toilet-bowl were flushed out into the drain below. Effective, but without running water, the cistern had to be refilled manually each time.

The next instance of a modern-style commode does not make its appearance until the 1700s. And just like back in the 1590s, this fantastic new invention, the flushing toilet, was to be used only by a queen. But this time, not a queen of England, but of France.

Marie Antoinette, wife of King Louis XVI.

The setting is the royal palace of Versailles, about 20km southwest of Paris.

The famed Palace of Versailles is the last word in luxury. Huge banquets, luxurious chambers, flashy clothes, powdered wigs and the world-renowned Hall of Mirrors. Heaven on Earth! Right?

Eh…No.

The truth was that, for all its luxury and obscene opulence, the Palace of Versailles was little cleaner than a sewer! Animals were allowed to wander at will through the stately halls, and relieve themselves as they pleased. And it wasn’t just animals, either. For a place as expensive and luxurious as Versailles, there were almost NO toilets…ANYWHERE! Courtiers, servants, guests and visitors were compelled to relieve themselves wherever they could. And I literally mean…WHEREVER. Underneath staircases, behind curtains, in the dead-space behind doors, or into chamber-pots, the contents of which would then be ejected out the window into the palace courtyard below.

Classy.

Amazing as it seems, there was actually a TOILET in Versailles. A real, honest-to-goodness flushing toilet. But it was for the use of ONE person ONLY. And that one person was the Queen of France herself: Marie Antoinette. The toilet (which can still be seen in Versailles today!) was one of the few plumbing fixtures in the entire palace, and was secreted away in the deepest, darkest, most private chambers of the queen’s royal apartments. Apartments which only her most trusted and intimate of servants would ever have seen. Most people didn’t even know the toilet existed!

Over the next two hundred-plus years, mankind improved on Sir John’s design. Eventually, in 1851, the world’s first public…flushing…toilets, were unveiled!

Where?

In the Crystal Palace in London during the Great Exhibition. Although the toilets were public, they were not free – For the privilege of emptying your bowels in the latest modern conveniences, you had to give the bathroom attendant one penny before he granted you access to the newfangled ‘flushing toilet’.

Probably the most famous name associated with the history of the toilet, however, is that of an Englishman. A plumber with the unfortunate name of Thomas…

…Crapper.

Contrary to popular belief, Thomas Crapper’s name did not lead to the coinage of the term ‘crap’ meaning to take a dump. The word ‘crap’ actually comes from the Latin word ‘Crappa’, meaning ‘chaff’, the leftover husks from stalks of wheat (as in “to separate the wheat from the chaff”). So literally, “Crap” is the leftovers. The stuff we leave behind. The stuff we reject and ignore. Crap.

Why is it called a Toilet?

The word ‘Toilet’ comes from France. Originally, “toilet” referred to one’s personal hygiene and grooming. To “attend to one’s toilet” meant to keep oneself clean. This covered everything from bathing, brushing your teeth, combing your hair, shaving and washing your face, to relieving yourself of bodily waste. The utensils and products used in the execution of these procedures were known as “toiletries”. Even today, when you go on holiday, you still take a “toiletry bag” with you.

With the invention of the modern commode, the word ‘Toilet’ moved from the more general term meaning grooming, to the more specific one, meaning a receptacle for bodily waste. And that has been its main definition ever since.

The Victorian Toilet

The toilet as we know it today really came into its own during the second half of the 1800s. In big cities with new, enlarged, free-flowing sewer-systems which were now capable of handling large volumes of waste on a daily basis, plumbed toilets were finally practical. And with this practicality, came a surge of toilet-manufacturers.

Designs varied slightly from maker to maker, but all toilets were made up of a bowl, a C, U, or S-bend, to trap water and prevent the rise of sewer-gas, a seat, and a cistern for the storage of flush-water. For something that was essentially a self-emptying chamber-pot, the Victorian toilet was decorated with surprising artistry, and both the exterior and interior of a toilet-bowl were just as likely to be covered in blue-glaze paint, flowers, forest-scenes, water-scenes and nature scenes as the plates of your finest bone china dinner-service.

Because modern toilets descend from the toilets of the Victorian era, you can still install a Victorian loo in your house today. And it would function just as well as a modern one. It’s the same technology, after all. Just a little bit fancier.

There’s been little change in toilets since the Victorian era. There are now more water-efficient ones, ones which are easier to clean, more comfortable ones, even those insane Japanese ones that do everything for you, and then some. But the toilet as we know it has essentially reached the end of the road when it comes to development.

Want to Know More?

A lot of the information gleaned for this article was taken from the Dan Snow documentary series “Filthy Cities” (“London”, “Paris”, and “New York”), and the Dr. Lucy Worsley documentary series “If Walls Could Talk”. You can find these on YouTube.