There is now a global pandemic over a new virus commonly known as «coronavirus» (technically, it’s Severe acute respiratory syndrome coronavirus 2, or SARS-CoV-2). The first case was identified in Wuhan, China, and it has since evolved into a global health issue. Around the world, governments, businesses and public places have drastically altered the way they work, including:
- Reducing the scope of service: some businesses stopped having a seating area or imposed restrictions on their capacity (the exact limit varies, but it’s not uncommon to find places that will serve only 50% of their total seating capacity). Some businesses transitioned completely to delivery.
- Requiring face masks to serve people. Most offices and businesses require everyone coming in to wear a face mask to reduce contagion, reserving the right to refuse service to those who don’t meet this requirement (more on this later).
Despite its status as a «developed nation», there’s a significant population in the United States who refuses to receive vaccines against COVID-19, the disease caused by the aforementioned virus. The reasons behind this vary, but include arguments for bodily autonomy against a government mandate; suspicion about the safety of the vaccine; belief in acquiring natural immunity to the disease; debunked conspiracy theories stating that the disease is not real; and debunked conspiracy theories stating that there are microchips in the vaccine. There is a very vocal group—which I hope is a minority—that has compared the above restrictions to the Holocaust in Germany.
But the saddest thing is that this reaction is not completely unexpected. In 1980, Isaac Asimov will write an essay titled «A Cult of Ignorance»—itself building on his earlier essay «The By-Product of Science Fiction» (1957)—where he will lament the rise of anti-intellectualism in the U.S.A. I cite this not because it’s an isolated lament, or even the first, but because it’s among the most cited. There are already books discussing anti-intellectualism in America in your time, but this discussion will not reach enough people to reverse the trend.
That is not to say that the whole population of the United States denies evolution and climate change, or that everyone still believes that dinosaurs and men coexisted less than 10,000 years ago1. Curiously, some of the best universities and research institutes are in the U.S. But the American population will cling to anti-intellectual views, policies and institutions in enough numbers, with enough force and loudly enough for the rest of the world to see the country as one already past its prime. Indeed, even when I was younger the caricature of the «ignorant gringo» was not news but a cliché.
Is America the dumbest country in the world? I’m sure it’s not, but it’s certainly the loudest. That’s why even though one can find analogues of American stupidity, racism, failing infrastructure, partisan politics and cults of personality all around the world, the focus is on America because, well, it’s the easiest to see from a distance.
The year 1977 will see the release of «Star Wars», a film by a relatively new director called George Lucas. This film will be the cornerstone of one of the most famous media franchises ever, spanning at least 12 films in about 40 years, not counting a ton of TV series, books (original stories and film adaptations), comics and video games.
As of 2021, the media franchise is the fifth-highest-grossing of all time. The highest-grossing media franchise is Pokémon, which started as a small video game about collecting monsters that could fit in a small ball in your pocket (hence the name, Pocket Monsters). As I’m writing this, the video games in this franchise are finishing their eighth generation, and the beginning of the ninth was just announced for late 2022.
The Star Wars film series is the second-highest-grossing of all time, just below the Marvel Cinematic Universe, or MCU (yes, those comics became a series of movies outselling everything else).
Speaking of Star Wars and the MCU, both are now part of The Walt Disney Company, arguably one of the largest media companies in the entire world. Yes, the company founded by that charming man appearing on The Wonderful World of Disney went on to acquire entire sports and general-entertainment networks, video game studios and two of the largest media franchises in the world. Oh, and The Walt Disney Company also has Pixar, a studio you still don’t know by that name…
In 1979, Lucasfilm—the production company that made Star Wars, see above—will found the Graphics Group as part of its computer division. Long story short, this group will become Pixar, a studio that will go on to create the first fully computer-animated feature film, Toy Story, in 1995.
Now, in 1975 it’s possible that you think using computers to make movies is cheating. That opinion will last for a while, but not forever: Tron will be released in 1982 with groundbreaking visual effects created by computers, but it won’t be nominated for an Academy Award precisely because of that prevailing opinion. Nowadays, computer-animated films are common—if mostly created as children’s and family movies—and generally enjoy critical and commercial success, barring atrocities like The Emoji Movie. People in 1975 might think that animating movies with a computer is cheating, but this is not the case: creating these movies takes years of work by incredibly talented people with deep cross-disciplinary knowledge and expertise. Pixar itself has won numerous accolades not only on account of its technical achievements, but for the great stories it brings to the world.
By 2001, computers will have shrunk in size and cost so much that individuals can own them, the smallest ones fit in a briefcase2, and they will be the main working tool of most white-collar employees. Computers will number in the millions and billions in the years to come. While this is a great achievement, a larger one is connecting computers to one another remotely so that they can exchange information across very long distances in a decentralized, web-like configuration. We call the collection of computers thus connected—along with the protocols that link them—the Internet.
The details of what the Internet really is and how it works are too much to fit in a simple letter like this, but it’s safe to mention that it began life as a way for academics to exchange information and papers among themselves, and it greatly succeeded. But the idea of free exchange of information started to outgrow its university walls and began extending to the public at large. As the Internet became available to more and more people, we started trying to put more and more of our lives on this large network.
One such idea was to create a new encyclopedia, a central repository for the sum of human knowledge. Now, the basics of this idea are not new at all. When I was a kid I used to read science fiction stories that spoke of a large machine living underground that would hold all the knowledge of the world, and one could get answers from it as long as one knew how to phrase the question. A modern-day Oracle, if you will. The encyclopedia that we have today is both the same and nothing like that.
First, the differences. As mentioned, computers are incredibly small these days and this massive encyclopedia fits comfortably in a storage chip the size of a thumbnail.3 Second, this encyclopedia (called Wikipedia, for reasons beyond the scope of this letter) is mostly written in human language, so there’s no need for specialized knowledge to access its information beyond basic literacy.4 Third, Wikipedia’s style is mostly the format you would expect of the Britannica: informative, objective facts drawn from trusted sources; it is not a special-purpose machine. This is both a strength and a weakness of this encyclopedia.5 Fourth, this is a collaborative effort, one of the largest we’ve made as a species. The articles cover lots of topics that a traditional encyclopedia would never cover, owing in part to the cheapness of storage space and the explosion of information created in the past few decades. Wikipedia exists in hundreds of languages and is created by the collaborative efforts of men and women around the world.6
As for similarities, Wikipedia will indeed be a massive effort and an enormously useful tool for answering all sorts of questions. Even now the project is evolving to become more abstract, so that it is readable by humans and machines alike. It will strive to keep itself free from any governmental or private control. It will be considered a large public good, much like a park or a library.
Vinyl records kind of died out, but are making a comeback. They are, of course, not the cultural powerhouse they were decades ago. But cassettes came and mostly went, and Compact Discs came and… well, they aren’t dead yet, but in 2020 the Recording Industry Association of America will report that vinyl records outsold CDs for the first time since the 1980s.
But the music industry has changed in deeper ways—as most media industries have—and not just in the preferred distribution format.
With the advent of personal computers (see the point above), ideas of free exchange of information, and no explicit laws or legal standing on the new technology, people will start trading music through computers much in the same way they did before with mixtapes, only at a massive scale.
The internet will allow people to exchange all sorts of data with almost no restrictions, and the rising capacities of computers meant that people could store enormous amounts of music—and other media, but I’ll focus on music—on their computers. Storing one hour of music and storing the complete works of Mozart took exactly the same physical space: the computer itself.
Media companies will see this and instead of embracing it as a new way to distribute music, will focus only on the copyright infringement aspect of it and will try multiple times to ban the technological mechanisms through which people share music. They will fail.
And so piracy will grow. Not the kind imagined by Robert Louis Stevenson, mind you. «Piracy» is the common name for the copying and sharing of files without explicit permission from the copyright holder. Music, movies, books and video games will be digitized and shared through unsupervised channels of the internet.
As I mentioned, this shift in consumption could have been the start of something great for the industry at large. And now it is, but it took us all a long time to get here. For a while, media companies at large will decide that distributing their content over the internet in any way, shape or form is just inviting piracy and the doom of all musicians in the world. There will be many opinions on how the record companies stalled and failed to see the revolution happening before their very eyes.
Now in 2022 we can confirm what we knew then: that the Apocalypse predicted by the companies was a boogeyman all along. But they still tried to stop it.
But then, in 2007, something very interesting will happen: Radiohead, a British band of renown, will release an album called «In Rainbows» and put it officially on the internet for the price of «pay what you want». In other words, they will allow people to legally get their new album for free if they so desire.
The most credible sources say the album was a financial and critical success, even if the majority of users decided to pay nothing for the album. Although there’s disagreement on how much exactly the band made, it marked the beginning of a new era.
Radiohead wasn’t the only band experimenting with gratis downloads. Nine Inch Nails will also release a set of four albums titled «Ghosts I–IV» and make them available on the internet in different formats and at different price points. The cheapest of these will be the first nine tracks for free.
A few years after these and many other experiments with downloadable media, entire companies will appear to transmit music or movies to your computer, legally. They will usually charge a relatively low monthly fee. These companies will be known under the umbrella term of streaming services and will grow to the point of becoming not just distributors, but creators of new media as well.
Video games, as mentioned, will become a massive industry rivaling and surpassing the movie industry. Video games will expand in scope so that the prospective gamer—the common term for someone who enjoys video games—can play games most anywhere with the right apparatus. Some games are produced by very small studios and can be enjoyed in a few minutes, while others are produced by massive multimedia studios and can take hundreds or thousands of hours to complete.
The internet will also allow people to play video games with each other across massive distances, and not just simple games such as chess, but entire simulations of fantasy combat with several dozen people playing together at any one time.
Video games will also see the rise of a new class of entertainer. Much like TV, radio, movies and sports, there will come people who play video games for a living, and most of them fit into one of two categories.
The first of these categories is the professional gamer. Just as there are professional sports players, there will come people so good at playing competitively that they can play in organized leagues, under professional teams, often with sponsorships. While this is a small group by its very nature, there’s enough money in it to attract a good crowd—and talent—from all around the world.
The second of these categories will be known as streamers. While they may be very good at playing video games, their trade is closer to that of the movie actor than the football player. These people will transmit live video of themselves playing all sorts of games, and their success often depends on a combination of technical mastery and sheer entertainment value for their audience. They command the attention of dedicated audiences, which makes them valuable to video game companies, who often sponsor them to promote an upcoming game or event.
Chess will be considered—in a way—the holy grail of computing right until the 1990s. Before then, the general consensus will be that a computer will never be able to defeat a human champion at chess. But in 1997 a match will be played between «Deep Blue»—a supercomputer—and Garry Kasparov, the World Chess Champion, and the supercomputer will win.
But the computer itself has absolutely no human features at all: it mostly looks like a couple of black boxes. The idea of humanlike robots will not take hold, and an interesting shift in perspective will happen: we will use the word and concept of «robot» for something subtly but importantly different. Deep Blue, as mentioned, will not look like a human. The achievement of defeating Kasparov will come from the way it’s programmed, which itself is the result of very interesting mathematical breakthroughs that I will skip over for now for the sake of simplicity.
At some point between that7 and today, the word «robot» will shift to mean the internal programming of the computer rather than its physical human-like form. Thus, today we would say that Deep Blue is a chess-robot of sorts. This nomenclature will extend to all sorts of tasks that can be automated, and so we will have chat-bots, game bots and even spambots.8
After the Deep Blue versus Kasparov match, the general belief will be that the game of Go is the next frontier to test the ability of computers to «think». This is because, despite the best efforts, the best computers in the world at this point cannot defeat even an intermediate Go player. And so Go will become the next frontier of «what humans can do better than computers».
But again it won’t last forever, and this time it will come as part of greater changes.
In 2015, a computer program will beat a professional Go player—Fan Hui—on a full-sized board without handicap, five games to none. However, the important part for 1975 is that this time the computer will rely on a whole different set of tricks than Deep Blue and—interestingly—the basis for these tricks is mathematics from the 1950s and 1960s.9
Deep Blue worked with a «book» of known chess openings and clever ways of evaluating the best possible move for any given position on the board. In a way, its programming was close to what good chess players do: trying to «see ahead» the possible moves an opponent might make in response to each candidate move. However, Deep Blue—and many of its successors—put numbers to this process so that the answer is mostly quantitative.
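That «seeing ahead and putting numbers to it» idea can be sketched in a few lines of modern code. Everything below is a toy invented for illustration (the tiny «game», the function names, the scoring); Deep Blue’s real program was vastly more elaborate, with opening books, special-purpose hardware and clever pruning of the search.

```python
# Minimax: look a few moves ahead, score the resulting positions, and assume
# both players pick their best option. All names here are illustrative.

def minimax(position, depth, maximizing, moves, play, evaluate):
    """Best achievable score from `position`, looking `depth` moves ahead."""
    candidates = moves(position)
    if depth == 0 or not candidates:
        return evaluate(position)  # reduce the position to a single number
    scores = [minimax(play(position, m), depth - 1, not maximizing,
                      moves, play, evaluate) for m in candidates]
    return max(scores) if maximizing else min(scores)

# Toy «game»: players alternately add 1 or 3 to a running total; the first
# player wants the final total high, the second wants it low.
best = minimax(0, 2, True,
               moves=lambda p: [1, 3],
               play=lambda p, m: p + m,
               evaluate=lambda p: p)
print(best)  # 4: add 3, then the opponent is forced to add at least 1
```

The same skeleton, scaled up with chess rules, a handcrafted evaluation function and aggressive pruning, is the backbone of classical chess engines.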
The underlying processes of AlphaGo—the machine that beat Fan Hui—and of its later refinement AlphaGo Zero are rooted in Machine Learning, a subset of research in Artificial Intelligence.10 In 1975 some of this research already exists, but it will see a sharp decline until the late 1980s and early 1990s.11 The main difference here is that AlphaGo Zero will never be «told» or «taught» what good moves in Go are. The bare-bones programming of AlphaGo Zero will contain little more than the rules of the game and «how to learn» from itself.
I wish to restate the above because it’s emblematic of the achievement. AlphaGo Zero will not be considered a living being, and the phrase «how to learn from itself» might be misleading in 1975. What it will do is create its own way of evaluating a Go board rather than have humans create one for it. AlphaGo Zero will play millions of Go games against itself and analyze the results to refine its own way of playing.
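To make «learning from self-play» concrete, here is a toy sketch in modern code. The game (a one-pile Nim where players remove one or two stones and whoever takes the last stone wins) and every name in it are inventions for this illustration; AlphaGo Zero used deep neural networks and Monte Carlo tree search, not a small lookup table. The shape of the idea is the same, though: the program is given only the rules and discovers good moves by playing against itself.

```python
import random

random.seed(0)
# One table entry per (stones left, stones to take); all values start at zero.
Q = {s: {a: 0.0 for a in (1, 2) if a <= s} for s in range(1, 11)}

def pick(state):
    """Mostly play the best known move, sometimes explore a random one."""
    if random.random() < 0.3:
        return random.choice(list(Q[state]))
    return max(Q[state], key=Q[state].get)

for _ in range(20000):  # play many games against itself
    state = random.randint(1, 10)
    while state > 0:
        move = pick(state)
        nxt = state - move
        # Taking the last stone wins (+1); otherwise a position is worth the
        # negation of the opponent's best continuation from it.
        Q[state][move] = 1.0 if nxt == 0 else -max(Q[nxt].values())
        state = nxt

best_from_10 = max(Q[10], key=Q[10].get)
```

The table fills itself in purely from the outcomes of the program’s own games; no human ever tells it that leaving the opponent a multiple of three stones is the winning strategy, yet that is exactly what it ends up playing.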
The team behind AlphaGo Zero will not necessarily create all of these techniques.12 They will build upon a growing body of research and practice focused on Neural Networks, currently thriving because of growing computing power, the high volumes of data available to train models, the increasing complexity of theoretical models that can now be put to the test, and many other factors. AlphaGo Zero is far from the first success of this approach.
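To give a taste of how old this lineage is, here is a sketch of the perceptron learning rule from the 1950s (see the footnotes), an ancestor of today’s Neural Networks: nudge the weights toward every misclassified example until the classes are separated. The training data (the logical AND function) and every name here are choices made for this illustration, not anything from the original paper.

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Learn weights for a single artificial neuron.

    samples: list of (inputs, target) pairs with target 0 or 1.
    """
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # 0 when correct, +1 or -1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND: the output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

A single neuron like this can only separate classes with a straight line; stacking many of them in layers (the step taken decades later) is what leads to the Deep Learning mentioned in the footnotes.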
The Curse of the Bambino will not be lifted until 2004, when the Boston Red Sox will win against the St. Louis Cardinals. And the Curse of the Billy Goat will not be lifted until 2016, when the Chicago Cubs will win against the Cleveland Indians. Funny thing is, the Curse of the Billy Goat will be referenced in «Back to the Future Part II», a 1989 film where the protagonist travels through time to the year 2015, and one of the things he sees is the news of the Chicago Cubs winning the World Series (thus coming very close to a correct prediction).
I admit this was conceived as a small list of quickly explainable items for someone in 1975 but, as it happens with me and research, it quickly grew and grew to include some of my opinions and pet topics.
As I wrote more and more, I realized that I wanted not just a list of shocking statements, but a small overview of the major societal changes that have happened since then. It is not, as mentioned at the beginning, a complete overview of things that have changed drastically since 1975, and I welcome additions to this list, whether by asking me to add them here or—even better—by writing your own.
See Newport (2014) who notes «More than four in 10 Americans continue to believe that God created humans in their present form 10,000 years ago, a view that has changed little over the past three decades.»
Well, that’s not entirely true. By the 90s there will be a gadget called a Personal Digital Assistant, or PDA for short. These fit neatly in the palm of one’s hand and are useful for taking notes and several other tasks. But by and large, the general public doesn’t refer to them as computers, and they will be phased out by the 2010s by an even stranger type of device that we collectively call «smartphones». These combine a portable telephone with photo and video cameras shooting in color, among a lot of other useful stuff. They deserve their own bullet point.
As of this writing, the «wikipedia (english) all maxi» Kiwix package is listed as 87 Gigabytes. There are already 128 GB MicroSD cards. Whether that is the appropriate storage medium for a Kiwix project is left to the reader as an exercise.
This is not to say that merely knowing how to read is enough to parse and digest the information in an appropriate way. Thus, mere access to the information requires basic literacy, while knowing how to discern, analyze and extract useful knowledge out of media is advanced literacy. I don’t count this particular point as different from 1975, because this separation has existed for quite some time. If anything, advanced and internet-related literacy are harder now owing to the sheer amount of information, but the skill is mostly the same as it was 50 years ago.
I don’t want to extend this excessively, just need to make a note here of the discussions that have happened in the Wikipedia and Wikimedia communities the last few years: is an «Encyclopedia» the best repository for human knowledge? What happens with the knowledge that is not, has not been, and probably will never be consigned to traditional, academic primary sources? The Encyclopedic format is great for many things, but it is still a product of its time and thus still prone to colonialist thinking. This is a much larger topic that doesn’t concern the people of 1975, but bears mention if this node is alive in 50 years.
More men than women, unfortunately, in such disproportion as to impact Wikipedia with a male-centric bias.
Or maybe at an even earlier point.
You might have seen a British comedy troupe called Monty Python and a particular sketch set in a café where most meals include Spam in some form. Through a series of interesting choices, the term spam is now a technical term in computing as a homage to that sketch.
The perceptron algorithm by Rosenblatt (1958) is arguably the first time a neural network like the ones we use today was described (pattern recognition and two layers of interconnected neurons). This of course is open for debate, but a line must be drawn somewhere and I’ve decided on this one. See Wikipedia contributors (2022p) for more.
Specifically, AlphaGo Zero uses Deep Learning techniques, which are only a subset of all Machine Learning techniques. In set-theoretic terms, all Deep Learning is Machine Learning, but not all Machine Learning is Deep Learning. More on this later.
Again, open to debate. See Wikipedia contributors (2022a) for an overview.
In the sense that the breakthroughs made by Silver et al. (2017) are mostly in the specific application of existing techniques and not the «invention» of anything new. This is not meant to diminish their efforts, and it is a massive simplification of the hard theoretical and practical work that went into making AlphaGo Zero the first AI that beat humans at Go. But explaining the breakthroughs of this particular study is beyond the scope of this exercise in imagining how to communicate current events to someone living in 1975.