Machine learning is ubiquitous in our daily lives. Every time we talk to our smartphones, search for images or ask for restaurant recommendations, we are interacting with machine learning algorithms. They take as input large amounts of raw data, like the entire text of an encyclopedia, or the entire archives of a newspaper, and analyze the information to extract patterns that might not be visible to human analysts. But when these large data sets include social bias, the machines learn that too.
A machine learning algorithm is like a newborn baby that has been given millions of books to read without being taught the alphabet or knowing any words or (more…)
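A minimal illustration of the point above (my own sketch, not from the article): even the simplest pattern-extraction step, a co-occurrence count over a toy corpus, will absorb whatever skew the text happens to contain and report it back as if it were a fact about the world.

```python
# Toy corpus with a gendered skew: doctors co-occur with "he",
# nurses with "she". The counting code has no notion of bias;
# it simply reproduces the pattern in its training text.
from collections import Counter

corpus = [
    "the doctor said he would review the chart",
    "the doctor said he was running late",
    "the nurse said she would check the patient",
    "the nurse said she was early",
]

def pronoun_counts(word, sentences):
    """Count which pronouns appear in sentences containing `word`."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t in ("he", "she"))
    return counts

print(pronoun_counts("doctor", corpus))  # Counter({'he': 2})
print(pronoun_counts("nurse", corpus))   # Counter({'she': 2})
```

Real systems use far richer statistics (word embeddings, neural language models), but the failure mode is the same: the skew in the input becomes the "knowledge" in the output.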
Given that we see the world through two small, flat retinae at the backs of our eyes, it seems remarkable that what each of us perceives is a seamless, three-dimensional visual world.
The retinae respond to various wavelengths of light from the world around us. But that’s just the first part of the process. Our brains have to do a lot of work with all that raw data that comes in – stitching it all together, choosing what to concentrate on and what to ignore. It’s the brain that constructs our visual world. (more…)
Unidentified woman taking her own photograph using a mirror and a box camera, circa 1900. Scanned from the original 4×5 inch glass negative.
Three years ago, on the 18th of November, 2013, the Oxford English Dictionary named "selfie" its Word of the Year.
The term was coined by an Australian who took a photo of himself and posted it on an ABC online forum, writing: "Um, drunk at a mates 21st, I tripped ofer [sic] and landed lip first (with front teeth coming a very close second) on a set of steps. I had a hole about 1cm long right through my bottom lip. And sorry about the focus, it was a selfie".
Today the term crops up with the regularity of death and taxes in news feeds across the world, and like death and taxes, it releases myriad conflicting contrails. (more…)