Is AI (Artificial Intelligence) a gift or a curse?
If technology is able to zoom in and out, offering ever more micro and macro insights into human behavior and impact, how can we ensure that insight will translate into ethical action?

You don’t need to be a sci-fi wizard to remember Avatar. The 2009 fantasy blockbuster was equally entrancing for Trekkies and schoolteachers alike. In the epic fantasy film, writer and director James Cameron envisioned a new world inhabited by an alien species known as the Na’vi, who lived in perfect harmony with their deity, Eywa, on an exoplanetary moon called Pandora. All was well ‒ a perfect display of symbiosis between nature and its inhabitants ‒ until humanity had to show up. And once again, it was a case of ‘winner takes all’, where the collective force of human greed was unleashed to tailspin perfect order into chaos.
Albeit fantastically far-fetched, there was something deeply resonant about the movie. What made Avatar rack up three Academy Awards and more than USD 2 billion at the box office was more than just great computer graphics and Sam Worthington’s acting. It was the sense that ‘this is us now ‒ and this could be us later.’ The plot was a familiar one, with humanity as the classic villain, willing to plunder paradise for temporal gain but to its ultimate demise.
In 1974, biochemist James Lovelock posited a new paradigm known as the Gaia theory. Essentially, he said that organisms and their inorganic surroundings have evolved together into a single living, self-regulating complex system over time. The biota, or web of interdependent organisms, has determined everything from global temperatures to ocean salinity ‒ anything that would ensure “life maintains conditions suitable for its own survival”. In short, life has been making a way for itself over countless centuries.
We may not be able to tap into the whisperings of Eywa like the Na’vi did in Avatar. But what if we could tune into the undertones of the biota? Our technologies are advancing concurrently and exponentially, synthesizing billions of intelligent devices into one cloud-based ecosystem known as the Internet of Things (IoT). As our systems get smarter, so too will our ability to understand their interconnectedness. Imagine the transformative power we could unlock if we could see the cumulative impact of a billion small actions in motion. Could IoT be the hero to save us from ourselves?
The laws of consequence
The concept of consequence is nothing new. Scientists have been studying it for centuries. Newton reminds us in his third law that every action has an equal and opposite reaction; Clausius and Kelvin tell us in the first law of thermodynamics that energy cannot be created or destroyed ‒ only transformed from one form to another. So, when we extract oil from the earth, transforming its thermal energy into kinetic energy to turn a turbine and generate electricity for our household use, we have to accept there will be a consequence on the other side of the equation. Look no further than our melting ice caps for exhibit A.
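To make the ‘other side of the equation’ concrete, here is a minimal back-of-the-envelope sketch in Python. The energy density, plant efficiency, and CO2 figures are rough, commonly cited approximations used purely for illustration, not precise data.

```python
# A rough sketch of "the other side of the equation" for one litre of fuel.
# The constants below are approximate, commonly cited values (illustration only).

LITRES_BURNED = 1.0
ENERGY_DENSITY_MJ_PER_L = 34.0   # roughly 34 MJ of heat per litre of oil-derived fuel
PLANT_EFFICIENCY = 0.38          # a typical thermal plant converts ~38% of heat to electricity
CO2_KG_PER_L = 2.3               # roughly 2.3 kg of CO2 released per litre burned

heat_mj = LITRES_BURNED * ENERGY_DENSITY_MJ_PER_L
electricity_mj = heat_mj * PLANT_EFFICIENCY   # the part we keep
waste_heat_mj = heat_mj - electricity_mj      # the part the planet keeps
co2_kg = LITRES_BURNED * CO2_KG_PER_L

print(f"Useful electricity: {electricity_mj:.1f} MJ (~{electricity_mj / 3.6:.1f} kWh)")
print(f"Waste heat:         {waste_heat_mj:.1f} MJ")
print(f"CO2 emitted:        {co2_kg:.1f} kg")
```

Even under these generous assumptions, most of what comes out of that litre is not electricity at all ‒ it is waste heat and carbon.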
But as our devices become ever more embedded with intelligence, and IoT rolls merrily our way, we have fewer excuses not to connect the dots into the future. Machine learning will increasingly make sense of the vast oceans of data flooding in daily, filtering out helpful insights and patterns to enable improvements in nearly every sphere of life. Machines will most likely get very good at pointing out the opportunities and pitfalls, and we will be left to steward those key insights.
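As a toy illustration of that kind of filtering ‒ not any particular platform’s method ‒ here is a minimal Python sketch that flags anomalous readings in a simulated sensor feed. The data, window size, and threshold are all hypothetical.

```python
# Flag readings that deviate sharply from recent history, using a simple
# rolling z-score over a simulated IoT temperature feed.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings far outside the recent pattern."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Simulated temperature feed with one spike the filter should surface.
feed = [21.0 + 0.1 * (i % 5) for i in range(100)]
feed[60] = 35.0
print(list(detect_anomalies(feed)))  # -> [(60, 35.0)]
```

Real deployments would use far richer models, but the principle is the same: the machine surfaces the pattern, and a human decides what to do with it.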
An apocalyptic alternative
Of course, there are warning lights everywhere. The ever-present concern is that we’re frantically investing in a world order that could potentially outrun our human capacities and ingenuity, offering no promise to keep us in it someday. Experts like Nick Bostrom warn us of the grave dangers of uncontrolled AI, Elon Musk among them, predicting its powers could trigger a third world war and eventually wipe out humanity.
But apocalyptic singularity is merely one way of looking at it. Some of the world’s top entrepreneurs are suggesting a more cooperative, hands-on approach to the issue. Industry leaders like Eric Schmidt, Peter Thiel, and Elon Musk have invested billions in the research and promotion of ethical AI to ‘benefit humanity as a whole’. Called OpenAI, the new non-profit is aimed at developing AI that can serve as a tool to help solve major challenges, including climate change and food security. They argue our technologies can become forces for the greater good, rather than shovels for our species’ grave. Says Facebook’s chief technology officer Mike Schroepfer, “The power of AI technology is it can solve problems that scale to the entire planet.”
The maths of morality
If technology is able to zoom in and out, offering more micro and macro insights into human behavior and impact, how can we ensure that insight will translate into ethical action? In other words, how can we make sure the robots and ‘biota’ and ‘Mama Gaia’ will all want to be friends? The question is not an easy one when you consider the complexities of overpopulation, short-term profit gain, and demands for environmental protection, all running side by side and vying for more on top. If we ever want to tap into the heartbeat of our own Gaia, we’ll need to invest much more in the algorithm of ethics.
Says Pedro Domingos, author of the recent book The Master Algorithm, “I actually don’t think it’s that tough to encode ethical considerations into machine learning algorithms.” However, he notes: “The big question is whether or not we human beings are able to formalize our ethical beliefs in a halfway coherent and complete way.” The real issue is, as custodians of a moral code, are human beings even able to articulate and agree on what’s right and wrong?
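To make Domingos’ point concrete, here is a deliberately simple, hypothetical sketch of one way an ethical consideration could be written into an objective function. The actions, scores, weights, and hard limit are invented for illustration; choosing and agreeing on them is precisely the hard part he describes.

```python
# Score candidate actions on expected benefit minus a weighted penalty for
# estimated harm, with a hard ethical limit that rejects an action outright.
# All names and numbers below are made up for illustration.

def ethical_score(benefit, harm, harm_weight=2.0, hard_limit=0.8):
    """Return a score, or reject outright if harm exceeds the hard limit."""
    if harm > hard_limit:
        return float("-inf")          # some lines we simply refuse to cross
    return benefit - harm_weight * harm

candidates = {
    "expand_irrigation":  {"benefit": 0.9, "harm": 0.3},
    "drain_wetland":      {"benefit": 1.2, "harm": 0.9},
    "precision_watering": {"benefit": 0.7, "harm": 0.1},
}

best = max(candidates, key=lambda a: ethical_score(**candidates[a]))
print(best)  # -> "precision_watering" under these made-up numbers
```

The code is the easy part. The weight on harm, the placement of the hard limit, and even what counts as ‘harm’ are exactly the ethical beliefs we have yet to formalize.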
At this relatively early stage of AI advancement, the onus is on us to intersect and qualify the moral code for sustainability, planet, and population alike. If we can learn to connect our technologies to understand the rhythms and impacts of symbiotic living, we may not need to strap those moon boots on and plan our earthly exodus just yet.
Perhaps the notion of Pandora wasn’t as ‘out there’ as it seemed. Our world is awfully interwoven, and we may be building the tools to interpret and manage its complexity for the better. But as our systems grow more sophisticated, the question governing our future may not be so much how we will steward this responsibility, but rather will we?