Sitting in traffic one rainy Sunday, listening to (embarrassingly accurate) 90s music recommendations on Spotify, I mulled over the increasing presence and influence of Artificial Intelligence in my daily life. On my Sunday drive to Spotlight I can avoid the worst of the rain-affected traffic and save journey time by following the Google Maps recommended route. The time my partner spends entering invoice and bill details into Xero has been cut down by the implementation of machine learning. My TradeMe recommendations are more relevant and presented in an order that makes sense to me because AI analyses my browsing and buying behaviour.
Looking through the fogged-up windows, the one thing clear to me was that AI isn’t a technology to think about designing for in the future; it’s already here. Product teams developing AI-powered tools are making decisions on ethics, functionality and interactions today that will become foundations for what is built over the next 5, 10, and 50+ years. When making these decisions, we have a responsibility to be judicious and mindful about the foundations we are collectively working to define.
Physicians can take the Hippocratic Oath to swear to uphold certain ethical standards. Research scientists must submit their proposals to ethics committees for approval. And while there is a “strong libertarian ethos” among technologists, there are no universal ethical standards or processes for digital product teams to follow. As a designer in a service company that works on a wide variety of client products, I need to be informed if I hope to be an active participant in discussing ethical AI within my projects, company and industry. Here are some steps and resources I’ve gathered to help me, and perhaps you, prepare to become an advocate for the discussions and decisions around ethical AI practices that lie ahead.
1. Learn from unintended consequences in the past
Humans have a long history of experimenting with things we don't fully understand, even with the best of intentions. Bernard Vonnegut (older brother of science fiction writer Kurt Vonnegut) worked for General Electric in the 1940s as an atmospheric scientist. He discovered that silver iodide could be used as an agent to affect the amount of rainfall from clouds, a technique known as cloud seeding. His hope was that cloud seeding could help bring rain to drought-affected areas, but his project attracted military interest due to its potential to redirect weather patterns (e.g. hurricanes) towards enemy targets. Although his work helped to demonstrate that human activity can affect the weather, inspiring subsequent research on climate change, it also highlights the moral responsibilities that scientists and technologists share. Just because you can, doesn’t mean you should.
By learning about the unintended consequences of past innovation, we can bring circumspection and discussion into our own project development.
MSD’s Privacy, Human Rights and Ethics framework overview
In May this year the Ministry of Social Development released information on the Privacy, Human Rights and Ethics framework (“PHRaE”), which helps teams question whether it’s right to use data just because they have access to it.
It’s described as a set of smart tools “that users of information can utilise to ensure privacy, human rights and ethics have been considered from the design and development stage of an initiative.”
While the tools still appear to be internal to government, the framework overview (linked in the article) gives a good start to questions that project teams can ask to highlight potential risks and unintended consequences in any project that collects user data.
Data Ethics Framework - Department for Digital, Culture, Media & Sport, UK
The Rt Hon Matt Hancock MP, Secretary of State for Digital, Culture, Media and Sport, released the Data Ethics Framework “to encourage ethical data use to build better services and inform policy.” The framework includes a workbook (linked on the page) that product teams can step through: start with a clear user need and public benefit; be aware of relevant legislation and codes of practice; use data that is proportionate to the user need; understand the limitations of the data; use robust practices and work within your skillset; make your work transparent and be accountable; and embed responsible data use.
2. Take inspiration from the future(s)
“I define science fiction as the art of the possible.” Ray Bradbury
Science fiction can give great insight into potential futures, with countless depictions of dystopian worlds caused by omnipotent AI wreaking havoc. Positive advancements in technology have also appeared in fiction before becoming a reality. Jules Verne proposed the idea of light-propelled spaceships in his 1865 novel ‘From the Earth to the Moon’, and NASA engineers are today developing and deploying solar sails. Arthur C. Clarke’s 1968 novel ‘2001: A Space Odyssey’ included tablet computers called Newspads, with the first Microsoft tablet going on sale in the early 2000s.
Leaders in the technology and software industries already highlight the value of their teams drawing influence from science fiction. Science fiction writer David Brin serves on the advisory board of NASA's Innovative and Advanced Concepts group. Microsoft hired science fiction writers to envision the future of new technology, and the VR company Magic Leap’s Chief Futurist is currently Neal Stephenson, a prominent sci-fi writer whose book ‘Snow Crash’ is on a reading list that Facebook gives its Oculus employees.
The MIT Media Lab teaches the course Science Fiction to Science Fabrication, where students combine science fiction texts and films with physical or code-based “interpretations of the technologies they depict” (i.e. prototypes!).
Instructor Sophie Brueckner explains: “Authors have explored these exact topics (biotech, genetic engineering etc.) in incredible depth for decades, and I feel reading their writing can be just as important as reading research papers.”
A fascinating example is a student project inspired by William Gibson’s ‘Neuromancer’, in which students used “electrodes and wireless technology to enable a user, by making a hand gesture, to stimulate the muscles in the hand of a distant second user, creating the same gesture.” Alongside its genuinely humane and positive potential uses, instructor Novy explains, “there was also deep discussion among the class about the ethical implications of their device. In Gibson’s novel, the technology is used to exploit people sexually, turning them into remote-controlled ‘meat puppets.’”
“Science fiction, at its best, engenders the sort of flexible thinking that not only inspires us, but compels us to consider the myriad potential consequences of our actions.” Sophie Brueckner
MIT’s ‘Science Fiction to Science Fabrication’ reading list
Topics covered in the class and the reading list include virtual/augmented reality, networks, artificial intelligence, nanotechnology, and more.
“This class ties science fiction with speculative/critical design as a means to encourage the ethical and thoughtful design of new technologies.”
3. Understand the ethics bit
As software teams building and advancing AI technology, we have an implicit influence and a responsibility to speak and act if we disagree with project, company or industry decisions. Earlier this year over 3,000 Google employees shouldered that responsibility and exercised their right to challenge business decisions, signing a letter to protest the company’s involvement in the Pentagon program Project Maven and asking Google to announce a policy that it will not “ever build warfare technology.” The New York Times reported in June that Google won’t renew the contract with the Pentagon when the current deal expires next year.
At Ackama we have company-wide discussions to develop a shared understanding of what we define as 'ethical' work or an ethical company/organisation. Although we recognise it's a tricky subject where the lines can be blurry, thought experiments and discussions are a valuable way to explore it. Does your workplace have a policy in place describing its understanding of an ethical or unethical project or company? Can you exercise your right and responsibility to challenge company leadership if you do not agree with decisions that affect privacy, human rights and ethics within a project?
Crash Course: What is Philosophy (animated videos)
Before I feel comfortable having an informed discussion about ethics and AI, and about what constitutes an ethical project or company, I am doing a bit of groundwork on the basics in the field of ethics. I highly recommend this Philosophy video series from ‘Crash Course’, which offers great animated explanations of key concepts.
NZ Human Rights Commission: Privacy, Data and Technology: Human Rights Challenges in the Digital Age (May 2018)
It’s a meaty paper but a really interesting read on human rights and privacy rights as they relate to data, data retention, mass surveillance, big data and AI, and on how these interact with the Bill of Rights Act (which affirms, protects, and promotes human rights and fundamental freedoms in New Zealand) and the Privacy Act (which regulates the collection, use and disclosure of information about individuals).
Artificial Intelligence technology continues to advance, and in order for it to advance in a direction we are comfortable with, we as product teams need to be involved in the discussion around ethics and AI. We need to learn from the unintended consequences of past innovation, take inspiration from the futures proposed in science fiction, and know what we stand for as individuals as well as understanding our company and industry ethical standards. The future of ethical Artificial Intelligence does not just depend on global leaders, human rights professionals, and Silicon Valley CEOs; it also depends on us.
Thanks to Kim Partridge and Shakira Jensen.