The omnipresence of algorithms that impact billions of lives cannot be overstated. With their mathematical efficiency, they are built into the mobile phones, appliances, online services and products we interact with in our daily routines.
Today, algorithmically powered applications analyze, decide and act autonomously. Machines are so deeply woven into our lives that human and algorithmic behavior have become coupled, often to the point where we can no longer tell them apart. We use these tools, both deliberately and passively, to shape sleep habits, media consumption, commuter routes, transportation systems, financial systems, housing choices and even national defense.
Algorithms are purpose-built by software engineers, who may embed their own biases toward a preferred mathematical or statistical outcome. An algorithm itself has no principles; it operates within whatever moral domain its creators define. In my view, because a human mind created these tools, it is the creator’s responsibility to imbue ethical and moral considerations into their creations. However, the expertise and frameworks needed to ensure that the mathematical foundations of AI and machine learning algorithms are ethical and fair have not kept pace with the rapid development of the field.
Our role at APCO is to counsel our technology-creating clients on producing ethical and fair data processes, algorithms, machine learning and AI frameworks, and to counsel our user-side clients on how to use these tools equitably. Biases lie latent within data sets and heavily influence machine learning and AI models. Algorithmic bias is, ironically, a very human problem: it can amplify inherited flaws, shape the experience of millions of people and damage trust in a business or organization. It can unintentionally harm people of different races, genders, incomes or education levels.
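To make the point concrete, here is a minimal sketch with entirely hypothetical data: a small set of historical hiring records in which past human decisions favored group "A". A model trained naively to reproduce those historical outcomes inherits the disparity wholesale, which is exactly the kind of latent bias an audit should surface. The records, groups and metric shown are illustrative assumptions, not a real data set or a definitive auditing method.

```python
# Hypothetical historical hiring records: (group, qualified, hired).
# The "hired" labels reflect biased past decisions favoring group "A".
records = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

def selection_rate(group):
    """Fraction of applicants in `group` who were hired in the historical data."""
    rows = [r for r in records if r[0] == group]
    return sum(1 for r in rows if r[2]) / len(rows)

# A model that simply learns to reproduce historical outcomes per group
# would carry this gap forward into future decisions.
rate_a = selection_rate("A")
rate_b = selection_rate("B")

# Demographic-parity difference: one common first-pass audit metric.
parity_gap = rate_a - rate_b
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {parity_gap:.2f}")
```

In this toy data, group "A" is selected at four times the rate of group "B" despite similar qualification rates, so a large parity gap flags the data set for closer review before any model is trained on it.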
Data and systems models should reflect the populations they will serve, and at APCO we often help identify stakeholders a development team may not expect. Helping our clients assess their algorithmic processes and convert inherent negatives into positive contributions to stakeholder experiences lowers overall risk. By acknowledging, seeking out and resolving these human biases, we can help ensure that the data processes so fully integrated into our lives more closely represent the diversity of the world and reflect the inclusive intentions of the businesses that create and use them.
If you would like to learn more about algorithmic bias, I recommend Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil.