Does the algorithm solve the human bias problem, or does it compound it? Are there unintended consequences associated with heavy reliance on algorithms for decision-making? How do we decide where to draw the line?
For example, Facebook’s year-in-review collections of most memorable images, automatically posted to users’ news feeds, created unintended consequences: a great way to celebrate a birth or a new job, but a somber way to be reminded of the loss of a loved one.
USV opened a thread on this topic:
Recommendation engines and algorithms create positive feedback loops that reinforce a desired behavior by replicating the conditions that led to that outcome. These tools can provide a great user experience by recommending new content before you even go looking for it. Amazon might recommend a new book by an author you’ve already purchased books from. Facebook may suggest a friend to connect with based on your having 10+ mutual friends. The recommendations are there to make the product more useful, more personalized, and easier to find value in.
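The mutual-friends suggestion described above can be sketched in a few lines. This is a minimal illustration, not Facebook’s actual system: the graph, user names, and `suggest_friends` helper are all hypothetical, and the ranking is a simple count of mutual friends.

```python
from collections import Counter

# Hypothetical social graph: user -> set of friends (undirected).
graph = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "carol", "erin"},
    "carol": {"alice", "bob", "erin"},
    "dave": {"alice"},
    "erin": {"bob", "carol"},
}

def suggest_friends(user, graph, min_mutual=1):
    """Rank non-friends of `user` by mutual-friend count,
    mimicking a 'people you may know' style recommendation."""
    counts = Counter()
    for friend in graph[user]:
        for fof in graph[friend]:
            # Skip the user themselves and existing friends.
            if fof != user and fof not in graph[user]:
                counts[fof] += 1
    return [(u, n) for u, n in counts.most_common() if n >= min_mutual]

print(suggest_friends("alice", graph))  # → [('erin', 2)]
```

The feedback loop emerges once suggestions are accepted: each new connection adds edges to the graph, which in turn raises mutual-friend counts for similar users, reinforcing the same cluster of recommendations.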
Issues arise with these algorithms when they clash with ethics and morals.
Taking a philosophical stance: is it even possible to encode ethics in objective statements? Whose moral code do we select to judge value? What are the cultural implications?