Just the other day, I found an article breaking the dilemma down into a simple multiple-choice question: whom do we trust with creating AI in government? The data scientists, the subject-matter experts who work for those agencies, or someone else - like a program manager?
Admittedly, I find such articles a little tedious, because they use the word AI as if it has a special impact on people, different from other technologies. Do you have car insurance, or life insurance? Then your life has already been measured, quantified, and ranked to identify a particular level of risk, and that risk decides how much money you pay for your coverage. Insurance has been around for a long time, hundreds of years really, and while the methods of risk analysis have changed over time, the overall experience remains the same: you pay more money if you are high risk and less money if you are low risk. Add AI/ML/DS to the mix, and we just end up with a more granular mapping of risk to cost. That granularity might create a cost savings for you - or, more likely, it will better benefit the insurer.
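To make "more granular" concrete, here is a toy sketch in Python. Nothing here reflects any real insurer's rates or model; the base premium, tier boundaries, and risk scores are all invented for illustration.

```python
# Illustrative sketch only: coarse risk tiers vs. a granular,
# score-driven premium. All numbers below are invented.

BASE_PREMIUM = 100.0  # hypothetical monthly base rate

def tiered_premium(risk_score: float) -> float:
    """Traditional approach: bucket people into a few broad tiers."""
    if risk_score < 0.3:
        return BASE_PREMIUM * 0.8   # "low risk" tier
    elif risk_score < 0.7:
        return BASE_PREMIUM * 1.0   # "average risk" tier
    return BASE_PREMIUM * 1.5       # "high risk" tier

def granular_premium(risk_score: float) -> float:
    """Model-driven approach: price scales continuously with the score."""
    return BASE_PREMIUM * (0.7 + risk_score)

for score in (0.25, 0.29, 0.31, 0.69, 0.71):
    print(f"risk={score:.2f}  tiered=${tiered_premium(score):6.2f}"
          f"  granular=${granular_premium(score):6.2f}")
```

Notice that two people on either side of a tier boundary (0.29 vs. 0.31) pay very different amounts under tiers, while the granular price moves smoothly between them - and the gap between the two schemes is exactly the room an insurer has to reprice in its own favor.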
With such a robust history of calculation driving day-to-day decisions in the world, these questions about AI have plenty of historical context. Team building and decision making are multifaceted, and a team's overall bias is rooted primarily in the interests of whoever is driving it from the background. In private sector efforts, we see the shadows of financing and financial goals; in the public sector, we have the guardrails of public policy. If there is no policy, then the people on the front lines will have the most influence, for better or worse.
What is forgotten in such discussions is that the issue is not solved merely by identifying "who" is doing the work. The second question is "how" they are doing it.
Our processes, our tools, and our communications shape our outcomes. I've hired highly talented data scientists who were terrible communicators, and the projects failed because of it. I've also hired incredible communicators with deep expertise who relied on antiquated methods, so their work was never easily absorbed into the organization. Not to mention, tools and code carry their own flavors and biases.
If we want to build great AI, we have to stop pretending that the open-ended question of "what should we do" can be solved with simple heuristics. Rather, let's educate ourselves to better understand the problems, discuss them, collaborate, build, and share our work with each other. We will make mistakes, and with persistence our mistakes will change, as we also change our tools and our minds.