AI?

At my atheist discussion group last night, we talked about AI. Although the debates over the usefulness, or alternately the dangers, of AI continue to roil the public imagination, most of them overlook a more fundamental issue: the enduring distinction between "is" and "ought." That is, even if we can, should we? And if we do, how are we to do it?
At this point, the world cannot put AI back in the bag and pretend it isn't there. Clearly, it is one of the leading waves of the future. And the nations of the world can, if they wish, work together to establish ethical guidelines and boundaries for how AI is used. As we move forward with AI, however, it seems to me that we must keep asking ourselves this question: even if we can, should we?
One of the essential premises of capitalism is that, if we can, yes, we should. Historically and in the long, long run, this premise has generally worked to the world's benefit. Generally. "Creative destruction," proponents call it. Yes, people fall through the cracks, and yes, societies experience upheavals, but all these are part of the larger and necessary process. In the end, or so theorists say, everyone benefits.
This raises another question: how much do we wish to trust the people who are working on AI? Are they our moral arbiters? Are they the decision-makers for the rest of us? What are their ethical starting points?
Again, we cannot turn back the clock. AI is here to stay. Nonetheless, we should keep asking ourselves about a significant human virtue: how do we balance our capacity to create with our capacity to be moral?