In response to calls to stop using unethical and exploitative generative AI tools, some critics of this position argue that such demands are unrealistic because the incentives in favor of use are simply too powerful. This objection has force: the path of least resistance often outweighs willpower and commitment. Acknowledging this difficulty, some of these critics, let’s call them “responsible realists”, advocate for “responsible use” instead.
I find the stance of the responsible realists puzzling. If we take seriously the argument about the practical impossibility of resisting a highly convenient tool and the resulting erosion of ethical commitment (this is the responsible realist’s own premise, so let’s accept it for the sake of argument), then the same argument applies to the feasibility of truly responsible use. If we accept that convenience eats ethical commitment for breakfast, why do responsible realists believe that responsible use is more feasible than no use? Why would calls for responsible use be any more consequential amid the same incentives that render rejection unviable? You can’t have your cake and eat it too.
And what do responsible realists mean by responsible use, really? As I understand it, it often boils down to accepting the ethical trade-offs as given and using the tool while staying aware of its problems. For example: “You should always check the output for biases and falsehoods.” This narrow, individual focus sets a low bar for responsibility, undermining the strength of the concept.
If we understand responsibility in a stronger sense, one that includes refusing complicity in the harms a tool is built on, responsible use becomes as demanding as rejection. If the commercially available generative tools are indeed unethical and exploitative, as many argue, then the only use compatible with that stronger standard of responsibility is no use at all.
Naturally, there are other strategies beyond these two: critical awareness, participatory engagement in design and adoption, regulation, and activism, among others. But that is a different topic.