Human-Centered (vs) AI

By Douglas Eyman
We've been thinking a lot about AI and its impacts on our work at Kairos. We recently updated our AI policy[LINK], drawing on the work of many other folks at journals in both the humanities and the sciences; we're all struggling to understand how to work in a world that now produces so much machine-made content, much of which has little, if any, value. I love the idea of seeing more words, more literate creations, on the Internet, but I don't love the flood of useless verbiage we're being subjected to. And it's making our work more difficult: AI introduces errors that we have to correct when we copyedit the works we publish, even when it's used only to 'check your grammar' or 'proofread the text.' AI changes what doesn't need to be changed, fixes most (but not all!) errors, and then adds new ones.
And now we have an additional AI-induced headache: people can't read the work we publish at Kairos because our site is not always available. Why not? Because AI scraper bots are absolutely hammering our server with connections, effectively performing a denial-of-service attack just as a malicious hacker would. These bots ignore our polite requests not to do this (made in our robots.txt file, which crawlers are supposed to consult to see what's fair game for indexing and what is not). There are some technical fixes, but for small, volunteer-run sites like ours, they take time and expertise to implement, and we don't really have enough of either.
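For those facing the same problem, the usual first step is to name the known AI crawlers explicitly in robots.txt. What follows is a minimal sketch, not a complete or current list; the user agents shown are the publicly documented crawlers for a few of the large AI companies, and, as noted above, only well-behaved bots honor these rules:

```
# robots.txt: ask known AI training crawlers to stay away.
# Compliant bots honor this; abusive scrapers often do not.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else may index as usual (an empty Disallow permits all).
User-agent: *
Disallow:
```

For bots that ignore robots.txt, the fallback is rate limiting at the server itself. Assuming an nginx-served site, a sketch using the standard limit_req module might look like the following (Apache and most CDN services offer comparable throttling):

```
# nginx: cap each client IP at roughly 1 request/second,
# with a small burst allowance for legitimate readers.
# Both directives live inside nginx's http {} context.
limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;

server {
    location / {
        limit_req zone=perip burst=10 nodelay;
    }
}
```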
I started out interested in what these AI tools could bring as a new writing technology, but I've rapidly become disillusioned with their performance, and I've come to realize that the negative environmental, labor, and intellectual property impacts of these systems represent far too high a price to pay for excessively generic, error-ridden texts; summaries that lack nuance or understanding; and images of six-fingered people with hats floating above their heads. I will confess that I have used some AI-generated images in projects that focus on AI in composing, treating the tools both as potentially useful and as objects of critique; our “Can AI Peer Review?” post uses one such image. We have not used AI in any part of the writing process for these posts, although we did use it to help create a generic feature image for two posts we wrote early in the process of publishing this book. More recently, we have made a conscious decision to use public-domain and CC-licensed images from human producers rather than add to the environmental damage being done by the data centers that support AI applications.
Recently, Emily Bender posted on Bluesky an interesting observation from UNESCO's Digital Learning Week event: if AI refusal is off the table, then “we have in fact centered the technology, not the people” as the default. That thought has stayed with me, in part because our field has a very solid set of arguments for AI refusal informed by writing studies research: see Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes's Refusing GenAI in Writing Studies, or Fernandes and McIntyre's webtext in Kairos, “Giving Voice to Generative AI Refusal in Rhetoric and Writing Studies.”

We do need both scholars and students to develop critical literacies around generative AI, and it's likely that genAI will become part of a regular workflow, much as spell-checkers and grammar checkers are now; but it won't be any more revolutionary to writing or to work than those tools have been. And allowing individuals to make ethical choices about not using AI is something we (as editors and publishers) should support. We at Kairos have also asked our peer reviewers not to use AI in their review work, for a host of reasons, ranging from the consequences of feeding authors' intellectual property to AI systems without their consent to the damage done when AI produces incorrect and unsophisticated review responses.
Kairos’s AI policy was drafted by me and reviewed by our staff; we expect it to evolve over time to account for changes in the technology, its environmental effects, and its uses. The policy states my concerns about AI in a milder form than I've voiced above, but in the context of editing and producing the work of the journal, I think it captures our stance fairly well. After posting the initial policy, we added language about AI use in peer review as a result of discussions with editors of other journals about developing AI policies. We think this is a good policy overall, so we are including its language below. We encourage anyone who finds it useful to appropriate it, edit it for their own contexts, and use it with or without attribution.
Generative AI can be used for different parts of the writing process, and different writers will find value in using it in different ways. While we don't see it as appropriate to dictate how an author chooses to use (or not use) generative AI applications, Kairos believes that the inherent flaws of generative AI trade effectiveness for a false sense of efficiency, and that its use introduces errors that slow down our editorial review and copyediting processes. We must also acknowledge strong evidence that AI is bad for the environment, bad for scholarship, bad for language, bad for art, and bad for people. Relying on AI to generate content or to summarize prior scholarship is antithetical to the research enterprise, and we advise authors to be critical users of these tools. Authors must carefully review and approve any output produced by these systems, as they will be held accountable for it. We had originally noted that authors should cite AI when using its output, and for purposes of example or critique, this is still the case. However, AI output cannot be cited as an authorial source, because generative AI cannot be held accountable for what it produces. Similarly, authors should not treat generative AI as an interlocutor (that is, they should not report on AI output as if it came from a conscious entity or one with the capacity to act rhetorically). Authors who use generative AI in their writing process should provide a statement acknowledging that they have done so, and in what capacities.
While we believe that AI should generally be avoided in the writing process, we recognize that it can be an effective tool for generating HTML and CSS code and for assisting with the coding process as an author works to align a webtext's design with its argument. Even so, the key design decisions should be developed solely by the author, with AI playing only a supporting role in the production of the webtext. If an AI system is used merely to reproduce one of the many standard responsive templates freely available on the web, it has added no value; AI coding should be used only to help an author realize their own creative vision. Authors will still need to review the code for accuracy and correctness, and they should provide a brief statement acknowledging any use of AI for coding or design. Kairos prefers that authors use human-made art or graphics (and we encourage authors to engage their own creativity when it comes to visuals and design elements); however, AI-produced images or text may be used in service of critique or commentary, just as we allow fair use of published media for those same purposes.
We also ask our peer reviewers to refrain from using generative AI in their review processes. We know that AI is not an effective summarizer of complex texts (and it has trouble with the kind of designed arguments, the webtexts, that we publish in Kairos). We view our peer-review process as a mentoring opportunity rather than simple gatekeeping, and letting AI do the mentoring and reviewing work would be an abdication of our responsibility as reviewers and as scholars.