How AI tools both help and hinder equity in higher ed

When it comes to artificial intelligence, faculty members across disciplines have had a demanding year. Many have redesigned assignments and developed new course policies in response to generative AI tools. At conferences and in idle moments, some have pondered what makes writing human. (One possible answer: burstiness.) Others have designed, delivered or participated in workshops on AI in teaching and learning, with or without institutional support. One sent students a message that he would “not grade this chat Gpt shit.” (No doubt the fallout took time to address.)

Amid 2023’s AI disruption, professors have also grappled with a paradox. In one narrative, large language models offer much-needed assistance to students by bolstering their creative, research, writing and problem-solving skills. Students with disabilities also benefit from AI tools that, for example, provide executive function support. But in a competing narrative, algorithms reproduce systemic bias and could widen the education gap like no technology before them.

“There are two schools of thought,” Belkacem Karim Boughida, dean of libraries at Stony Brook University, said, adding that he is committed to a middle ground. “All data is biased, and you can approach tools in an ethical and responsible way.”
