GCU’s virtual AI center generates calm, clarity in world of AI anxiety

GCU launched its virtual Center for Educational Technology and Learning Advancement this year to support AI literacy.

EDITOR'S NOTE: This is part one of a two-part series on AI anxiety and how GCU's new virtual Center for Educational Technology and Learning Advancement is helping to support AI literacy and guide students and faculty in the ethical and responsible use of artificial intelligence. Part two, about two of the campus' AI champions, will be published Wednesday on the GCU News website, news.gcu.edu.

Artificial intelligence, generative or not, generates real-world anxiety.

“One of the things we heard last year from students was that they were afraid to use AI, some of them because of this anxiety that they had,” said Rick Holbeck, co-director of Grand Canyon University’s Center for Educational Technology and Learning Advancement, a virtual center launched this year.

Rick Holbeck is one of GCU's leaders in integrating artificial intelligence into learning. (Photo by Ralph Freso/GCU News)

That unease stems from unclear rules, a lack of guidance or uncertainty about whether using AI will be perceived as appropriate, ethical or reliable.

“Students absolutely know that they live in a world that is AI-driven,” said Dr. Jean Mandernach, CETLA co-director alongside Holbeck. “They’re fully aware that they’re going to need to understand and use AI in their future careers.

“On the other hand, they get a lot of messages about cheating and academic integrity and authorship, and so this anxiety really stems from the unknown; from a lack of clear guidelines.”

Holbeck and Mandernach want to quell those fears through CETLA, which GCU developed as a comprehensive approach to supporting AI literacy, fostering an AI-aware curriculum and, as Mandernach said, “enhancing education in an AI-era.”

GCU's CETLA aims to foster an AI-aware curriculum. (iStock image)

She sees AI use as a continuum, with “ethical and responsible use” at one end and “unethical and irresponsible use” at the other. The goal is to judge where a given use falls along that spectrum, rather than to follow rigid rules labeling use as simply good or bad.

“Instructors are still figuring out their own boundaries,” she said. “So students are getting very different messages in different classes. That inconsistency fuels a lot of the anxiety we’re seeing.”

"AI literacy isn't just how to use it. It's how to question it," said Dr. Jean Mandernach, co-director of GCU's virtual AI center, CETLA. (Photo by Elizabeth Tinajero/Grand Canyon Education)

The center aims to set clear guidelines for the ethical and responsible use of AI at the university.

Added Holbeck, “I think part of what we’re doing is just setting the stage so that we can alleviate some of that anxiety and let them (students) know that it is OK to use it as long as they’re using it properly and ethically.”

Proper and ethical use really are at the heart of CETLA policy at GCU.

***

Proper AI use starts with knowing how to prompt the model – the data or question the user inputs – and those prompts are vital. Generated answers vary with the prompt, the model used and the depth of the data search.

Still, whatever the prompt, AI-generated responses aren’t always accurate.

Those inaccuracies are compounded by false or misleading information and biases absorbed into the model from its training data. AI only knows the information it sources from existing data.

In an educational setting, responsible AI use means questioning the source of the information and ensuring that academic work is attributed correctly when artificial intelligence is used. (iStock image)

Bay Area Cognitive Behavioral Therapy Center founder Dr. Avigail Lev, a clinical psychology expert who has analyzed the impact of AI on human relationships and mental health, said, “AI is much more flawed than people realize. It’s hallucinating a lot more, lying about things more, and filling in blanks that are not true. The more information people feed it, the worse it gets.”

The same question can lead OpenAI’s ChatGPT, Google Gemini and Microsoft Copilot to generate different answers and cite different sources.
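To make that concrete, here is a minimal sketch in Python of posing one question to two different models and comparing the replies. It is illustrative only and not CETLA tooling: it assumes the official openai and google-generativeai packages are installed, that API keys are set in the environment, and that the model names, chosen here as stand-ins, are still current.

import os

from openai import OpenAI
import google.generativeai as genai

PROMPT = "In two sentences, why do AI chatbots sometimes invent sources?"

# Ask OpenAI's model; the client reads OPENAI_API_KEY from the environment.
openai_client = OpenAI()
openai_reply = openai_client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": PROMPT}],
)
print("ChatGPT:", openai_reply.choices[0].message.content)

# Ask Google's model the identical question.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name
print("Gemini:", gemini_model.generate_content(PROMPT).text)

Run it more than once and the two answers typically differ, both from each other and from their own earlier runs – which is exactly why any single output shouldn’t be taken at face value.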

Mandernach and GCU’s CETLA policies take that scrutiny one step further.

“AI literacy isn’t just how to use it. It’s how to question it,” Mandernach said. “Verifying the sources, questioning and not just taking what AI gives you at face value.”

In 2024, news stories reported that Google’s AI Overviews search feature recommended using nontoxic glue on pizza to better hold the cheese in place. The results could have been disastrous had users not applied critical thinking.

Igor Trunov is founder and CEO of Atlantix, a Paris, France-based company that uses AI to connect researchers, entrepreneurs, corporations and investors on a global platform. (Contributed photo)

In 2023, lawyer Steven A. Schwartz used ChatGPT to prepare a court filing on behalf of Roberto Mata, who sued Colombian airline Avianca, alleging a metal food and beverage cart injured his knee on a flight.

Avianca’s attorneys couldn’t locate the cases cited in Schwartz’s brief; AI had invented them.

Not even engineers coding the models know how artificial intelligence thinks or works, said Igor Trunov, founder and CEO of Atlantix, a Paris, France-based company that uses AI to connect researchers, entrepreneurs, corporations and investors on a global platform.

“The biggest issue right now in artificial intelligence is transparency,” he said. “Even engineers from OpenAI say what happens inside is a black box. We can train AI and feed it information, but how it comes to its conclusions, we don’t know. Everyone knows how to train it, but no one knows how it thinks.”

This ties back to Mandernach and Holbeck, who said users, particularly students, can’t yet critically evaluate whether an AI answer is correct, is misinformation or is a “hallucination,” the term for AI models’ tendency to generate false or fabricated information.

Dr. Nika White, founder of Greenville, South Carolina-based Nika White Consulting, wants to find ways to make AI more inclusive. (Contributed photo)

 “If you don’t have the foundational knowledge to know if it’s telling you correct information or not, it’s not going to be that helpful for you,” Holbeck said.

Added Dr. Nika White, founder of Greenville, South Carolina-based Nika White Consulting, who has spoken on ways to make AI more inclusive, “Responsible AI definitely is one that’s taking into account the accuracy of the information. Are we testing that, and how are we fact-checking? We have to make sure we’re not just seeing it as technical, but also as something that needs that human side.”

***

From an educator’s perspective – any professional’s perspective – one fear is that AI will take their jobs.

Lev sees that fear in the tech companies she works with.

“In these huge companies, the people at the top are very confused about really how much AI will replace people or not,” Lev said. “What’s happening right now is that people are getting laid off, (then) people are getting hired back again because it seems like these tech companies are not sure what the new AI is like. (Companies are) not so clear on what their end goal is because they are not aware of (AI) limitations.”

Educators also worry students are using AI to cheat.

“On the one hand, educators' biggest focus, as it relates to AI, is trying to control all the ways that their students may be using it, and then this, like, fear that it’s going to be used for cheating,” said Jessica Reid Sliwerski, co-founder and CEO of Bay Area-based Ignite Reading, which augments human tutoring with AI-enabled assessment.

Jessica Reid Sliwerski's Ignite Reading, based in the Bay Area, augments human tutoring with AI-enabled assessment. (Contributed photo)

“And then what I also see is a fear of using it themselves, because they haven’t been told how they’re allowed to use it. So there’s also anxiety about, ‘Am I even allowed to be using this as a teacher?’”

Even though GCU’s CETLA policy doesn’t expressly address AI anxiety, the university is defining the guardrails to keep students and faculty on the responsible end of the continuum.

“When we’re talking about students using AI, we want them to understand the appropriate way to disclose that use,” Mandernach said. “(They need to learn how) to attribute what came from them versus what came from AI. That’s part of the ethical literacy piece that’s missing right now.”

She added, “Ethical use is very much human-centered. It is engulfed in transparency. It is very equity-focused. Irresponsible use is when we are deploying AI without assessing its impact, reinforcing systemic bias, or prioritizing speed and profit over safety.”

Beyond cheating, other concerns are that AI stifles creativity and leads to a decline in societal intelligence.

Trunov said, “I’m worried about the education system in the future, when you have artificial intelligence and it can do all your homework. It’s difficult to find any arguments to explain why you need to get new knowledge.”

The one non-negotiable everyone agrees on is that there needs to be human oversight.

Ethical and responsible use of AI is central to GCU's CETLA, say co-directors Rick Holbeck and Dr. Jean Mandernach. (Photo by Taylor Fox)

“AI can create great content, but it has to be checked by people. That’s what makes it powerful instead of risky,” Trunov said.

Sliwerski puts teachers in the role of human overseers who know that, ultimately, AI, like books or calculators, is just another teaching and learning resource.

“We need to help teachers see AI as a tool that supports them, not something that replaces them,” said Sliwerski. “The human connection – empathy, responsiveness – that’s where real learning happens.”

***

How AI was used in writing this article: The writer uploaded interview transcriptions to ChatGPT, which was prompted to prepare an outline using interview quotes. ChatGPT populated sections and subsections with quote summaries, though a human chose, edited or deleted suggested sections and subsections.

The model was then prompted to pull relevant quotes as the writer drafted sections. The most pertinent quotes were selected via human intervention, then subsections were written and selected quotes inserted.

The article was then run through Grammarly, an AI-based grammar, style and spell-checking app, and the computer read the article aloud so the writer could make further edits.

A second draft received more nuanced editing; that draft was again reviewed through Grammarly and edited by a human while being read aloud.

Two human editors read the story and made significant editing suggestions. A final edit by one human editor cut the story nearly in half while sharpening its focus and organization.

According to GCU guidelines, this article was prepared using AI responsibly and ethically.

***

Related content:

GCU News/GCU Magazine: Teaching A.I.de: How GCU is leveraging artificial intelligence in learning

GCU News: From rock climbing to the NBA, students display their deep learning at AI event
