Generative AI in Instructional Design: Adoption, Benefits, and Best Practices
Introduction
The integration of artificial intelligence (AI) in educational contexts represents a significant shift in how learning experiences are designed and delivered (McElheran et al., 2024). Within this educational realm, instructional design, the systematic development of learning experiences, has emerged as an area of focus for AI integration. Instructional designers are increasingly leveraging AI tools to automate routine tasks, streamline content creation, and develop more personalized learning pathways (Ch'ng, 2023).
Among AI technologies, generative AI (GenAI) has shown promise for instructional design applications. These systems, which produce human-like responses to complex prompts through pattern recognition in vast training datasets, offer capabilities uniquely suited to educational content creation (Lim et al., 2023). Beyond simple text generation, generative AI demonstrates more complex capabilities, including providing immediate feedback, intelligent tutoring, and tailored instructional responses, all functions that directly align with core instructional design work (Weng & Chiu, 2023). This merging of generative AI capabilities with instructional design creates both opportunities and challenges that deserve further investigation.
Problem Statement
Despite the rapid increase of generative AI use in educational contexts, three significant gaps exist in the current literature:
Theoretical Framework Gap
While existing studies offer rich descriptive data (Luo et al., 2024), these insights have rarely been connected to technology adoption theory, leaving instructional designers' decisions about integrating AI without a clear theoretical grounding.
Human-AI Partnership Gap
While existing research catalogs the ways instructional designers use GenAI, there is a limited understanding of the complex integration patterns that represent a human-AI collaborative framework, rather than simple tool use.
Domain-Specific Implementation Gap
Research has not adequately examined the instructional design-specific challenges associated with GenAI, particularly regarding pedagogical frameworks that are crucial to effective learning design.
This study addresses these research gaps by examining the actual adoption patterns, perceived benefits, experienced challenges, and emerging best practices among practicing instructional designers who use generative AI tools. Through the theoretical lens of the modified UTAUT framework, we investigate how instructional designers develop advanced human-AI partnerships and navigate domain-specific challenges. The UTAUT framework, developed by Venkatesh et al. (2003), predicts user intentions and actual technology use behavior by identifying four key constructs: Performance Expectancy (the belief that technology enhances job performance), Effort Expectancy (the perceived ease of use), Social Influence (the perceived social pressure to use technology), and Facilitating Conditions (the organizational and technical support for the use of technology). Using UTAUT as a guide, the study's research questions are:
How are instructional designers using generative AI to automate aspects of the ID workflow?
What opportunities or advantages have instructional designers discovered when using generative AI during the ID workflow?
What best practices have instructional designers adopted when using generative AI during the ID workflow?
What challenges have instructional designers experienced when using generative AI during the ID workflow?
Question 1 examines the actual use of AI, an outcome variable in the UTAUT framework that is affected by the other constructs; it also provides context for understanding Performance Expectancy as designers utilize AI for various tasks. Question 2 relates to Effort Expectancy, examining how the ease of integrating AI shapes the perceived benefits and improvements in job-related outcomes. Question 3 addresses Attitudes toward the use of AI and the strategies used within existing Facilitating Conditions. Question 4 examines the factors affecting Effort Expectancy and identifies potential limitations in Facilitating Conditions that might prevent AI adoption.
Literature Review
Generative AI in Education
As generative AI can transform remote learning and education (Bozkurt & Sharma, 2023), these tools open new opportunities for educators to identify areas in which students might be having difficulty. Students can also use these tools to get timely feedback, individualized advice, and support (Lim et al., 2023). In addition, AI can be used to automate administrative tasks and repetitive processes, allowing instructors to focus more on pedagogy and material quality, thereby enhancing students' learning experiences (Triberti et al., 2024). It is also hoped that AI can give instructors more space and time to work with students and respond to their educational needs.
UTAUT Framework in Technology Adoption
The Unified Theory of Acceptance and Use of Technology (UTAUT) provides a theoretical framework for understanding technology adoption in educational contexts. Developed by Venkatesh et al. (2003), UTAUT synthesizes eight previous technology acceptance models.
The UTAUT model has been widely utilized in educational technology research across diverse contexts. In higher education, the UTAUT framework has been used to analyze technology acceptance regarding online marking and feedback tools, highlighting its effectiveness in assessing educational technologies rather than broader adoption trends. In K-12 education, UTAUT constructs have been used to predict technology adoption among educators, with Performance Expectancy and Social Influence identified as the most significant factors. Kittinger and Law (2024) conducted a systematic review that identified a limited number of studies applying UTAUT in K-12 settings, highlighting a gap in the understanding of technology adoption within this context.
UTAUT presents significant theoretical advantages for examining the adoption of generative AI in instructional design. UTAUT has undergone extensive empirical validation across multiple educational contexts (Dwivedi et al., 2019), providing a well-established foundation for descriptive generative AI studies. The UTAUT model also emphasizes the significance of Social Influence and Facilitating Conditions, highlighting the need for contextual analysis in technology implementation and adoption (Venkatesh et al., 2012).
Current Understanding of GenAI in Instructional Design
Recent studies have begun exploring how instructional designers incorporate generative AI into their practices. Luo et al. (2024) conducted a mixed-methods study to investigate the perceptions and experiences of instructional designers using GenAI technologies. Their research revealed that instructional designers employ GenAI for four primary purposes: idea generation, managing low-stakes tasks, optimizing design processes, and enhancing collaboration. The study also highlighted several challenges, including quality concerns, data security, and questions regarding authorship and plagiarism.
Recent studies on prompt engineering, which involves structuring inputs for AI systems, highlight its critical importance for instructional designers utilizing generative AI. Santana (2024) defines prompt engineering as the method of effectively interacting with an AI to attain specific goals, emphasizing that without purposeful prompting techniques, outcomes frequently fall short of expectations. The effectiveness of generative AI in instructional design is heavily dependent upon the quality of prompts, which must be carefully formulated to include relevant context, explicit instructions, and output specifications (Santana, 2024). This indicates the need for instructional designers to upskill, as they must attain competency in both conventional design techniques and the specific prompting frameworks that produce optimal AI results. Madunic and Sovulj (2024) add that domain-specific prompt engineering in educational settings necessitates careful consideration of pedagogical frameworks, which many AI systems struggle to execute accurately or translate into useful results without substantial prompt refinement.
Weng and Chiu (2023) and Gibson (2023) examined how AI-assisted tasks might improve the instructional design process in terms of efficiency, assessment development, content production, personalization, and engagement. Their results suggest that by automating repetitive procedures and enabling more personalized learning experiences, generative AI can enhance both the design experience and the quality of learning outcomes (Gibson, 2023; Weng & Chiu, 2023).
Current Limitations of GenAI in Instructional Design
While current studies primarily document how instructional designers employ GenAI, they lack an exploration of the complex integration patterns that define actual human-AI cooperative frameworks (Kumar et al., 2024). Although current research has shown general deployment tactics, a closer look is needed at how these technologies integrate into intricate instructional design workflow systems. Of particular concern is the limited research on domain-specific challenges unique to instructional design, especially regarding pedagogical frameworks and components critical to effective learning design (Kumar et al., 2024). Additionally, according to Luo et al. (2024), instructional designers recognize the importance of institutional guidelines when implementing new technology, but current research does not thoroughly examine how organizational limitations impact the adoption and integration of GenAI in practice.
AI Oversight and Human-AI Collaboration
Recent research has investigated human oversight of AI and the concept of human-AI collaboration, providing a foundation for understanding instructional designers' work with generative AI. Concerns about the adoption of generative AI center on permission, privacy, ethics, and data security (Yogesh et al., 2023). This has led to discussion about the need for and structure of human oversight, which Sterz et al. (2024) define as the management of a system by at least one supervising human who has the authority to control or change its actions or outcomes. Because one goal of human oversight is to mitigate risks, Sterz et al. (2024) propose that effective human oversight resembles moral responsibility combined with appropriate intentions.
Several emerging models of human-AI collaboration have been identified in the literature. Chiu (2023) introduced an Educational Collaboration Framework tailored for self-regulated learning, delineating distinct roles for AI and human educators. In this framework, AI is categorized as an observer, enabler, alternative intelligence provider, and content creator, while teachers are assigned complementary roles. Mosqueira-Rey et al. (2024) describe the Human-in-the-Loop model, which emphasizes human oversight and refinement of AI outputs, ensuring significant human control, particularly in high-stakes decision-making contexts. In this model, artificial intelligence is guided to produce preliminary materials that are subsequently evaluated and modified by specialists.
The UTAUT Theoretical Framework
This study employs the modified Unified Theory of Acceptance and Use of Technology (UTAUT) model (Figure 1) proposed by Dwivedi et al. (2019). The UTAUT model is particularly relevant in examining instructional designers’ adoption of Generative AI as it provides a comprehensive framework for understanding technology acceptance in the ID context.
Figure 1
Modified UTAUT Model

The modified UTAUT model includes seven constructs to explain both intentions to use technology and the resulting use behavior. This study focuses on a subset of four constructs that are most relevant to instructional designers’ acceptance and use of AI. The subset includes:
Performance Expectancy: This construct captures ID expectations about how AI tools will enhance job performance in terms of efficiency gains, quality improvements, and workflow automation. Performance Expectancy influences whether instructional designers perceive AI as beneficial enough to incorporate into their design processes.
Effort Expectancy: This represents IDs’ perceived ease of integrating generative AI into existing workflows. This includes learning to create effective prompts, understanding AI capabilities and limitations, and adapting design processes to incorporate AI tools.
Facilitating Conditions: This encompasses the organizational, technical, and resource factors that support or hinder instructional designers' use of generative AI, including institutional policies on AI use, availability of training resources, technical infrastructure, and leadership support for AI integration.
Attitude: This represents instructional designers' overall feeling toward using generative AI in their work. This includes their comfort level with AI-generated content, ethical concerns about AI use, and general feelings about AI as a design partner rather than a replacement. Attitude significantly influences intentions to use AI and actual use patterns.
Table 1 shows how the four UTAUT components are connected to our research questions, clarifying how instructional designers adopt and use AI tools. Also included is 'Actual Use of AI' (Use Behavior in Figure 1), as it represents the adoption and use of the new technology.
Table 1
Core UTAUT Components and Related Research Questions
| Core UTAUT Components | Definition | Related Research Question(s) |
|---|---|---|
| Performance Expectancy | The degree to which an individual believes using generative AI will enhance job performance | RQ1 |
| Actual Use of AI | Resulting use behavior | RQ1 |
| Effort Expectancy | The degree of ease associated with using generative AI | RQ2, RQ4 |
| Facilitating Conditions | The degree to which an individual believes organizational and technical infrastructure supports generative AI use | RQ3, RQ4 |
| Attitude | An overall feeling toward using AI in ID work; significantly influences both intentions to use AI and actual use patterns | RQ3, all RQs |
Methods
Approach and Timeline
A survey was used to gather responses from 144 instructional designers. Guided by the UTAUT technology adoption framework, a focus group was then conducted with a subset of the survey respondents (n = 6). Our sample was larger than that of similar studies, providing a broader view of how instructional designers utilize AI across various workplaces. The survey questions also followed the UTAUT components (Table 1), which helped connect the data to theory.
The focus group data helped interpret the survey results by showing how instructional designers balance efficiency gains, ease of use, organizational support, and their attitudes toward AI in real work situations. Using both methods offered a broader and deeper picture than using either technique alone.
This study was conducted in the first quarter of 2024, capturing the experiences of instructional designers during a period of rapid evolution of generative AI, approximately 15 months after the public release of ChatGPT.
Instrument
No standardized instrument existed for surveying instructional designers about their adoption of generative AI. A new instrument was therefore developed using the 4PADAFE instructional design matrix (Academic Project, Strategic Plan, Instructional Planning, and Instructional Material Production [4P]; Teaching Action [AD]; Formative Adjustments [AF]; and Evaluation [E]; Ruiz-Rojas et al., 2023), which offers a structured approach to the entire instructional design workflow. The instrument was developed using a three-phase process:
Phase 1: Initial Development
The 17-item survey was developed collaboratively by three researchers with expertise in instructional design and technology. Survey questions were mapped to both the 4PADAFE instructional design matrix components and the modified UTAUT constructs (Dwivedi et al., 2019) as shown in Appendix C. This ensured that the instrument comprehensively addressed both the practical aspects of instructional design workflow and the theory and scope of technology acceptance.
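To illustrate how such a dual mapping can be represented, the sketch below shows one minimal, hypothetical crosswalk between survey items, 4PADAFE components, and UTAUT constructs. The item labels and assignments are illustrative only; the study's actual mapping appears in Appendix C.

```python
# Illustrative only: a minimal representation of an item-to-framework mapping of the
# kind described in Phase 1. Item labels and assignments are hypothetical; the
# study's actual mapping is documented in Appendix C.
SURVEY_ITEM_MAP = {
    "Q4_efficiency_expectation": {"4PADAFE": "Instructional Planning", "UTAUT": "Performance Expectancy"},
    "Q6_tasks_automated": {"4PADAFE": "Instructional Material Production", "UTAUT": "Actual Use of AI"},
    "Q8_time_savings": {"4PADAFE": "Formative Adjustments", "UTAUT": "Effort Expectancy"},
    "Q10_barriers": {"4PADAFE": "Teaching Action", "UTAUT": "Facilitating Conditions"},
}

def items_for_construct(construct):
    """Return the survey items mapped to a given UTAUT construct."""
    return [item for item, tags in SURVEY_ITEM_MAP.items() if tags["UTAUT"] == construct]

print(items_for_construct("Performance Expectancy"))  # -> ['Q4_efficiency_expectation']
```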
Phase 2: Expert Review and Validation
The draft instrument underwent review and validation in which ID experts evaluated the questionnaire and provided feedback on question clarity, relevance, and alignment with both the 4PADAFE framework and UTAUT constructs. Adjustments and revisions were made based on feedback to ensure questions were clear and appropriate for the target population.
Phase 3: Pilot Testing
The revised survey was pilot-tested with a small sample of instructional designers who matched our target population, but that data was excluded from the final study. The IDs completed the survey and provided feedback on the clarity of questions, the flow of the survey, and the completion time.
The survey was administered using Qualtrics software. The second instrument, a six-question focus group protocol (Appendix B), was developed based on preliminary analysis of the survey results. The instrument was also aligned with both the research questions and UTAUT components, as shown in Appendix D. The focus group instrument was reviewed for clarity and alignment with research objectives before being used to conduct the Zoom focus group.
Participant Selection and Demographics
Qualtrics Survey
Instructional designers (IDs), including managers and supervisors, 144 in total, made up the study sample. Convenience and snowball sampling were used to locate participants in instructional design communities, including LinkedIn, professional networks, social media, and instructional design and technology programs. The survey participants' experience ranged from fewer than five years to more than twenty years. The survey participants worked in government agencies, corporations, K-12 education, higher education, and ID consulting. Survey responses were collected from 42 U.S. states in addition to India and Canada.
Convenience sampling was used to recruit the focus group, which consisted of six instructional designers drawn from the survey population. All participants had completed or were in the process of completing graduate degrees from two different instructional design and technology programs at the same university within the past decade.
Table 2
Demographic Information for Qualtrics Survey
| Roles | Percentage (Count) |
|---|---|
| Instructional Designer | 78% (82) |
| ID Supervisor/Technologist/Specialist | 15% (22) |
| Other (faculty, trainers) | 7% (15) |
| Organizations | |
| Higher Education | 71% (93) |
| K-12 | 14% (12) |
| Corporate | 10% (11) |
| Government & Consulting | 5% (7) |
Focus Group
The focus group included professionals between the ages of 28 and 42 with varying levels of experience in instructional design, ranging from two to 15 years. Focus group participants, who lived in Florida, Tennessee, and three different cities in Alabama, represented a cross-section of work environments: one corporate instructional designer, three higher education instructional designers, one instructional designer from a non-profit organization, and one instructional design intern. This diversity of contexts provided significant insights into the adoption of generative AI across various sectors.
Procedures and Data Collection
Qualtrics Survey
The 17-item Qualtrics survey included multiple-choice and percentage-based questions about AI adoption rates, experiences, difficulties, and best practices. Specific topics included the frequency and timing of instructional designers' use, their outlooks on efficiency, the impact on quality, ethics concerns, comfort level, and their outlook on future AI use.
Focus Group
The focus group was conducted using Zoom and lasted approximately one hour. All six participants met simultaneously with the researchers, allowing for dynamic interaction and discussion. The session was recorded with participant consent and later transcribed using Rev.com. To ensure accuracy, one of the focus group participants reviewed the complete transcript before analysis began.
The researchers employed a semi-structured interview format, which facilitated a natural conversation flow while ensuring that all critical areas were addressed. Participants were encouraged to respond to each other's comments, creating a collaborative discussion environment that yielded rich qualitative insights. Examples of focus group questions include: "To what extent have AI tools allowed you to be more efficient in your ID work," "How has using AI impacted the quality of courses or training," and "What best practices and guidelines have you decided to follow in your ID practice to ensure that you are using Gen AI ethically?"
Data Analysis
Qualtrics Survey
The Likert scale and frequency items in the Qualtrics survey captured the adoption and current use of Artificial Intelligence in instructional design among 144 instructional designers, including their attitudes and perceptions regarding efficiency. The Qualtrics analytics platform generated descriptive statistics, providing frequency distributions and percentages for the categorical and ordinal data from the Likert scale responses. The descriptive statistics were mapped to the UTAUT framework constructs and synthesized with the qualitative focus group data to comprehensively address the four research questions.
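As an illustration of this kind of descriptive analysis (the study itself relied on the Qualtrics analytics platform), the following sketch computes frequency distributions and percentages for hypothetical Likert-style responses and tags each item with the UTAUT construct it was mapped to. The item names and response data are invented for demonstration.

```python
# Illustrative sketch, not the authors' Qualtrics pipeline: frequency distributions
# and percentages for Likert-style items, grouped by mapped UTAUT construct.
import pandas as pd

# One row per participant, one column per survey item (hypothetical data).
responses = pd.DataFrame({
    "Q8_time_savings": ["Moderate", "Significant", "None", "Very significant", "Moderate"],
    "comfort_without_verification": ["Not at all", "Not at all", "Slightly", "Moderately", "Not at all"],
})

item_to_construct = {
    "Q8_time_savings": "Effort Expectancy",
    "comfort_without_verification": "Attitude",
}

for item, construct in item_to_construct.items():
    counts = responses[item].value_counts()
    percents = (counts / len(responses) * 100).round(1)
    summary = pd.DataFrame({"count": counts, "percent": percents})
    print(f"\n{item} ({construct})")
    print(summary)
```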
Focus Group
The focus group audio file, capturing the discussion of the six instructional designers, was transcribed verbatim using Rev.com. Author #1 and Author #2 reviewed the transcript independently for accuracy and to become familiar with the data. One focus group participant performed member checking to confirm that the transcript accurately represented the discussion.
Two researchers then analyzed the data independently, with Author #1 using NVivo software and Author #2 coding manually. This dual-coding approach helped minimize individual researcher bias and enhanced the credibility of findings. Thematic analysis was used to identify patterns and themes from the data (Braun & Clarke, 2006). An inductive approach was used, in which codes and themes were derived from the data rather than using a pre-existing coding framework. Both researchers independently developed initial codes, searched for themes among these codes, and then met to review and refine the themes until consensus was reached. Throughout the analysis process, researchers maintained reflexive awareness of their positions and potential biases related to generative AI in instructional design. The analysis revealed eight codes that were refined into five preliminary themes. These themes were subsequently mapped to UTAUT constructs and the research questions (Appendix E).
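The sketch below illustrates, with hypothetical excerpt IDs and code labels, how a simple comparison of two coders' assignments could be checked before the consensus discussion. It is not the authors' procedure, which combined NVivo coding, manual coding, and reflexive, consensus-based theme refinement.

```python
# Illustrative sketch of a dual-coding comparison; excerpts and code labels are
# hypothetical. Disagreements are flagged for the coders' consensus discussion.
coder1 = {
    "excerpt_01": "efficiency gains",
    "excerpt_02": "prompt engineering struggle",
    "excerpt_03": "organizational barriers",
    "excerpt_04": "quality verification",
}
coder2 = {
    "excerpt_01": "efficiency gains",
    "excerpt_02": "prompt engineering struggle",
    "excerpt_03": "workaround strategies",
    "excerpt_04": "quality verification",
}

agreements = sum(coder1[k] == coder2[k] for k in coder1)
percent_agreement = 100 * agreements / len(coder1)
to_discuss = [k for k in coder1 if coder1[k] != coder2[k]]

print(f"Simple percent agreement: {percent_agreement:.0f}%")
print("Excerpts flagged for consensus discussion:", to_discuss)
```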
UTAUT Construct Relationship Analysis
Our systematic mapping and cross-referencing approach used quantitative and qualitative data to evaluate UTAUT construct connections. Three analysis techniques yielded the construct correlations in the Results section:
Quantitative Analysis. Survey responses were coded using Table 1's UTAUT framework. We found construct linkages by analyzing correlational patterns between survey items. To construct the Performance Expectancy-Actual Use link, participants reporting strong efficiency improvements were cross-referenced with their AI tool adoption rates and usage frequency.
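A minimal sketch of this kind of cross-referencing is shown below; the column names and rows are invented for illustration and do not come from the study's dataset.

```python
# Illustrative sketch: tabulating reported efficiency gains (survey Question 8)
# against usage frequency. Data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "efficiency_gain": ["Significant", "Moderate", "None", "Very significant", "Significant"],
    "usage_frequency": ["Frequently", "Frequently", "Rarely", "Very frequently", "Frequently"],
})

# A concentration of frequent users among respondents reporting larger efficiency
# gains is read as a Performance Expectancy -> Actual Use pattern.
print(pd.crosstab(df["efficiency_gain"], df["usage_frequency"]))
```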
Qualitative Thematic Map. The same UTAUT framework was used to code focus group transcripts. When participants mentioned multiple UTAUT components in the same response or context, thematic co-occurrence analysis revealed construct linkages. For example, participants who described both efficiency gains and prompt engineering challenges exhibited a Performance Expectancy-Effort Expectancy relationship pattern.
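The following sketch illustrates the co-occurrence counting described here, using hypothetical coded responses rather than the actual focus group data.

```python
# Illustrative sketch of thematic co-occurrence: counting how often pairs of UTAUT
# construct codes appear within the same focus group response (hypothetical data).
from collections import Counter
from itertools import combinations

coded_responses = [
    {"Performance Expectancy", "Effort Expectancy"},   # efficiency gains + prompt struggles
    {"Facilitating Conditions", "Attitude"},            # organizational barriers + workarounds
    {"Performance Expectancy", "Actual Use of AI"},
    {"Performance Expectancy", "Effort Expectancy"},
]

co_occurrence = Counter()
for constructs in coded_responses:
    for pair in combinations(sorted(constructs), 2):
        co_occurrence[pair] += 1

for pair, count in co_occurrence.most_common():
    print(pair, count)
```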
Data Triangulation. Triangulation between quantitative and qualitative explanations of the same events validated construct connections. Survey data on adoption patterns was matched to focus group discussions. Only consistent correlations across both data sets were presented as findings. For instance, (1) the relationship between Facilitating Conditions and Attitude was established when organizational barriers (survey Question 10) consistently co-occurred with discussion about workaround strategies in the focus group; (2) the complex Effort Expectancy relationship was identified when participants reported both time savings (survey Question 8) and prompt engineering challenges (focus group themes). Correlating high-adoption tasks (survey Question 6) with efficiency expectations (survey Questions 4 and 8) confirmed use behavior.
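As a simplified illustration of this triangulation rule, the sketch below retains only the construct relationships present in both evidence sets; the relationship labels mirror the Results section, but the sets themselves are hypothetical.

```python
# Illustrative sketch of the triangulation rule: only construct relationships
# supported by both data sources are retained as findings (hypothetical sets).
survey_relationships = {
    ("Performance Expectancy", "Actual Use of AI"),
    ("Effort Expectancy", "Performance Expectancy"),
    ("Facilitating Conditions", "Attitude"),
}
focus_group_relationships = {
    ("Performance Expectancy", "Actual Use of AI"),
    ("Facilitating Conditions", "Attitude"),
}

reported_findings = survey_relationships & focus_group_relationships
print("Relationships retained as findings:", reported_findings)
```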
Integrating Qualitative and Quantitative Data for Analysis
The authors used a convergent approach to integrate the quantitative and qualitative results. Survey data provided general patterns of use and frequency distributions. The focus group discussions offered a deeper understanding of the context and specific examples of how to apply the information.
The integration process had three steps: (1) locating results in which both data sources supported similar conclusions (for example, efficiency gains or quality concerns); (2) using qualitative data to explain quantitative patterns (for example, focus group participants explaining why specific tasks had high adoption rates); and (3) finding complementary insights in which the focus group shared information that was not included in the survey questions (for example, specific organizational workarounds or detailed quality control processes). As a result, the interpretation of both quantitative and qualitative data was improved, providing a clearer picture of how AI is being utilized in instructional design.
Results
RQ 1: How are Instructional Designers Using Generative AI to Automate Aspects of the ID Workflow?
This research question examines two UTAUT components: Actual Use of AI (the outcome variable) and Performance Expectancy (an adoption driver).
AI Use Patterns: Adoption Levels and Integration
The study revealed high adoption rates of generative AI tools among instructional designers. ChatGPT emerged as the dominant tool, with 83% of respondents (n = 93) reporting use of GPT-3 or GPT-4. Most IDs also used generative AI frequently: 64% reported frequent or very frequent use, and only 12% (n = 18) reported not using any AI tools. Instructional designers strategically utilized AI in their workflows, with tools most commonly employed at the beginning of the ID process (43%), followed by the middle (38%), and with minimal use at the end (7%). This suggests instructional designers have identified the ideal integration points for AI assistance.
Performance Expectancy: Task-Specific Applications
Performance Expectancy, or the belief that AI will improve job performance, was identified in several specific tasks selected by IDs (Table 3). Focus group data reinforced these findings, with participants describing the use of AI for ideation, content production, quality control, and the automation of tasks such as voiceovers, language translation, and writing prompts. One focus group participant noted, "Using AI tools helped me to be able to spread myself a little bit further to accomplish more."
Table 3
AI Tasks
| Performance Expectancy Tasks | Percentage (Count) |
|---|---|
| Drafting Learning Objectives | 64% (70) |
| Developing Assessments | 56% (54) |
| Course Structure Outlines | 48% (45) |
| Content Research | 47% (41) |
| Creating Prototypes | 40% (36) |
| Generating Feedback | 31% (33) |
Other tasks specified by survey participants included aligning objectives, editing and revising content, brainstorming, providing resource recommendations, incorporating interactivity elements, and summarizing content.
Construct Relationship: Performance Expectancy Drives Actual Use
Cross-referencing survey adoption rates (Question 6) with focus group efficiency discussions revealed a clear relationship between Performance Expectancy (the belief that AI will enhance job performance) and actual use patterns. Tasks with the highest adoption rates (learning objectives, assessments) align with areas where instructional designers perceive the greatest efficiency gains. The 43% who use AI at the beginning of their ID process corresponds to high-expectancy tasks, such as ideation and objective drafting, that occur early in design workflows. In their discussion, focus group participants specifically mentioned efficiency benefits in actual use cases, with one participant noting, "I literally have made 82 slides worth of information... it took me an hour... So it saved me a ton of time."
This Performance Expectancy-Actual Use relationship was validated through triangulation: survey data showing 83% ChatGPT adoption correlated with focus group descriptions of specific efficiency gains in ideation and content creation tasks.
RQ 2: What Opportunities or Advantages Have Instructional Designers Discovered when Using Generative AI During the ID Workflow?
This research question examines how the effort required to use AI tools affects both the advantages instructional designers experience and the challenges they encounter, thereby helping to fill the gap in theoretical understanding.
Effort Expectancy: Ease of Use Creating Efficiency Advantages
Analysis of survey responses (Question 8) combined with focus group time-saving examples revealed that Effort Expectancy directly influenced perceived advantages, with 67% reporting moderate to significant efficiency gains. This finding demonstrates how reduced effort translates to perceived advantages, with 11% experiencing very significant efficiency gains, 23% reporting significant gains, and 32% noting moderate improvements. Only 13% saw no efficiency benefits.
Human-AI Partnership: Beyond Tool Use to Efficient Workflows
Addressing the Human-AI Partnership Gap, findings reveal advanced workflow patterns that extend beyond simple tool use. Survey data indicated that 58% of participants agreed AI improved course quality, while focus group participants described enjoying more effective content development processes. One participant's experience exemplifies this partnership: "I spent 30-40 hours creating three course scenarios... I then prompted AI to design something similar; I had the same material (and similar quality) in less than an hour."
This approach allows instructional designers to "work on more creative and strategic projects" while AI handles routine tasks, representing a true human-AI partnership rather than simple tool utilization. The capabilities provided by AI proved valuable for instructional designers who needed additional resources. As one participant shared, "We are a small department with two instructional designers serving a faculty of 500... AI has allowed me to develop content more expediently and efficiently."
Domain-Specific Applications: ID-Unique Advantages and Challenges
Domain-specific advantages emerged clearly in the findings, addressing the Domain-Specific Implementation Gap by revealing instructional design-unique benefits. Participants described using AI to modify content, including making it more student-friendly and accessible, as well as quickly generating multiple variations of an assessment.
Survey responses highlighted domain-specific efficiency gains with "improved efficiency and time savings" (32 mentions), "faster drafting and content development" (25 mentions), and "idea generation and brainstorming assistance" (18 mentions), representing the most frequently cited advantages.
However, Effort Expectancy challenges also emerged, with most centered around prompt engineering. Focus group participants noted that "working on prompt engineering is key" and "developing prompts that work adequately has been a struggle," with one participant observing it sometimes "takes longer to decide the prompt vs just writing the outline myself." These challenges underscore that while AI tools offer advantages, the effort required to achieve them varies significantly based on task complexity and user skill level.
Construct Relationships: Less Effort Enables Performance Benefits
Analysis of survey responses regarding “ease of use” in conjunction with focus group discussion around task complexity revealed that optimizing Effort Expectancy leads to enhanced performance outcomes for instructional designers. Tasks with clear procedures (closed captioning, translation) showed the highest ease of use, while complex pedagogical tasks required more effort despite the advantages. This underscores that Effort Expectancy can moderate the relationship between Performance Expectancy and Actual AI Use.
RQ 3: What Best Practices Have Instructional Designers Adopted when Using Generative AI During the ID Workflow?
This research question examines best practices through the UTAUT constructs of Facilitating Conditions and Attitude to understand how organizational support and individual outlooks influence the implementation of ethical AI.
Facilitating Conditions: Organizational Context and Support
Analysis of survey best practices data (Question 11) alongside focus group organizational discussions revealed that Facilitating Conditions were critical in determining and adopting best AI practices. Forty-two percent of survey participants (n=61) used multiple best practices simultaneously, with the most common being combining AI with human expertise (23%) and verifying AI content for accuracy (21%). Organizational barriers significantly impacted the adoption and utilization of AI in instructional design practices. One focus group participant explained, "I think probably my biggest challenge is organizational. It's been hard to get our CEO on board with using it, so I can't use it for course creation." This barrier led to creative workarounds, such as using personal devices or generic terminology to protect sensitive information while still leveraging generative AI.
Attitude: Professional Stance Toward AI Integration
Attitude, or instructional designers' outlook on AI use, shaped how best practices were created and implemented. Participants demonstrated what one called "guarded comfort," balancing enthusiasm with professional responsibility. This cautious Attitude resulted in several approaches. One participant shared, "I use multiple of these best practices, including verifying accuracy of content, looking for bias in responses, engineering prompts to limit that bias, providing transparency." The participants’ emphasis on transparency was strong, particularly regarding disclosure to stakeholders: "I feel like it's important when I'm giving content back to my subject matter experts to review to point out the sections. This was done by AI…you need to review this very carefully."
Human-AI Partnership Best Practices
In relation to the Human-AI Partnership Gap, participants developed procedures that maintained human control while maximizing the benefits of AI. Privacy protection strategies included using "acronyms that I know what it means, but it's not really out there anywhere else" and participants "changing up the letters" to protect confidential information. The most cited practice, combining AI with human expertise (23%), reinforces that effective partnership requires active human judgment rather than passive acceptance of AI output.
Domain-Specific Implementation Practices
Domain-specific best practices emerged around maintaining pedagogical integrity, addressing the Implementation Gap. Participants noted that AI "does not always demonstrate good pedagogy strategies. It needs to be heavily edited to be used," leading to practices that prioritize educational soundness over efficiency. These included careful review for instructional design principles, transparency with subject matter experts about AI involvement, and maintaining an appropriate tone for learner audiences.
Construct Relationships: Facilitating Conditions Enabling Attitudes
Cross-referencing survey responses about organizational barriers with focus group discussions of workaround strategies revealed that the interaction between Facilitating Conditions and Attitude directly influenced individual practices. Participants with positive Facilitating Conditions developed more advanced and complex AI processes, while those facing AI organizational restrictions relied on personal ethical frameworks and individual workarounds. This validates the UTAUT model’s proposition that Facilitating Conditions can impact the effect of Attitude on actual use behaviors.
RQ 4: What Challenges Have Instructional Designers Experienced when Using Generative AI During the ID Workflow?
This research question examines challenges through the UTAUT constructs of Effort Expectancy and Facilitating Conditions, revealing barriers that impact AI adoption and effective use in instructional design contexts.
Effort Expectancy: Technical and Skill-Based Challenges
Analysis of survey Question 10 responses, combined with focus group prompt engineering discussions, revealed that Effort Expectancy challenges emerged as significant barriers for instructional designers. Survey data identified the most common challenges as verifying the accuracy of AI outputs (19 mentions), difficulty engineering effective prompts (11 mentions), and a lack of personalization/customization (7 mentions). Focus group participants discussed their challenges with prompt engineering: "Developing prompts that work adequately has been a struggle," with another noting, "I find it takes longer to decide the prompt vs just writing the outline myself." This suggests that the effort required to use AI effectively sometimes exceeds the effort of traditional methods, creating a negative Effort Expectancy that could inhibit adoption.
Output quality challenges further complicated Effort Expectancy. Multiple participants reported AI's tendency toward repetitive language, with one participant noting: "It really likes the word delve. It uses that word in almost every sentence." Another observed that responses often seem "clearly manufactured... way too elevated," requiring significant editing effort. The iterative nature of achieving satisfactory results created frustration for some participants: "You have to start over several times before you get a product... that you were looking for."
Facilitating Conditions: Organizational and Resource Barriers
Analysis of survey responses, in conjunction with focus group discussions on institutional barriers, indicated that the presence or absence of Facilitating Conditions significantly contributed to AI implementation. Organizational barriers emerged as the primary obstacle, with one participant stating: "I think probably my biggest challenge is organizational. It's been hard to get our CEO on board with using it, so I can't use it for course creation." Privacy concerns and institutional policies severely limited the AI use possibilities. As participants explained, the inability to input company names, role titles, or proprietary information meant "it's limited on both sides"—both in what could be input and the quality of output received.
Financial constraints (6 mentions) and lack of institutional support created additional barriers. One participant noted frustration with institutional limitations: "I would really love to be able to get some paid versions of things, but I have that limitation that I can't do that, and it's frustrating." These Facilitating Conditions challenges forced workarounds like using free versions with limited capabilities, reducing the potential benefits of AI integration.
Domain-Specific Implementation Challenges
Addressing the Domain-Specific Implementation Gap, unique instructional design challenges emerged around pedagogical quality and educational appropriateness. Participants identified specific limitations: "The output is too generic, and (AI) lacks the emotional intelligence to provide actionable suggestions." More critically, AI "does not always demonstrate good pedagogy strategies," requiring heavy editing before use. This domain-specific challenge suggests that while AI may excel at general content creation, it struggles with the nuanced pedagogical understanding essential to effective instructional design.
Human-AI Partnership Challenges
The Human-AI Partnership Gap manifested in trust and reliability issues. The data revealed fundamental concerns about accuracy, with 49% of participants reporting no comfort relying on AI without verification, and only 6% feeling comfortable or very comfortable with AI outputs. Focus group participants emphasized the need for constant vigilance: "I have to review it because I can't trust it 100%." This lack of trust creates additional workload, as one participant noted: "It's like a partner that is sometimes helpful and sometimes very annoying."
Construct Relationships: Compounding Challenges
Analysis of survey data regarding tool limitations, combined with focus group discussions on organizational constraints, revealed that the interplay between Effort Expectancy and Facilitating Conditions led to increased challenges. Poor Facilitating Conditions (limited tools, organizational restrictions) increased the effort required to achieve results, while high effort requirements discouraged use even when Facilitating Conditions were adequate. As one participant summarized: "Gen AI is designed to be convincing, not correct. It takes me more time to read through and verify anything created by AI than it would if I just do it myself." This relationship demonstrates how multiple UTAUT constructs interact to create adoption barriers, validating the framework's utility in understanding complex technology integration challenges.
Discussion
This study advances the understanding of generative AI adoption in instructional design by addressing three critical gaps in the existing literature through the lens of the modified UTAUT framework. Our findings both confirm and extend prior research while providing theoretical grounding for understanding adoption patterns, human-AI partnerships, and domain-specific implementation challenges.
Addressing the Theoretical Framework Gap: UTAUT Analysis of AI Adoption
Our application of the modified UTAUT framework reveals how specific constructs drive AI adoption among instructional designers, moving beyond the descriptive findings of previous studies. Performance Expectancy emerged as the primary driver of adoption, with 83% of participants using ChatGPT and 64% reporting frequent use. This high adoption rate, coupled with ID workflow integration (43% at the beginning, 38% in the middle of the design process), demonstrates that instructional designers perceive AI as significantly enhancing their job performance—a finding that aligns with but theoretically grounds the efficiency benefits noted by Gibson (2023) and Bozkurt and Sharma (2023).
Effort Expectancy revealed a complex relationship with adoption. While 67% reported moderate to significant time savings, challenges with prompt engineering created barriers, as noted by one respondent: "I find it takes longer to decide the prompt vs. just writing the outline myself." This paradox, in which tools designed to reduce effort sometimes increase it, extends Luo et al.'s (2024) findings on the importance of prompt engineering by demonstrating how Effort Expectancy can both facilitate and hinder adoption depending on task complexity.
Facilitating Conditions proved critical in shaping adoption patterns. Organizational barriers, privacy concerns, and lack of institutional support created significant obstacles: "I think probably my biggest challenge is organizational. It's been hard to get our CEO on board with using it." This finding highlights how the organizational context, often overlooked in previous studies, significantly influences technology adoption, validating the UTAUT model's inclusion of Facilitating Conditions as a core construct.
Attitude emerged as a moderating force, with participants demonstrating "guarded comfort." Participants acknowledged benefits while maintaining professional skepticism. The finding that 49% were not at all comfortable relying on AI without verification reveals how Attitude shapes use patterns, even when Performance Expectancy is high.
Addressing the Human-AI Partnership Gap: Beyond Simple Tool Use
Our findings reveal sophisticated patterns of human-AI collaboration that extend beyond the simple tool use documented in previous research. Participants developed nuanced partnership frameworks where AI serves as an "ideation partner" rather than a replacement: "AI is a good brainstorming partner. It's great for editing my work and changing the tone. You still need people who are knowledgeable to make sure AI isn't hallucinating."
Human supervision extends beyond individual verification to include collaborative review processes with subject matter experts: "I feel like it's important when I'm giving content back to my subject matter experts to review to point out the sections. This was done by AI." This multi-layered oversight approach represents a progression from early AI adoption toward established human-AI workflows.
The partnership also involves sophisticated adaptation strategies. Participants modified AI outputs for tone, style, and pedagogical appropriateness: "The tone and writing style of generative AI tools isn't suitable for courses that need a more warm, welcoming tone from the instructor." This active reshaping of AI content demonstrates true collaboration rather than passive acceptance.
Addressing the Domain-Specific Implementation Gap: ID-Unique Challenges
The study identifies instructional design-specific challenges that have not been adequately examined in prior research. Pedagogical integrity emerged as a critical concern unique to this domain: "It does not always demonstrate good pedagogy strategies. It needs to be heavily edited to be used." This finding extends beyond general AI limitations to reveal how domain expertise remains essential for practical instructional design work.
Emotional intelligence gaps presented challenges for instructional design: "The output is too generic, and AI lacks the emotional intelligence to provide actionable suggestions." Focus group discussions revealed subtle challenges in maintaining an appropriate emotional tone and pedagogical soundness, issues that survey questions did not fully address. Participants noted AI's tendency toward overly formal language, which requires adjustment for "warm, welcoming" instructional contexts. The observation that AI "lacks the emotional intelligence to provide actionable suggestions" highlights domain-specific limitations requiring human intervention. Unlike technical writing or content generation, instructional design requires a nuanced understanding of learner emotions, motivation, and engagement, areas in which current AI tools fall short.
Domain-specific prompt engineering challenges also emerged. Participants found that creating effective prompts for instructional design tasks required specialized knowledge, revealing that effective prompts must incorporate pedagogical frameworks, learning theories, and assessment principles that general AI systems struggle to understand without extensive refinement.
Integration of Quantitative and Qualitative Findings
The integration of both data types yielded complementary insights that neither method could have provided alone. While survey data established broad adoption patterns (83% use of ChatGPT, 67% efficiency gains), focus group discussions revealed more nuanced insights. For instance, the survey found that 58% agreed AI improved course quality; however, the focus group further explained how quality improvement occurred through "more varied learning experiences" and overcoming "creative blocks."
Theoretical Implications
The UTAUT analysis shows that instructional designers adopt AI in predictable ways, with some unique differences specific to their field. Performance Expectancy strongly drives adoption, confirming that UTAUT is effective in understanding technology acceptance. However, Effort Expectancy played a more complex role than expected, sometimes helping and sometimes hindering adoption. This suggests the model may need adjustments for professional settings. The significant impact of organizational support and resources confirms that these factors are crucial for AI adoption.
Alignment with and Extension of Prior Research
While the study’s findings confirm the AI efficiency benefits and automation potential highlighted in prior research (Weng & Chiu, 2023; Gibson, 2023), the study extends this work by revealing the strategies instructional designers use to streamline and improve the ID process. The identification of domain-specific issues extends Luo et al.'s (2024) general observations regarding prompt engineering, highlighting unique considerations relevant to instructional design.
Unique Focus Group Insights
While the survey revealed broad patterns of AI acceptance, the focus group provided additional contextual factors that broadened the understanding of technology use and adoption in instructional design.
Organizational Context and Workarounds - Focus group participants shared that company policies significantly influenced their AI use patterns. Some faced complete bans ("I can't use it for course creation"), while others could experiment with multiple tools. The bans resulted in creative workarounds, such as using personal devices, employing generic terminology to protect sensitive information, and developing "acronyms that I know what it means, but it's not really out there anywhere else." These strategies demonstrate how professionals can overcome institutional limitations to access technology that they believe will enhance efficiency and save time on specific tasks.
Evolving Skill Development - The focus group revealed learning progress that was not captured in the survey data. Participants described advancing from initial frustration ("Developing prompts that work adequately has been a struggle") to developing sophisticated strategies. One participant noted that while prompt creation initially took longer than traditional methods, they eventually could generate "82 slides worth of information" in an hour. This progression suggests that Effort Expectancy improves as users develop domain-specific AI skills.
Quality Control Processes - Participants in the focus group outlined comprehensive quality control workflows that incorporate multi-stage review processes, personal verification, subject matter expert review, and iterative content refinement. Participants highlighted the importance of clearly identifying AI-generated content, with one remarking to SMEs: "This was produced by AI…it is not your content...you must review this very carefully." The focus on transparency and professional ethics, with a preference for educational integrity over efficiency, was not reflected in the survey responses.
Implications for Instructional Design Practice
Grounded in the modified UTAUT theory and addressing several literature gaps, this study’s findings offer actionable implications for instructional design practice:
1. Strategic Task Allocation: The findings suggest IDs should strategically determine the use of AI based on task characteristics and Performance Expectancy. High-efficiency gains are likely with structured tasks (such as learning objectives and assessments), while complex pedagogical decisions require greater human oversight.
2. Prompt Engineering as Core Competency: The Effort Expectancy challenges revealed in this study indicate that prompt engineering represents a core competency for instructional designers. Upskilling should incorporate domain-specific prompt engineering training that addresses pedagogical frameworks, assessment principles, and instructional design theories.
3. Organizational Infrastructure Development: The role of Facilitating Conditions suggests that organizations must develop comprehensive AI support. This includes use policies, privacy protocols, financial resources for premium tools, and ongoing training opportunities.
4. Human-AI Partnership Protocols: The findings on advanced ID processes suggest the need for formal human-AI partnership conventions. These should include review processes with defined checkpoints, transparency requirements for AI-generated content, and ethical guidelines balancing efficiency with pedagogy.
5. Domain-Specific AI Development: The implementation gap findings highlight the need for AI tools specifically designed for instructional design. Developers should collaborate with instructional designers to create AI systems or agents that understand pedagogical frameworks, learning theories, and assessment principles.
Implications for Future Research
Future research should investigate how UTAUT constructs evolve as instructional designers gain experience with AI. Longitudinal studies could reveal whether Effort Expectancy decreases with skill development and how Attitude shifts with extended AI use. Other studies may explore how models like the Technology-Organization-Environment (TOE) framework complement UTAUT in understanding organizational adoption of AI.
Future research could also examine whether AI-assisted instructional design produces different learning outcomes. Experimental studies comparing traditionally designed and AI-assisted courses could validate the quality improvements participants reported.
Limitations
The study contains several limitations. The sample size of 144 survey respondents and six focus group participants, while providing rich data, may not fully represent the instructional design community. The focus group size (n = 6), although appropriate for qualitative research, limits the depth of organizational diversity represented in the qualitative findings. The sampling method may have attracted instructional designers with a more substantial interest in AI, potentially skewing the results toward more engaged users. With most participants in higher education (71%) in the United States, cross-sector insights are also limited.
The study also relies on self-reported data, which can be subject to bias, particularly regarding AI competencies. The design captures a snapshot during the rapid evolution of AI (Q1 2024), and the findings may not accurately reflect current adoption patterns due to the pace of AI advancement.
Conclusion
This study advances the understanding of generative AI implementation in instructional design by examining three gaps using the modified UTAUT framework. The study participants had adopted AI in various instructional design processes, with 83% using ChatGPT. This adoption represents advanced human-AI partnerships, not simply tool use.
The research also identified that Performance Expectancy strongly drives adoption, as designers intentionally use AI at process junctures to maximize efficiency. Effort Expectancy proved more complicated, especially around prompt engineering, where expertise is needed. Facilitating Conditions, such as organizational support, also affect adoption patterns. Professionals balance enthusiasm for efficiency improvements with accountability for educational quality, adopting a cautious yet comfortable stance.
The findings also show that instructional designers have developed advanced human-AI collaboration frameworks with various review processes, quality control measures, and complex adaptation mechanisms. These collaborations clarify the division of labor between human expertise and AI support, addressing the human-AI partnership gap. Instructional design using AI requires strategic task allocation, prompt engineering competencies, a robust organizational infrastructure, and cooperation procedures. As generative AI capabilities expand, instructional designers must strike a balance between technology and pedagogy, ensuring human oversight while leveraging the efficiency of generative AI.
References
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J. Q., Demszky, D., ... Zitnick, C. L. (2022). On the opportunities and risks of foundation models. AI Magazine, 43(3), 56-65. https://doi.org/10.48550/arXiv.2108.07258
Bozkurt, A., & Sharma, R. C. (2023). Generative AI and prompt engineering: The art of whispering to let the genie out of the algorithmic world. Asian Journal of Distance Education, 18(2). https://doi.org/10.5281/zenodo.8174941
Bozkurt, A. (2023b). Generative AI, synthetic contents, open educational resources (OER), and open educational practices (OEP): A new front in the openness landscape. Open Praxis, 15(3), 178-184. https://doi.org/10.55982/openpraxis.15.3.579
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Chiu, T. K. F. (2023). Empowering K-12 education with AI: Preparing for the future of education and work. Routledge. https://doi.org/10.4324/9781003498377-1
Ch’ng, L. K. (2023). How AI makes its mark on instructional design. Asian Journal of Distance Education, 18(2), 32–41. https://doi.org/10.5281/zenodo.8188576
Creswell, J., & Poth, C. (2018). Qualitative inquiry & research design: Choosing among five approaches (4th ed.). Sage Publications.
Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719–734. https://doi.org/10.1007/s10796-017-9774-y
Gibson, R. (2023, August 14). 10 ways artificial intelligence is transforming instructional design. Educause Review. https://er.educause.edu/articles/2023/8/10-ways-artificial-intelligence-is-transforming-instructional-design
Green, B. (2022). The flaws of policies requiring human oversight of government algorithms. Computer Law & Security Review, 45. https://doi.org/10.1016/j.clsr.2022.105681
Gupta, B., Mufti, T., Sohail, S., & Madsen, D. (2023). ChatGPT: A brief narrative review. Cogent Business & Management, 10(3). https://doi.org/10.1080/23311975.2023.2275851
Hasan, M. A., Noor, N. F. M., Rahman, S. S. B. A., & Rahman, M. M. (2020). The transition from intelligent to affective tutoring system: A review and open issues. IEEE Access, 8. https://doi.org/10.1109/ACCESS.2020.3036990
Hwang, G.-J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles, and research issues of artificial intelligence in education. Computers & Education: Artificial Intelligence, 1. https://doi.org/10.1016/j.caeai.2020.100001
Kittinger, L., & Law, V. (2024). A systematic review of the UTAUT and UTAUT2 among K-12 educators. Journal of Information Technology Education: Research, 23(17). https://doi.org/10.28945/5246
Kumar, S., Gunn, A., Rose, R., Pollard, R., Johnson, M., & Ritzhaupt, D. (2024). The role of instructional designers in the integration of generative artificial intelligence in online and blended learning in higher education. Online Learning, 28(3). https://doi.org/10.24059/olj.v28i3.4501
Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2). https://doi.org/10.1016/j.ijme.2023.100790
Liminary. (2025). Human-AI collaboration: Finding the sweet spot (Part II). Liminary Blog. https://www.liminary.io/blog/human-ai-collaboration-finding-the-sweet-spot-part-ii
Luo, T., Muljana, P. S., Ren, X., & Young, D. (2024). Exploring instructional designers' utilization and perspectives on generative AI tools: A mixed methods study. Educational Technology Research & Development. https://doi.org/10.1007/s11423-024-10437-y
Madunic, J., & Sovulj, M. (2024). Application of ChatGPT in information literacy instructional design. Publications, 12(2). https://doi.org/10.3390/publications12020011
McElheran, K., Li, J. F., Brynjolfsson, E., Kroff, Z., Dinlersoz, E., Foster, L. S., & Zolas, N. (2024). AI adoption in America: Who, what, and where. Journal of Economics & Management Strategy, 33(2), 375–415. https://doi.org/10.1111/jems.12576
Methnani, L., Tubella, A., Dignum, V., & Theodorou, A. (2021). Let me take over: Variable autonomy for meaningful human control. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.737072
Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review, 56(4), 3005-3054. https://doi.org/10.1007/s10462-022-10246-w
Ruiz-Rojas, L. I., Acosta-Vargas, P., De-Moreta-Llovet, J., & Gonzalez-Rodriguez, M. (2023). Empowering education with generative artificial intelligence tools: Approach with an instructional design matrix. Sustainability, 15(15). https://doi.org/10.3390/su151511524
Santana, V. F. (2024). Challenges and opportunities for responsible prompting. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA'24, May 11-16, 2024, Honolulu, HI, USA). ACM, New York, NY, USA. https://doi.org/10.1145/3613905.3636268
Sterz, S., Baum, K., Biewer, S., Hermanns, H., Lauber-Rönsberg, A., Meinel, P., & Langer, M. (2024). On the quest for effectiveness in human oversight: Interdisciplinary perspectives. In ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT '24), June 3–6, 2024, Rio de Janeiro, Brazil. ACM. https://doi.org/10.1145/3630106.3659051
Triberti, S., Di Fuccio, R., Scuotto, C., Marsico, E., & Limone, P. (2024). "Better than my professor?" How to develop artificial intelligence tools for higher education. Frontiers in Artificial Intelligence, 7, 1329605. https://doi.org/10.3389/frai.2024.1329605
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1). https://ssrn.com/abstract=2002388
Weng, X., & Chiu, T. K. F. (2023). Instructional design and learning outcomes of intelligent computer-assisted language learning: Systematic review in the field. Computers and Education: Artificial Intelligence, 4. https://doi.org/10.1016/j.caeai.2022.100117
Wiley, D. (2023). AI, Instructional Design, and OER. Improving Learning. https://opencontent.org/blog/archives/7129
Dwivedi, Y. K., Kshetri, N., Hughes, L., & Slade, E. L. (2023). "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges, and implications of generative conversational AI for research, practice, and policy. International Journal of Information Management, 71. https://doi.org/10.1016/j.ijinfomgt.2023.102642
APPENDIX A
Qualtrics Survey
Q1 Please choose the instructional design title that best suits your current role:
Instructional Designer
Instructional Technologist
Instructional Design Supervisor
eLearning Specialist
Trainer
Other (Please describe) __________________________________________________
Q2 Please choose the instructional design organization that most closely fits your workplace:
Higher education
K-12
Corporate
Non-Profit
Government
Consulting
Q3 Please choose the answer that best describes your instructional design experience:
Fewer than five years
5-10 years
10-15 years
15-20 years
20+ years
Q4 Please choose the range that best describes your current age:
21-29
30-39
40-49
50-59
60+
Q5 State of residence:
________________________________________________________________
Q1. How often do you use generative AI tools like ChatGPT when designing courses and training materials?
Never
Rarely
Sometimes
Frequently
Very frequently
Q2. How useful have you found generative AI tools in helping you create customized and personalized learning content quickly?
Not at all useful
Slightly useful
Moderately useful
Useful
Very useful
Q3. How comfortable are you relying on generative AI tools to provide accurate subject matter content without additional verification on your part?
Not at all comfortable
Somewhat comfortable
Moderately comfortable
Comfortable
Very comfortable
Q4. To what extent do you agree that using generative AI tools improves the overall quality of the courses and training you design?
Strongly disagree
Somewhat disagree
Neither agree nor disagree
Somewhat agree
Strongly agree
Q5. When designing a new course, at what point in your instructional design process do you typically leverage generative AI tools?
Beginning
Middle
End
Do Not Use
Q6. What specific instructional design tasks or activities do you use generative AI tools for? (Select all that apply)
Content research
Drafting learning objectives
Outlining course structure
Developing assessments
Creating prototypes
Generating feedback
Other (Please specify) __________________________________________________
Q7. How concerned are you about ethical issues related to using generative AI tools in instructional design?
Not at all concerned
Slightly concerned
Moderately concerned
Concerned
Very concerned
Q8. To what extent has using generative AI tools allowed you to be more efficient in your instructional design work?
Not at all
Slightly
Moderately
Significantly
Very significantly
Q9. How likely are you to increase your usage of generative AI tools like ChatGPT for instructional design activities over the next year?
Not at all
Rarely
Sometimes
Frequently
Very frequently
Q10. What benefits or challenges have you experienced from using generative AI tools in your instructional design process?
Q11. What best practices or guidelines do you follow when using generative AI tools in your instructional design process?
Verifying the accuracy of AI-generated content
Establishing ethical use policies at my organization
Providing transparency about AI usage to stakeholders/learners
Combining AI with my own expertise and knowledge
Setting realistic expectations on AI's capabilities
Other (Please specify) __________________________________________________
Q12. Which generative AI tool do you utilize the most for instructional design activities? (Select one)
ChatGPT
Anthropic
Jarvis by Anthropic
Claude by Anthropic
Rytr
Copysmith
Sudowrite
Jasper
Other (Please specify) __________________________________________________
APPENDIX B
Focus Group Questions
How have you integrated generative AI tools into your instructional design workflow, and what specific tasks do you use them for most frequently?
To what extent have AI tools allowed you to be more efficient in your ID work?
How has using AI impacted the quality of courses or training?
What best practices and guidelines have you decided to follow in your own ID practice to ensure that you're using Gen AI ethically?
Describe any best practices around the way that you're prompting generative AI.
How has your workplace approached the adoption of gen AI in instructional design? Are you getting the support or resources that you need to leverage AI tools?
APPENDIX C
Survey Questions, Modified UTAUT Model Components, and 4PADAFE Components
| Survey Question | UTAUT Component | 4PADAFE Component |
|---|---|---|
| Q1. How often do you use generative AI tools like ChatGPT when designing courses and training materials? | Attitude | Instructional Planning |
| Q2. How useful have you found generative AI tools in helping you create customized and personalized learning content quickly? | Effort Expectancy | Instructional Material Production |
| Q3. How comfortable are you relying on generative AI tools to provide accurate subject matter content without additional verification on your part? | Attitude | Evaluation |
| Q4. To what extent do you agree that using generative AI tools improves the overall quality of the courses and training you design? | Performance Expectancy | Evaluation |
| Q5. When designing a new course, at what point in your instructional design process do you typically leverage generative AI tools? | Attitude | Instructional Planning |
| Q6. What specific instructional design tasks or activities do you use generative AI tools for? | Performance Expectancy | Instructional Material Production |
| Q7. How concerned are you about ethical issues related to using generative AI tools in instructional design? | Attitude | Evaluation |
| Q8. To what extent has using generative AI tools allowed you to be more efficient in your instructional design work? | Performance Expectancy | Formative Adjustments |
| Q9. How likely are you to increase your usage of generative AI tools like ChatGPT for instructional design activities over the next year? | Attitude | Strategic Plan |
| Q10. What benefits or challenges have you experienced from using generative AI tools in your instructional design process? | Facilitating Conditions | Formative Adjustments |
| Q11. What best practices or guidelines do you follow when using generative AI tools in your instructional design process? | Facilitating Conditions | Teaching Action |
| Q12. Which generative AI tool do you utilize the most for instructional design activities? | Attitude | Instructional Material Production |
APPENDIX D
Focus Group Questions, Modified UTAUT Components, and 4PADAFE Components
| Focus Group Question | UTAUT Component | 4PADAFE Component |
|---|---|---|
| Q1. How have you integrated generative AI tools into your instructional design workflow, and what specific tasks do you use them for most frequently? | Performance Expectancy | Instructional Planning, Instructional Material Production |
| Q2. To what extent have AI tools allowed you to be more efficient in your ID work? | Effort Expectancy | Formative Adjustments |
| Q3. How has using AI impacted the quality of courses or training? | Performance Expectancy | Evaluation |
| Q4. What best practices and guidelines have you decided to follow in your own ID practice to ensure that you're using Gen AI ethically? | Attitude | Teaching Action |
| Q5. Describe any best practices around the way that you're prompting generative AI. | Attitude | Teaching Action, Instructional Material Production |
| Q6. How has your workplace approached the adoption of gen AI in instructional design? Are you getting the support or resources that you need to leverage AI tools? | Facilitating Conditions | Academic Project, Strategic Plan |
APPENDIX E
Thematic Analysis: From Codes to UTAUT Constructs
| Initial Codes | Preliminary Themes | UTAUT Constructs | Research Questions |
|---|---|---|---|
| AI tool integration and use; AI-assisted tasks | Using AI tools for various ID tasks leads to increased efficiency and time savings | Actual Use, Performance Expectancy | RQ1 |
| Efficiency/timesaving; Quality enhancement | AI enhances course quality through improved engagement | Effort Expectancy | RQ2 |
| Challenges and limitations; Best practices and guidelines | Best practices for AI usage include transparency, validation | Attitude, Facilitating Conditions | RQ3 |
| Organizational context | Organizational support significantly influences AI adoption | Facilitating Conditions | RQ4 |
| Comfort level | IDs have guarded comfort but expect AI growth | Attitude | All RQs |