The world is evolving rapidly, and technology has advanced unimaginably since the Industrial Revolution. From the birth of the steam engine in the 18th century, it took our species just two centuries to travel to the Moon, a remarkable leap. Today's civilization is more advanced still, encompassing Augmented Reality, Virtual Reality, blockchain and cryptocurrencies, and robotics. The latest addition to this list is Artificial Intelligence (AI). Over the last few years AI has improved drastically, and technology leaders such as Elon Musk have called for a halt to further development until AI's long-term impact on humans and society can be determined. One of the most popular AI models is ChatGPT, created by OpenAI with backing from Microsoft. ChatGPT quickly caught the world's attention, passing 1 million users in just five days, and now has over 100 million users (Young & Sauter, 2020).
The introduction of ChatGPT has changed the course of the world, bringing significant changes to many aspects of life, such as work and education. Education has seen the most striking change. Because ChatGPT has knowledge of almost everything that happened before 2022, it can quickly write a whole research paper or help with assignments when given the correct prompts. With this capability, however, it has become evident that many students feel guilty about using it: they consider it cheating but do it anyway because they struggle to complete the work on their own.
Universities are taking different approaches to combating this misconduct. Some ban the use of AI outright, while others restrict it only partially. Universities such as Flinders University, the University of Adelaide, and the University of South Australia allow students to use ChatGPT as a writing prompter for assignments, provided they disclose its use (Open Universities Australia, 2023).
Given the pace of AI development, it is inevitable that students will use it for their academic needs. The right approach is therefore to integrate AI into the education system and draw on its assistance deliberately. Exploring student perspectives is thus crucial, as it provides valuable insight into their attitudes, motivations, concerns, and preferences regarding the integration of AI tools.
The primary objective of this study was to investigate students’ perceptions and experiences regarding the use of ChatGPT and other AI-related tools for academic support and learning enhancement by discovering their viewpoints on the usage of AI in education. This research paper will contribute to the existing literature by filling gaps in knowledge related to students’ perspectives on AI in education. It will also provide insights for educators, policymakers, and developers to enhance the design, implementation, and integration of AI tools in educational settings.
This paper holds significant implications for various stakeholders in the field of education. Understanding students' perceptions and experiences with AI-related tools can offer educators, educational technology developers, and policymakers valuable insights. Educators can gain valuable feedback to optimize the integration of AI tools into their teaching methodologies, while technology developers can use the findings to refine and tailor AI systems to better meet students' needs. Additionally, this study's outcomes can inform policymakers about the ethical considerations and potential challenges associated with the widespread adoption of AI in educational institutions. Furthermore, students can benefit
from the research, as insights into the strengths and limitations of AI tools can help them make informed decisions about their learning approaches. By comprehensively exploring students’ perspectives, this study contributes to the growing body of knowledge on AI in education, and fosters informed discussions about the role of AI in shaping the future of learning and academic support.
In summary, this research seeks to bridge the gap in the current literature by examining students’ viewpoints on AI tools’ effectiveness and relevance in the educational context. By addressing the research questions and discussing the implications of the findings, we hope to offer valuable contributions to the educational community and facilitate evidence-based decisions regarding the integration of AI in educational settings.
The Technology Acceptance Model (TAM) has been widely used to study how technology is adopted in many fields, including education, and it can provide insights into students' attitudes and behaviors toward integrating AI tools into their learning. Research applying TAM has shown that students' willingness to embrace AI tools in their learning journey is strongly influenced by Perceived Usefulness (PU) and Perceived Ease of Use (PEU): students are more likely to adopt AI tools if they believe these tools will improve their performance and enhance their learning experience.
Additionally, the user-friendliness and intuitiveness of AI tools shape students' perceptions and acceptance: when students find AI tools easy to use and intuitive, they are more inclined to incorporate them into their study routines. By applying TAM to AI tools in education, this study aims to deepen our understanding of the factors that influence student engagement with these technologies and to contribute to the development of student-centered AI-based solutions.
Studies have consistently emphasized the importance of users' perceptions of a technology's usefulness and ease of use in determining its adoption (Granić & Marangunić, 2019). However, it is crucial to acknowledge that external factors beyond the immediate user experience can influence these perceptions. Venkatesh and Davis (1996) highlighted the significance of external influences on both Perceived Ease of Use (PEU) and Perceived Usefulness (PU) in shaping users' attitudes toward technology adoption. These external factors can include social norms, organizational support, and perceived compatibility with existing practices. Understanding them is essential to fully explaining the acceptance or rejection of a particular technological system, as Mathieson (1991) proposed. Consequently, researchers have expanded the scope of the Technology Acceptance Model (TAM) to incorporate various external elements that can influence eLearning acceptance or utilization, as seen in the work of Scherer et al. (2019). By considering these external factors in the context of AI tools in education, this research aims to provide a comprehensive understanding of the factors influencing students' acceptance and adoption of AI technologies in their learning experiences, thus contributing to more effective and contextually relevant AI integration strategies.
However, findings have not been entirely consistent across technologies. Researchers have examined the relationship between TAM factors and various technologies. According to Wojciechowski and Cellary's (2013) research, TAM variables shape students' attitudes toward learning in an AR environment created for chemistry lessons. Similarly, Briz-Ponce et al. (2017) found that the use of mobile technology in education directly affects Perceived Usefulness (PU) and Perceived Ease of Use (PEU). Murillo et al. (2021) explored continued use of the Moodle learning system and found that PEU influenced PU but did not directly affect Behavioural Intention (BI), whereas PU did directly influence BI.
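The pattern Murillo et al. (2021) report, in which PEU shapes BI only indirectly through PU, can be illustrated with a small simulation. The coefficients, sample size, and normal-noise assumptions below are purely illustrative, not estimates from any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulate construct scores under an assumed TAM-like path structure:
# PEU influences PU, and PU (but not PEU directly) drives Behavioural
# Intention, mirroring the pattern Murillo et al. (2021) describe.
peu = rng.normal(0, 1, n)                 # Perceived Ease of Use
pu = 0.6 * peu + rng.normal(0, 1, n)      # PU depends on PEU
bi = 0.7 * pu + rng.normal(0, 1, n)       # BI depends on PU only

# Recover the direct effects on BI with ordinary least squares.
X = np.column_stack([peu, pu, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, bi, rcond=None)
b_peu, b_pu, _ = coef
print(f"direct PEU effect on BI: {b_peu:.2f}")   # near zero
print(f"direct PU effect on BI:  {b_pu:.2f}")    # near 0.7
```

Regressing BI on both constructs shows PEU's direct coefficient collapsing toward zero once PU is controlled for, which is exactly the mediated structure described above.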
The growing presence of AI technologies such as ChatGPT has raised concerns about academic integrity and the escalating problem of plagiarism. Experts attribute the increase in plagiarism to several factors, including mounting pressure on students and the easy accessibility of technology, which make academic dishonesty more tempting and straightforward (Jereb et al., 2018; Surahman & Wang, 2022). Research has established a correlation between stress and cheating behavior, with some students resorting to AI tools like ChatGPT to cope with stress and complete assignments more efficiently (Ma et al., 2013). To tackle this problem, it is crucial to promote a learning environment that prioritizes critical thinking and problem-solving skills; raising awareness about responsible AI usage can also foster integrity and ethical practice.
Considering the increasing integration of AI in education, it is essential to strike a balance between harnessing AI's benefits and upholding principles of honesty. Educators, researchers, and policymakers must collaborate to address the causes of plagiarism. By nurturing a culture of honesty among students, emphasizing critical thinking skills, and promoting awareness of responsible AI practices, students can take full advantage of AI technologies while maintaining their commitment to academic integrity and ethical conduct. The University of Helsinki has provided guidelines for using AI language models, summarized below (Using AI to Support Learning | Instructions for Students, n.d.).
- Large language models can, as a rule, be used in teaching and as a support for writing. The teacher for the course has the final call on the topic. If there is a risk that the use of large language models impedes achieving the set learning objectives, the teacher can prohibit the use of AI (independent work included).
- If you use a language model to produce the work you are returning, you must report in writing which model (e.g., ChatGPT, DeepL) you have used and how. This also applies to the thesis. Please note that you should never name AI as the author of the text or other written output. AI cannot take responsibility for the content of the text – this responsibility always lies with humans.
- The use of language models is never allowed in maturity tests.
- Your home faculty, degree program, or the University Language Centre can make additional guidelines on using AI in their teaching.
- The responsible teacher should tell the student about the principles, disadvantages, and benefits of using language models. If AI is prohibited in the course, the teacher should explain and motivate the limits of prohibited use in writing.
- Equality is a core value when planning education: ChatGPT and other large language models are not always available, or there may be a charge for their use. You should never be required to use a language model that is not available for free.
- If you use a large language model in a course, part of a course, or examination where it is prohibited in advance, please note that this constitutes cheating and will be treated the same way as other cases. The same rules apply if you fail to report the use of a language model as instructed.
Similarly to the University of Helsinki, Open Universities Australia has provided ChatGPT dos and don’ts in a recent article (Open Universities Australia, 2023).
ChatGPT dos and don’ts
- Do ask for research guidance before writing an essay
- Do use it when brainstorming
- Do ask questions about study material you do not understand
- Do use it to proofread your work
- Do cite any AI assistance in your reference list
- Do not ask AI software to write essays for you
- Do not blindly trust AI-generated information
- Do not do anything that violates your university’s academic integrity policy.
As the guidelines above show, using AI tools such as ChatGPT and Google Bard is not in itself treated as unethical or punishable; having these guidelines helps students use the tools correctly.
ChatGPT famously passed law exams in four University of Minnesota courses and a business management exam at the University of Pennsylvania's Wharton School. It performed at the level of a C+ student in the law classes while earning a B to B- grade in the business course (Kelly, 2023). These results concern lecturers and other educators because such freely available tools can be misused for academic work. The Italian Data Protection Authority temporarily banned ChatGPT over potential data collection and data breach concerns (Milmo, 2023).
Before discussing further potential misuses or concerns, we should look at the potential benefits of using AI tools.
Improved Student Engagement
Students tend to adopt new technologies quickly and find creative ways to use them effectively in their day-to-day work. Some students use ChatGPT as a tutor that gives them customized, personalized instruction. Moreover, AI could be integrated into many future jobs, so gaining experience with it now could benefit students in the long run (Roose, 2023). The speed of finding answers and 24/7 access also help students with their studies, and because ChatGPT draws on a wide range of resources, its answers are often accurate (Mallow, 2023). In a survey of 350 students and instructors, Sotelo Muñoz et al. (2023) found that ChatGPT substantially affected student motivation and engagement, and the researchers emphasized the importance of incorporating AI tools into educational systems.
Personalized Learning Experiences
ChatGPT can be used to learn new things, personalized to your liking, in just a few seconds rather than hours or days (Alves de Castro, 2023). It can support the development of fundamental maths, reading, and writing skills in the elementary grades, while secondary students can benefit from its more advanced support in disciplines such as literature, science, and history. Beyond subject-specific guidance, ChatGPT can help students acquire critical abilities like time management, organization, and study habits. If a student needs assistance with a particular subject, ChatGPT can be asked for personalized learning content; prompts should include any necessary context, such as the student's age, grade level, and course level, so that the responses suit the student's level and ability. Note-taking is also easier with ChatGPT: it can present different note-taking approaches, including the Cornell, outline, and mind-mapping methods, while outlining the advantages and disadvantages of each, and it can offer examples and advice on applying these methods during lectures or while studying from textbooks (Driscoll, 2023).
Students’ Perceptions of the Positives of AI
AI has improved exponentially in the last few years, and students are among the earliest adopters of this technology. Yet even with its many benefits, some problems relate to human psychology and behavior: while using AI to support their studies, some students feel they are cheating the education system. BestColleges recently published a survey of 1,000 current undergraduate and graduate students with some striking findings: 43% of students had experience with AI tools such as ChatGPT, and about half of those had used the tools to complete assignments or exams, meaning roughly one in five students overall. Students' views on the ethics of AI were as follows.
1. When asked if “using AI tools to help complete assignments and exams is morally wrong,” 41% agreed, while 27% disagreed.
2. When asked if “AI tools should be prohibited in educational settings,” 38% of respondents disagreed, and only 27% agreed.
3. The survey found that 48% of students agreed that “it is possible to use AI ethically to help complete my assignments and exams,” more than twice the percentage (21%) who disagreed.
Of the students who said they had used AI tools to finish tasks or exams, 50% stated they used them for some of the work but did most of it themselves, 30% used AI for most of their assignments, and 17% used AI to complete and submit assignments without any changes (Nietzel, 2023).
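The "1 out of 5" figure reported above can be reconstructed from the survey percentages. The calculation below assumes, as the survey summary suggests, that roughly half of the 43% of students with AI experience applied the tools to graded work; the exact breakdown is illustrative.

```python
# Share of all surveyed students who reported experience with AI tools.
experienced = 0.43

# Among students who used AI on assignments or exams (Nietzel, 2023):
some_work = 0.50   # used AI for some work, did most of it themselves
most_work = 0.30   # used AI for most of their assignments
all_work = 0.17    # submitted AI output without any changes

# The three usage shares should account for (almost) everyone in that group;
# small rounding in the reported percentages leaves them just under 100%.
print(f"usage shares sum to: {some_work + most_work + all_work:.0%}")

# Assuming about half of experienced students applied AI to graded work
# yields the "roughly one in five" estimate across all students.
users_of_graded_work = experienced * 0.50
print(f"estimated share of all students: {users_of_graded_work:.1%}")
```

The point of the sketch is simply that the headline "one in five" claim is consistent with the individual percentages the survey reports, rather than an independent figure.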
According to a survey published in IEEE Access, students are impressed by ChatGPT's capabilities and appreciate its appeal as a study and work tool. They enjoy its user-friendly, human-like interface, well-organized responses, and clear justifications. However, many students feel that ChatGPT's answers are not always accurate, and most think that, since it does not replace human intelligence, users need solid background knowledge to work with it. Most students therefore believe ChatGPT requires improvement, though they expect this to happen soon. Students' opinions on ChatGPT's detrimental effects on education, academic integrity, employment, and daily life vary (Shoufan, 2023).
Limitations and Misconducts of AI Tools
ChatGPT has the potential to transform education by providing support and unique learning experiences for students, but some hurdles must be overcome. Plagiarism and academic dishonesty are concerns, as some students may misuse ChatGPT to generate content without giving credit; it is crucial to ensure responsible use of AI and to monitor misuse closely. Furthermore, ChatGPT's information is not flawless, so students should not rely solely on AI-generated content, which could lead them astray. To tackle this issue effectively, educators should teach critical thinking skills and encourage students to cross-reference information from reliable sources. Additionally, we must be mindful of biases in ChatGPT's responses and take deliberate steps to mitigate them while promoting inclusivity in educational experiences.
Responsible implementation is paramount to fully harnessing the power of ChatGPT and AI in education. This involves establishing guidelines and best practices for using AI in educational settings, with a strong emphasis on ethical usage and a culture of academic integrity. By training students to understand the limitations of AI models and nurturing their thinking abilities, we can create a learning environment that effectively integrates AI into the curriculum. Proactively addressing these concerns will enable us to leverage ChatGPT for positive impact while enriching students' learning journeys.
The Integrity of Homework Assignments and Online Tests
Online tests have proliferated in higher education. Teachers and institutions need to be aware of the risk of ChatGPT being used to cheat on online tests, because it can produce material on academic subjects that looks as if a human wrote it. In other words, ChatGPT compromises the legitimacy and fairness of online tests and assignments.
Educators and institutions can use various techniques to address these concerns. For assignments and online tests, teachers can give students detailed guidance on organizing their work and responding to questions, and students can send their assignments to teachers for review before submitting their final drafts. Sophisticated plagiarism detection programs can be used to detect AI-generated writing.
Advanced exam proctoring and supervision methods may also be effective for online exams. Further study is necessary to fully understand the effects of AI LLMs like ChatGPT and the countermeasures needed to prevent their abuse (Rahman & Watanobe, 2023).
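The similarity checks underlying the detection programs mentioned above can be sketched with a toy cosine-similarity comparison. This is only a minimal illustration of the general idea, not any specific commercial detector; the texts and word-level tokenization are illustrative.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term frequencies for a lowercased, word-tokenized text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two term-frequency vectors (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Illustrative texts: a submission compared against a reference document.
reference = "Artificial intelligence tools can support student learning"
submission = "Artificial intelligence tools can support student learning in many ways"
unrelated = "The steam engine transformed transport in the nineteenth century"

high = cosine_similarity(vectorize(submission), vectorize(reference))
low = cosine_similarity(vectorize(unrelated), vectorize(reference))
print(f"near-duplicate similarity: {high:.2f}")  # close to 1.0
print(f"unrelated similarity:      {low:.2f}")   # close to 0.0
```

A detector would flag submissions whose similarity to a known source exceeds some threshold; real systems add stemming, n-grams, and, for AI-generated text, statistical style signals well beyond this sketch.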
Blind Reliance on Generative AI Tools
Research and education may suffer from heavy dependence on ChatGPT and other generative AI technologies, because the ease of obtaining solutions, problem-solving techniques, and generated scientific text can limit the development of critical thinking and problem-solving abilities. ChatGPT has recently even been credited as an author on preprints and published papers, which raises further issues for research and essay writing. The CEO of OpenAI has cautioned against relying on it unthinkingly, noting that ChatGPT is quite constrained but good enough at some things to give a misleading impression of being extremely powerful: depending on it for anything important at this stage of development is a mistake, and much work remains on robustness and truthfulness (Rahman & Watanobe, 2023).
It is important to note that our studies used ChatGPT while it was in active development, during which we observed considerable improvements in code creation, error checking and debugging, and solution code optimization. The following factors may affect the outcomes: (i) the release of a new version of ChatGPT; (ii) the use of questions other than those posed in this study; (iii) the problem description; and (iv) the solution code (Rahman & Watanobe, 2023).
In addition, the testing found that ChatGPT-generated code compiled successfully with the basic compiler 95.83% of the time and was accepted by the Aizu Online Judge (AOJ) 75% of the time, for an average correctness of 85.42%. Although ChatGPT produces roughly 85.42% correct code, it is still limited in generating code from a description, and the resulting code may occasionally need alterations to achieve error-free compilation and acceptance (Rahman & Watanobe, 2023).
The literature review has shed light on applying the Technology Acceptance Model (TAM) in predicting the adoption of new technology, such as ChatGPT, and its implications for education. TAM suggests that perceived usefulness (PU) and perceived ease of use (PEU) play crucial roles in influencing attitudes toward adopting information technology (Davis, 1989). Researchers have extended TAM to include external factors to understand eLearning acceptance (Scherer et al., 2019). The literature also highlights the issue of plagiarism and academic misconduct in the context of AI technology, with some studies indicating that academic stress may be associated with cheating behavior and AI misuse (Ma et al., 2013; Jereb et al., 2018; Surahman & Wang, 2022).
To address the potential misuse of ChatGPT for cheating on online tests and assignments, educators and institutions can implement a multifaceted approach aimed at upholding academic integrity and promoting the responsible use of AI technology. One essential strategy is to provide students with detailed guidance and clear instructions on how to approach their assignments and exams. By emphasizing the importance of originality and critical thinking, educators can help students understand the value of independent work and discourage them from relying solely on AI language models like ChatGPT (Rahman & Watanobe, 2023).
Plagiarism detection programs play a pivotal role in identifying writings produced by AI and distinguishing them from authentic student work. Educators can efficiently identify instances of AI-generated content by integrating sophisticated plagiarism detection tools into the educational system and taking appropriate measures to address academic misconduct (Rahman & Watanobe, 2023). Additionally, institutions can establish policies that clearly define the consequences of using AI technology unethically, emphasizing the importance of academic honesty and integrity.
Additionally, exam monitoring methods can provide an added layer of security in assessments, ensuring the authenticity of students' answers. Techniques like real-time invigilation, screen recording, and facial recognition can be used to authenticate students' identities and supervise their activities during exams, minimizing the potential for cheating facilitated by AI systems (Rahman & Watanobe, 2023).
Nevertheless, it is vital to acknowledge that combating the abuse of AI is an ongoing endeavor. Given the rapid progress of AI technology, continuous research and development of countermeasures are essential. Researchers should delve deeper into how AI language models such as ChatGPT affect academic performance and student conduct. By understanding how AI tools influence learning outcomes, educators can devise strategies to prevent misuse while harnessing the tools' potential as educational aids (Rahman & Watanobe, 2023).
Furthermore, collaborative efforts among educators, researchers, and AI developers are crucial in addressing the challenges posed by AI. Open dialogue and interdisciplinary collaboration can pave the way for guidelines on using AI in educational environments that clarify the boundaries and limitations of AI assistance while ensuring students understand how to integrate these tools into their learning process (Rahman & Watanobe, 2023). Blind reliance on generative AI tools, including ChatGPT, poses challenges for education and research. While integrating AI language models in educational settings has the potential to revolutionize learning experiences, there are concerns about the impact on critical thinking and problem-solving skills. As ChatGPT and similar AI technologies simplify finding solutions and generating scientific text, students might not fully engage in the cognitive processes involved in understanding and synthesizing information (Rahman & Watanobe, 2023). Consequently, students risk losing opportunities to develop their analytical and reasoning abilities, creating a potential trade-off between convenience and the development of higher-order thinking skills.
Furthermore, the crediting of ChatGPT as an author on preprints and published papers raises questions about the originality and authenticity of academic work (Rahman & Watanobe, 2023). When students use AI-generated content without proper attribution or acknowledgment, the boundary between their own work and AI-generated text can blur, potentially creating problems of academic honesty and intellectual ownership. Educators must therefore promote responsible and ethical usage of AI language models, encouraging students to understand the limitations of these tools and to evaluate the content they generate critically.
The OpenAI CEO's caution against blind reliance on ChatGPT underscores the need for a balanced approach to incorporating AI in educational settings (Rahman & Watanobe, 2023). While AI language models can be valuable study aids and support tools, they should not replace the guidance and mentorship provided by teachers. Educators play a central role in helping students develop a comprehensive understanding of their subjects and in nurturing critical thinking skills. Integrating AI into the learning process as a supplementary resource rather than a substitute for human instruction can ensure that students receive a well-rounded education that fosters creativity, problem-solving, and original thought.
To address the potential pitfalls of blind reliance on AI, educational institutions can take a proactive approach. They can develop policies and guidelines that encourage responsible AI usage and educate students about the ethical implications of using AI tools (Shoufan, 2023). Incorporating AI literacy into the curriculum can empower students to use AI language models effectively while maintaining academic integrity. Educators can foster a balanced and constructive relationship with AI technology in the academic context by allowing students to critically assess AI-generated content and collaborate with their peers and instructors.
Despite these challenges, AI tools such as ChatGPT also offer potential educational benefits. They can improve student engagement and learning experiences by providing personalized tutoring and support (Roose, 2023; Sotelo Muñoz et al., 2023). Students can access quick and accurate information that aids them in various subjects and improves their study habits (Mallow, 2023; Driscoll, 2023). Moreover, AI language models can be integrated into educational systems to promote student motivation and engagement (Sotelo Muñoz et al., 2023).
However, it is essential to acknowledge the limitations of AI tools, including ChatGPT. The model is still under development, and factors such as different model versions or problem descriptions may affect its outcomes (Rahman & Watanobe, 2023). Moreover, the potential misuse of AI for academic misconduct remains a concern, necessitating guidelines for ethical usage (Nietzel, 2023).
The University of Helsinki and Open Universities Australia have provided guidelines for the responsible use of AI language models, emphasizing that students must take responsibility for their work and not rely on AI as a substitute for their own efforts (Using AI to Support Learning | Instructions for Students, n.d.; Open Universities Australia, 2023).
The literature review indicates that AI language models, like ChatGPT, have the potential to impact education positively and significantly. However, there are challenges to address, such as maintaining academic integrity and avoiding overreliance on AI. By understanding the benefits and limitations of AI tools and implementing responsible usage guidelines, educators and students can harness the potential of AI in education while upholding academic standards. Continued research and development are essential to optimize the integration of AI in the learning environment and ensure its positive impact on students’ learning experiences.
In conclusion, the literature review has provided insights into how the Technology Acceptance Model (TAM) can be applied to predict technology adoption, particularly of AI language models like ChatGPT, and its impact on education. TAM's focus on perceived usefulness (PU) and perceived ease of use (PEU) has proven essential in understanding users' attitudes toward adopting technology. Moreover, researchers have expanded TAM to consider external factors that influence the acceptance of eLearning, recognizing the contextual nature of technology adoption.
The discussion has shed light on concerns regarding the misuse of ChatGPT for academic misconduct, such as plagiarism and cheating on online tests and assignments. To tackle these challenges, educators and institutions can implement several strategies. First, offering guidance and instructions to students about assignments and exams can underscore the importance of originality and independent work. Second, incorporating plagiarism detection programs can help identify AI-generated content and distinguish it from students' own work. Finally, policies that spell out the consequences of unethical AI usage can further promote academic integrity.
To ensure the ethical use of AI tools, it is crucial to foster collaboration among educators, researchers, and AI developers. Dialogue and interdisciplinary collaboration can lead to guidelines for integrating AI in educational settings while helping students understand the boundaries and limitations of AI assistance.
Teachers should adopt a balanced approach when utilizing AI tools in education, using them as resources rather than replacements for human instruction, and should foster students' critical thinking skills by encouraging them to analyze AI-generated content.
While AI language models offer advantages such as tutoring and improved study habits, it is crucial to acknowledge their limitations and potential drawbacks. Responsible use of AI in education requires students to understand the implications and boundaries of this technology, empowering them to engage with it thoughtfully.
Integrating AI language models into education demands careful consideration and a responsible attitude to maximize their impact while upholding academic integrity. Ongoing research and development, coupled with collaboration among stakeholders, will play a key role in unlocking the full potential of AI tools in enriching students' learning experiences. By embracing AI technology thoughtfully, teachers and students can navigate its challenges and embrace the opportunities it presents, creating a dynamic and rewarding educational environment.
Alves de Castro, C. (2023). A Discussion about the Impact of ChatGPT in Education: Benefits and Concerns. Journal of Business Theory and Practice.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of Information Technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Driscoll, T. (2023). ChatGPT Teacher Tips Part 3: Personalized learning. EdTechTeacher.
Granić, A., & Marangunić, N. (2019). Technology acceptance model in an educational context: A systematic literature review. British Journal of Educational Technology, 50(5), 2572–2593. https://doi.org/10.1111/bjet.12864
Interactive Schools. (2018, February 8). 50 million users: How long does it take tech to reach this milestone? https://blog.interactiveschools.com/blog/50-million-users-how-long-does-it-take-tech-to-reach-this-milestone
Iqbal, N., Ahmed, H., & Azhar, K. A. (2022). Exploring teachers’ attitudes towards using ChatGPT. Global Journal for Management and Administrative Sciences.
Kelly, S. M. (2023). ChatGPT passes exams from law and business schools. CNN Business.
Mallow, J. (2023). ChatGPT For Students: How AI Chatbots Are Revolutionizing Education. eLearning Industry. https://elearningindustry.com/chatgpt-for-students-how-ai-chatbots-are-revolutionizing-education
Mathieson, K. (1991). Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behavior. Information Systems Research, 2(3), 173–191.
Milmo, D. (2023). Italy’s privacy watchdog bans ChatGPT over data breach concerns. The Guardian. https://www.theguardian.com/technology/2023/mar/18/italys-privacy-watchdog-bans-chatgpt-data-breach-concerns
Nietzel, M. T. (2023, March 20). More Than Half Of College Students Believe Using ChatGPT To Complete Assignments Is Cheating. Forbes. https://www.forbes.com/sites/michaeltnietzel/2023/03/20/more-than-half-of-college-students-believe-using-chatgpt-to-complete-assignments-is-cheating/
Open Universities Australia. (2023, February 16). How you should—and should not—use ChatGPT as a student. https://www.open.edu.au/advice/insights/ethical-way-to-use-chatgpt-as-a-student
Rahman, M. M., & Watanobe, Y. (2023). ChatGPT for Education and Research: Opportunities, Threats, and Strategies. Applied Sciences, 13(2), 182–197. https://doi.org/10.3390/app13020182
Roose, K. (2023). Don’t Ban ChatGPT in Schools. The New York Times.
Scherer, R., Siddiq, F., & Tondeur, J. (2019). The technology acceptance model (TAM): A meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Computers & Education, 128, 13–35. https://doi.org/10.1016/j.compedu.2018.09.009
Shoufan, A. (2023). Exploring Students’ Perceptions of ChatGPT: Thematic Analysis and Follow-Up Survey. IEEE Access, 11, 38805–38818. https://doi.org/10.1109/ACCESS.2023.3268224
Sotelo Muñoz, S., Gayoso, G., Huambo, A., Domingo, R., Tapia, C., Incaluque, J., … Pongo, O. (2023). Examining the Impacts of ChatGPT on Student Motivation and Engagement. Przestrzeń Społeczna (Social Space), 23, 117–137.
Venkatesh, V., & Davis, F. D. (1996). A Model of the Antecedents of Perceived Ease of Use: Development and Test. Decision Sciences, 27(3), 451–481. https://doi.org/10.1111/j.1540-5915.1996.tb00860.x
Young, A., & Sauter, M. B. (2020, January 9). 21 most important inventions of the 21st century. USA Today. https://eu.usatoday.com/story/money/2020/01/09/21-most-important-inventions-of-the-21st-century/40934825/
Using AI to support learning | Instructions for students. (n.d.). University of Helsinki.