Embracing AI while managing the risks
December 2, 2024
The following article was written by Nicholas Clair, Karina Demirchyan and Robert Lomeli of Lozano Smith.
In the last decade, artificial intelligence has evolved from a concept in science fiction to an increasingly important part of professional and personal life. The introduction of large language models has ushered in a transformational period in technological development that is increasingly being felt in the realm of education. From tailoring individualized learning experiences to streamlining administrative tasks, AI promises groundbreaking changes for schools. However, these advancements come with significant risks that administrators must be prepared to navigate. This article delves into how schools can embrace the innovation made possible with AI while managing its inherent risks.
Benefits of AI in education
Schools are using AI to enable widescale personalized learning. Access to personalized tutors was once limited to a privileged few. Now, AI can tailor the educational experience to cater to the individual needs of each student. AI-powered adaptive learning tools, for example, can help ensure that students master concepts while continuing to be appropriately challenged.
AI further presents the opportunity to offer students with disabilities a more personalized and accessible learning environment. AI can assist in drafting individualized education programs and Section 504 plans. AI-powered text prediction software can be a powerful writing support for students with motor impairments, communication impairments and/or learning disabilities. AI can also translate text into speech and speech into text for students with varying impairments, and it can develop personalized lesson plans based on students’ individualized needs.
AI also offers substantial support to educators and administrators by streamlining administrative tasks, differentiating learning content and collaborating in decision-making processes. For educators, AI can automate time-consuming activities, freeing up more time for student interaction. It can also differentiate learning content, such as tailoring materials to a student’s diverse learning needs or particular interests, thus improving engagement and outcomes. For administrators, AI can provide valuable data analytics that help in making informed decisions about resource allocation, student performance and curriculum development. By leveraging AI, schools can not only improve operational efficiency but also enhance the overall quality of education.
Managing risks and challenges
As the AI landscape evolves rapidly, it is paramount that school administrators ensure that staff are properly trained on the appropriate use of AI, as the emergence of unforeseen harms and risks is inevitable. It is recommended that leadership designate a team to maintain oversight of AI and to gather feedback for improving AI processes. Such a team can continually address emerging issues and developments, both in the field and within the workplace.
One of the primary concerns surrounding the use of AI in education is the protection of student data. This concern is particularly acute in the context of AI, as user inputs and responses may be used to train current or future models. Thus, users should be aware that by including data in large language model inputs, there is a risk of exposing sensitive information. Accordingly, schools should avoid including confidential information, particularly personally identifiable information, in LLM inputs.
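As a practical illustration of keeping personally identifiable information out of LLM inputs, a district could run text through a simple scrubbing step before it ever reaches a model. The sketch below is hypothetical and deliberately minimal: the regex patterns are illustrative only and would not catch names, addresses, student ID formats, or other sensitive data, so it should be read as a starting point rather than a complete safeguard.

```python
import re

# Illustrative patterns only -- real student-data protection requires far
# more than regex filtering (names, addresses, district ID formats, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious PII patterns with placeholders before the text
    is included in a large language model prompt."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

For example, scrub("Contact jane.doe@school.org or 555-123-4567") returns "Contact [EMAIL REDACTED] or [PHONE REDACTED]". A production safeguard would layer this kind of filtering with vendor contract terms and staff training rather than relying on it alone.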
Users should also be aware that AI systems are not immune to bias, which can manifest in various ways, including biased outputs that reinforce existing inequalities. Decisions based on biased outputs can lead to discriminatory outcomes, such as unfair resource allocation, biased hiring practices, or unequal provision of services. For schools employing AI systems, it is vital to be aware of the potential for bias within these models and understand the general approaches for mitigating it. One strategy is early testing of AI models to identify potential biases. Testing AI models for bias involves analyzing how the system makes decisions across different groups and scenarios. This process helps identify if the AI system is unfairly favoring or disadvantaging certain groups. Since most schools may not have the technical capacity to test and correct AI models internally, it is advisable to engage with developers, vendors, or third-party auditors who have the necessary expertise.
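One common first-pass bias check is comparing outcome rates across groups, a heuristic related to the "four-fifths rule" used in employment contexts. The sketch below assumes a hypothetical data structure of (group, approved) records from an AI-assisted screening tool; it is a simplified illustration of the kind of analysis a vendor or auditor would perform in more depth, not a substitute for expert review.

```python
def selection_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs.
    The input format is hypothetical, standing in for logged outputs of
    an AI-assisted screening tool."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 flag that some group may be unfairly
    favored or disadvantaged and warrant closer review."""
    return min(rates.values()) / max(rates.values())
```

For example, if group A is approved 100 percent of the time and group B only 50 percent, the disparity ratio is 0.5, which would prompt a deeper audit of how the system reaches its decisions.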
Schools could also require that entities they contract with conduct thorough bias testing as part of their service agreements. This can include periodic reviews and audits of AI systems to identify and address any emerging biases. Incorporating a “human in the loop” (requiring human oversight of AI-generated decisions) is a critical mitigation measure for addressing potential biases in AI systems, especially within schools. This approach ensures that AI-generated decisions are not made in isolation but are instead reviewed and, if necessary, adjusted by human oversight. Schools should also have protocols for reporting potential biases or inaccuracies in AI decisions.
While AI can significantly enhance efficiency, decision-making and service delivery, it also brings forth critical questions of accountability. Schools must assume ultimate responsibility for their decisions, even if AI systems were utilized to inform or advise on decision making. Accordingly, it is critical for humans to be informed about the limitations and risks of AI, oversee all uses of AI, independently evaluate outputs, and ultimately come to an independent decision informed by, but not reliant on, information and recommendations produced by AI.
Vendor selection
Thoroughly vetting AI vendors is crucial for schools to ensure they select tools that align with their educational goals while also safeguarding their students' interests, particularly given the risk that some vendors may not have long-term viability. AI technologies offer substantial benefits, but the instability of start-up companies poses significant concerns. Schools must ensure that the vendors they select are capable of long-term support. If a vendor ceases operations or fails to maintain its technology, it could disrupt educational processes and require schools to fund costly replacements.
By selecting vendors with a proven track record, schools can mitigate the risks of unexpected disruptions. This due diligence ensures that AI tools not only meet current needs but also remain reliable and effective over time.
Conclusion
AI holds tremendous potential for enhancing education, offering personalized learning experiences and improving administrative efficiency. However, schools must navigate a range of risks, including data privacy concerns, biases, accountability and challenges associated with vendor selection. By implementing comprehensive training programs, using bias-mitigation strategies, incorporating a human in the loop, and thoroughly evaluating vendors, schools can maximize the benefits of AI while minimizing its risks. As opportunities to use AI in education continue to evolve, thoughtful strategy in its implementation will play a crucial role in shaping the future of learning.
Nicholas Clair is an attorney in Lozano Smith’s Sacramento office, and Karina Demirchyan and Robert Lomeli are attorneys in Lozano Smith’s San Luis Obispo office.