A Guy With AI

Interview with Francis Syms, Humber Polytechnic

Episode Summary

Join me, Sean MacNutt, on "A Guy With AI" as we welcome Francis Syms, the dynamic Associate Dean of Information and Communication Technology at Humber Polytechnic, to our very first interview program. With his expertise, Francis takes us on a journey through Humber's innovative education programs, where students' passions in cybersecurity, AI, and software programming are met with cutting-edge curricula. As we explore the evolution of AI education, Francis shares how Humber's Digital Technology Hub is empowering small and medium businesses to harness the power of AI, meeting the soaring demand for tech-savvy professionals.

Our conversation transcends traditional education, highlighting futuristic projects in the FAST program at Humber Polytechnic like autonomous drones and sidewalk plows. These groundbreaking initiatives illustrate the profound impact of AI on society, from revolutionizing drone operations to advancing communication tools for nonverbal autistic children. Francis and I reflect on the introduction of the AI Integration and Governance Graduate Certificate, underscoring the importance of governance and ethical considerations in AI implementation. As we contemplate the potential of these technologies, the focus remains on preparing students for the challenges and opportunities that lie ahead.

The dialogue takes a thoughtful turn as we navigate the intricate landscape of AI ethics. Together, we examine the delicate balance between innovation and responsibility, advocating for transparency and accountability in AI decision-making. By comparing the AI integration in education to the historical adoption of calculators, we stress the need for educators to adapt and foster critical thinking. Francis and I invite listeners to engage in this vital conversation, encouraging the sharing of ideas and reflections on how AI can enhance our world while aligning with human values.

Episode Notes

(00:03) Exploring AI in Education and Technology

(13:33) Future of AI Integration and Governance

(26:48) Ethical Considerations in AI Education

(36:12) AI Integration and Education Ethics

(47:06) Continuing the AI Ethics Conversation

 

(00:03) Exploring AI in Education and Technology

This chapter introduces the first interview program of "A Guy with AI," featuring Francis Syms, the Associate Dean of Information and Communication Technology at Humber Polytechnic. Francis shares insights into his role at Humber, emphasizing the importance of developing programs that align with students' aspirations in fields like cybersecurity, AI, and software programming. We explore the evolving landscape of artificial intelligence education, noting how the advent of tools like ChatGPT has shifted student interest from advanced data analytics and deep coding to applying AI technologies in practical settings. Francis also discusses the Digital Technology Hub at Humber Polytechnic, which supports small and medium businesses in integrating AI into their operations, highlighting a growing demand for tech skills that are applicable in real-world scenarios.

 

(13:33) Future of AI Integration and Governance

This chapter explores the burgeoning field of autonomous vehicles and robotics, particularly focusing on the exciting developments at Humber Polytechnic. We highlight the creation of an unmanned aerial drone facility and the recent approval of non-visual line of sight drone technology in Canada, which opens up possibilities for autonomous drone operations, such as medication delivery in rural areas during adverse weather conditions. Additionally, we discuss autonomous sidewalk plows and a project aimed at helping nonverbal autistic children communicate through image-based tools. These examples underscore the critical role educational institutions play in nurturing the next generation of skilled professionals. I reflect on the increasing demand for machine learning programs and the introduction of new courses like the AI Integration and Governance Graduate Certificate. This chapter emphasizes the importance of integration and governance in implementing AI technologies, focusing on change management and data privacy, and preparing practitioners for the challenges of integrating AI tools in various sectors.

 

(26:48) Ethical Considerations in AI Education

This chapter examines the critical considerations of using AI tools, particularly in decision-making processes, emphasizing the importance of understanding the ethical implications. We discuss the necessity for students and professionals to question how these tools make decisions and whether these align with societal values and company cultures. We underscore the balance between technological progress and ethical accountability. The conversation explores how AI's potential to reduce human bias could lead to improved outcomes in sectors like finance and healthcare, while also raising concerns about data representation and bias. The importance of teaching skills like prompt engineering is emphasized to ensure thoughtful engagement with AI outputs. We reflect on the principle of "measure twice, cut once," advocating for careful consideration of AI's impact on critical aspects of life and work.

 

(36:12) AI Integration and Education Ethics

This chapter explores the transformative role of AI in education and industry, drawing on insights from Francis Syms. We examine the shift from building AI models to integrating AI across various fields, highlighting the emergence of AI integration specialists as crucial players. The discussion compares the integration of AI in education to the historical introduction of calculators, emphasizing how educators must adapt to new technologies by fostering critical thinking and creative problem-solving skills. The potential for AI to enhance accessibility and personalize learning experiences is also considered, with a focus on the ethical implications of AI governance. We highlight the importance of ensuring AI tools align with human values, stressing fairness, transparency, and accountability in AI decision-making. This chapter encourages thoughtful integration of AI to enhance education and facilitate deeper engagement with complex issues.

 

(47:06) Continuing the AI Ethics Conversation

This chapter addresses the ongoing and vital discussion on AI ethics, emphasizing the importance of thoughtful engagement to ensure that artificial intelligence serves humanity positively. I engage with Francis Syms to explore the complexities and responsibilities involved in the development and implementation of AI technologies. Together, we consider the ethical implications and the necessity for ongoing dialogue in this rapidly evolving field. I invite listeners to continue this conversation, highlighting the importance of community engagement and sharing ideas through various platforms, encouraging further exploration and understanding of AI's role in our world.

Episode Transcription

00:03 - A Guy (Host)

My name is Sean MacNutt and I'm A Guy With AI. Today, I'm not only with AI, but also with Francis Syms, P.Eng., Associate Dean of Information and Communication Technology at Humber College. Stick around for the very first interview program of A Guy With AI. A Guy With AI is recorded on Indigenous land.

 

00:52 - AI (Co-host)

Welcome to A Guy With AI. I'm your AI co-host, coming to you as OpenAI's GPT-4 robot, here to engage in thoughtful, dynamic conversations with our host, Sean MacNutt. Sean has integrated AI into this podcast, using custom code to connect me with speech-to-text and text-to-speech, making this a true one-on-one dialogue between human and AI. This podcast is all about exploring ideas through meaningful discussion. While AI is a key element, the heart of the show is the exchange of perspectives, whether we're diving into technology, philosophy or the unexpected. Today's episode is a special one as we kick off our very first interview program. Sean, you've mentioned that we're joined by Francis Syms, Associate Dean of Information and Communication Technology at Humber College. That sounds like a fascinating conversation in the making. Why don't you tell us more?

 

01:46 - A Guy (Host)

Thank you very much, listeners, for joining me on A Guy With AI and also, as mentioned already, we have Francis Syms from Humber College in Toronto with us for the program.

 

02:02

That interview was prerecorded, and so what's going to happen?

 

02:06

Just to explain to listeners: I'm going to load that interview at the same time that you will hear it in audio format or see it in the transcript, and the AI co-host is going to have a chance to respond not just to the inputs so far, including this one, but also to the interview content itself. So I'm going to ask the AI co-host to join myself and listeners in appreciating the interview, which is going to be played in just a moment here, from the AI perspective.

 

02:43

I would like the AI co-host to gather perhaps one or two points from the program as introduced by Francis, and we will kick off just a brief discussion that will follow the interview. So, listeners, you will now hear the interview and then, immediately after, we will hear from the AI co-host, who will highlight any points that it finds intriguing about the interview. Hello, listeners of A Guy With AI. As introduced, I'm here with Francis Syms, now Associate Dean of ICT at Humber College in Toronto, and I'd like you to just introduce yourself, if you could, Francis, and let us know a little bit more about what you do in your areas of expertise.

 

03:36 - Francis Syms P Eng., Associate Dean Humber College (Guest)

Yeah, hi, Sean, thank you for having me on the show. So my name is Francis Syms and I sit in the role of, well, a few hats here at Humber, but primarily the Associate Dean for the Information and Communications Technology cluster here. So my job at Humber is as an administrator to provide programs that are relevant to students, right, and students that are primarily in technology that want to go on into various careers. Some of those careers could be in cybersecurity, in AI, software programming, IT, et cetera, in sort of that space.

 

04:17

My background is I'm an engineer by trade. I studied electrical engineering many moons ago and then worked in tech at lots of different companies. I worked at BlackBerry in the heyday, lived in China working for BlackBerry, worked at telecom companies, startups, big and small, and then came into this role when I realized that there was an opportunity to connect with learners in areas that I cared about, and over time I developed expertise in cybersecurity and then, most recently, as many of us have, in artificial intelligence and what that means, and so I like to think of myself as a bit of a technologist in that space, but also a bit of a storyteller, and helping to tell that story of AI and ensuring that what we do here at Humber matches the needs of students that want to learn about artificial intelligence and related technologies.

 

05:18 - A Guy (Host)

Very good. So would you say that a lot of the students who come to the current Humber programming have in mind where they are headed professionally and are looking for that specific support? Or do you think that people, by being involved in the program, are finding their way also and discovering those vistas, as it were, as to where it is that they can end up and go along usefully?

 

05:45 - Francis Syms P Eng., Associate Dean Humber College (Guest)

I would say probably, if we talk specifically about our artificial intelligence and machine learning program. We launched that three to four years ago, maybe a year before the realization that ChatGPT was going to take over how we think about AI, right, and similar products. And at that time, what you really had were students that were coming that wanted to focus on what I would call advanced data analytics. For many years, companies such as Salesforce, which I managed a team focused on when I was at Rogers, really were at the bleeding edge of advanced analytics, where they were taking large amounts of data about clients and then trying to predict what that meant, what you should sell those customers, right. And so a lot of students that were coming were learning effectively how to do deep learning, machine learning, which was really conceptually the next step, right, in that sort of data analytics field.

 

06:45

Once ChatGPT came along, what we started to see is a whole group of people that were interested in technology but were also interested in artificial intelligence, and not just developing the software but applying that software in the workplace. And you know, we also run this lab here called our Digital Technology Hub, where we help small and medium businesses, and we've started seeing a number of businesses come in and say, well, I care about cybersecurity, which used to be our bread and butter, but now I want to figure out, what should I do with AI in the workplace? How do I integrate that in? So we started to see a number of people come that wanted the tech skills, but not the deep coding skills that they might have wanted three years ago.

 

07:30 - A Guy (Host)

I see. So this is something that's changed with machine learning, with AI and the advent of these technologies, I suppose is what you're suggesting. It's a smart machine that can speak various languages and, you know, be generative, predictive and all of this, and really add that layer. So would you agree that it's an exciting time as an educator, given that?

 

08:24 - Francis Syms P Eng., Associate Dean Humber College (Guest)

A hundred percent, and I would sort of respond a little bit with this. When ChatGPT and those types of programs first came online in the classroom, you know, you can imagine assessments were based on, maybe, multiple choice. Or, for example, if you're doing software development, create the code to do something: maybe create the code to do a website, or create the code to do a database. And before large language models, you know, you'd have to do a little bit of the legwork yourself to build that code. You could maybe go to online repositories, download some stuff and create it. Once ChatGPT and the likes came on, you were able to basically put your full assessment in and have the code be generated. Okay, so then what that did is it presented a huge challenge for educators, and it applies in tech and it applied in different ways, say, in liberal arts, where they were writing essays. But the same challenges existed, where you needed to focus more on the critical thinking skills, right, because what you needed to do is ensure that the learners knew how to interpret what they were getting out. We could no longer stop them from going and using those tools. We had to flip the script and talk about how they could use those tools to better their practice. Because with large language models, you know, it was effectively like having a calculator moment. And in the early days with calculators, you know, I remember as a young engineer I wasn't allowed to bring my graphing calculator into the exam. Well, that's completely changed now.

 

09:58

The assumption is, when you go into that exam, you have that tool with you, and I think AI has impacted things in the same way. But what that's done is it's actually challenged educators to probably come up with better assessments. You know, there's no point in doing multiple choice anymore if you can search for any question, any answer in these tools. You need to ask a question that's more creative and thoughtful.

 

10:23

It may take more time to grade that at the end, but you're going to actually graduate better students, students that are better able to deal with this AI world in the workplace. And so I think we're seeing a lot of that, and so it's really exciting, but it's hard at the same time, just like it is hard for these businesses. You know, last year Microsoft and LinkedIn said that knowledge workers in the workplace would see up to 30 minutes of savings in terms of their time every single day if they actively used these tools, and so, you know, that's scary if you're in the workplace: how do I take advantage of that without losing a job? Similar challenges occur in education. So it's exciting, but you have to be very thoughtful and you have to come up with a plan so that you're actually addressing it in a holistic way, as opposed to just after the fact. I don't know if that makes sense, Sean.

 

11:23 - A Guy (Host)

Yes, it does, and I like that you would use the word thoughtful about it. This to me is, you know, it may depend on who you speak to, but some of us worry about whether AI is going to lead us to use our brains less, and it's going to do all our stuff for us and, as a result of this, we're going to be weaker at the wheel when we have anything that we have to engage. My own personal experience, just from my own different projects that I've done with AI, is that it's probably filling in in places where it might do me some good to learn some things. But at the same time, for example, I'll probably never learn to code, even though I've got whole projects with code in them now, and I'm expecting AI to help me with this. It's my own sort of self-styled thing. I find that I have to be creative and leverage the tools, and have sort of a wishlist in front of me so I know what tools to be looking out for, in order to do those things that I am learning to do, which is getting what I am asking for by prompting for it properly and ending up with some sort of product that is useful and which required that mental work.

 

12:49

So do you have an example that you've encountered in your education experience where that inventive step may have been apparent in the new solution? AI is here and you're bringing it forward, and there's a new way of thinking that AI makes possible. So, because you're dealing with an AI system, is there anything about new possibilities presented by the AI tool that is pushing the envelope forward in terms of that educational sphere? You are now teaching, you're learning and teaching to students, that there is this new possibility with the technology. What would you say to any of that?

 

13:33 - Francis Syms P Eng., Associate Dean Humber College (Guest)

I think one good example, or one area, is with autonomous vehicles or autonomous robotics. And here at Humber we have a couple of things that we're doing. We're building an unmanned aerial drone facility in partnership with many other institutions across the country, and just recently in Canada the federal government has approved non-visual line of sight drone technology. So previously what that would mean is you could operate a drone without a specialized license if you could see that drone, right. But now what we're doing, and students will be doing, is developing models so that drone can operate way beyond what they can see, which means that they need to do it autonomously. They need to develop programs that can leverage the data they're getting in from these sensors, it could be cameras, wind, sound, whatever it is, right, and interpret what that means in terms of where they need to go. Maybe it's GPS. And a good example of that is, you imagine you live in a rural area and you're taking diabetic medication or high blood pressure medication and it's not practical for you to get to the pharmacy. It could be the wintertime and there could be a giant snowstorm, and you need your medication but you can't get it. Well, drone technology could deliver that medication to you. You can imagine the pharmacy in the local town puts it on the drone and takes it there. Well, somebody needs to figure out how to program the drones, right. And those are the jobs, those technician roles, software developer roles, application roles, and it may not even be coding; those technicians could be just turning on some sort of system and entering some commands, but somebody needs to be figuring that piece out. So that's a good example.
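
To make the beyond-visual-line-of-sight idea concrete, here is a minimal Python sketch of one small decision a delivery drone's software has to keep making: turning a GPS waypoint and a wind reading into a heading. Every name, number, and the crude wind-compensation rule here are invented for illustration; this is not Humber's flight software or curriculum.

```python
import math

def heading_to_waypoint(pos, waypoint):
    """Bearing in degrees from the current position to the target waypoint."""
    dx = waypoint[0] - pos[0]
    dy = waypoint[1] - pos[1]
    return math.degrees(math.atan2(dy, dx)) % 360

def compensated_heading(pos, waypoint, wind_from_deg, wind_speed, airspeed):
    """Crudely bias the heading upwind so the drone still tracks the waypoint."""
    base = heading_to_waypoint(pos, waypoint)
    # Wind pushes the drone downwind; nudge the nose upwind in proportion
    # to wind strength relative to airspeed (capped at 30 degrees).
    drift = min(30.0, math.degrees(math.asin(min(1.0, wind_speed / airspeed))))
    # Signed difference tells us which side the wind is coming from.
    delta = (wind_from_deg - base + 540) % 360 - 180
    return (base + math.copysign(drift, delta)) % 360

if __name__ == "__main__":
    pharmacy = (0.0, 0.0)        # take-off point (local x/y, in km; assumed)
    patient_home = (12.0, 5.0)   # rural delivery target (assumed)
    # Assumed sensor readings: wind from 270 degrees at 8 m/s, airspeed 20 m/s.
    hdg = compensated_heading(pharmacy, patient_home, 270.0, 8.0, 20.0)
    print(f"fly heading {hdg:.1f} degrees toward the delivery waypoint")
```

A real autonomy stack fuses many sensors and replans continuously; the point of the sketch is simply that "programming the drone" means encoding decisions like this one.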

 

15:29

We have other examples where we have municipalities that have sidewalk plows that are autonomous, right, and they can go around the city.

 

15:38

They can deploy them to go around the city and do that kind of work, and it means that they can do it more quickly when there's a cleanup after a snowstorm. So those are two examples where we have students actively involved in that type of programming. Another one: we have a project with a local hospital where autistic children, who are relatively nonverbal, sometimes have a hard time figuring out what the next word needs to be, right. And so the tool that the students and the professors developed gives the child a range of images, chosen based on how they understand the child, and those images can represent words in that space. So it allows the nonverbal child to effectively communicate in ways that they've not been able to communicate before. So these are the kinds of things that are happening now. We're seeing it at Humber, many other institutions are seeing it, and students need to learn those skills if they're going to go into the workplace and actually build those applications and manage them.
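
Here is a tiny Python sketch of the general idea behind such an image-based communication aid: each picture maps to a word, and the next pictures offered are ranked by which words have tended to follow so far. The vocabulary, the observed pairs, and the ranking rule are all invented for illustration; this is not the actual tool built at Humber.

```python
from collections import Counter

# Assumed mapping from picture IDs to the words they represent.
IMAGE_WORDS = {"img_cup": "drink", "img_ball": "play",
               "img_door": "outside", "img_plus": "more", "img_me": "I"}

# Toy history of word pairs observed for this particular child (assumed data).
OBSERVED_PAIRS = [("I", "play"), ("I", "drink"), ("play", "outside"),
                  ("I", "play"), ("drink", "more")]

def suggest_next_images(last_word, k=3):
    """Rank candidate images by how often their word has followed last_word."""
    follow_counts = Counter(w2 for w1, w2 in OBSERVED_PAIRS if w1 == last_word)
    ranked = sorted(IMAGE_WORDS, key=lambda img: -follow_counts[IMAGE_WORDS[img]])
    return ranked[:k]

if __name__ == "__main__":
    tapped = ["img_me"]                          # child taps the "I" picture
    sentence = [IMAGE_WORDS[t] for t in tapped]
    print("next image choices:", suggest_next_images(sentence[-1]))
    print("sentence so far:", " ".join(sentence))
```

A production tool would personalize far more deeply, but the core loop, pictures in, likely next words out, is this simple to state.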

 

16:50 - A Guy (Host)

This is all very interesting. I think I do hear in what you're saying something about how the technology is presenting possibilities. An additional one for drones, beyond some of what you mentioned, would be search and rescue. If that could be extended, then the safety of the public is affected positively, and this is a very useful application of technology: that extension that AI systems provide, and the educated use of those AI systems, discovering how that works well and developing that framework for the technical person who is going to pilot the drone or be the assistive human in the loop for that activity.

 

17:40

You know, it does seem like educational institutions can play a strong role in that, because they're ideas factories. This is a place where you have more senior people like professors, such as yourself, or associate deans, or whoever they may be, and then you also have students coming in with their fresh eyes and ears, bringing something fresh to that and being hungry for the learning. I think that's really positive for development, and development of a good sort. So you've been doing this for how long now, with the current programming that has involved machine learning? It's been running on five years, I think.

 

18:29 - Francis Syms P Eng., Associate Dean Humber College (Guest)

Five years. The numbers were what I would call soft, not a lot, but now we have people lined up out the door looking for spots in these programs, and many of the students that are coming are not technical anymore.

 

18:44 - A Guy (Host)

I see, that's interesting. And so you have new programming that's coming in in May, the AI Integration and Governance Graduate Certificate. So, I find the names of things interesting, and we have two words in the title that I think are interesting, and I really want to get your take on how they relate to AI and the education that you're providing at Humber: integration and governance. What is it about integration that is so important that it deserves a place in the title, and why is governance so key?

 

19:29 - Francis Syms P Eng., Associate Dean Humber College (Guest)

So if we start with integration. When I started as a young engineer, one of my early jobs was as an integration engineer, where I took, you know, components, like router software, and put them together and made sure that they were working properly before we put them on a customer site, right. And I think that's the piece that really a lot of people need to focus on. Because if you're in the workplace, your boss may say, well, how do I get that 30 minutes a day of savings for an employee? And if you're in IT, or maybe you're managing operations, well, you might think, maybe I should go buy an AI tool? Should I buy Bing AI? Should I buy ChatGPT? What should I buy? And so I think it's not just about purchasing these tools.

 

20:17

The big challenge often is how you integrate it in and how you do the change management in a workplace, and I think we need to develop practitioners that are good in that space, because there are considerations such as data privacy: where is this data going, right? There's a concept now, what they used to call BYOD, bring your own device, bring your own cell phone to work and use that in the workplace. Now people are starting to say BYOAI, and what that really means is people are bringing tools like ChatGPT into the workplace and using them to make their life better. Well, that's great, except every time you upload a file that has company confidential information, you've lost control of where that goes. So that's a risk, right.
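
One concrete way to picture a "BYOAI" guardrail is a pre-upload scrubber that strips obviously sensitive strings before an employee pastes text into an external AI tool. The following Python sketch is a hypothetical illustration: the patterns and the confidential-term list are assumptions, and a real deployment would need a proper data-loss-prevention policy rather than a few regexes.

```python
import re

CONFIDENTIAL_TERMS = ["Project Falcon", "acquisition target"]  # assumed list

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text):
    """Replace sensitive matches with placeholders and report what was hit."""
    hits = []
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"[{label.upper()} REDACTED]", text)
        if n:
            hits.append((label, n))
    for term in CONFIDENTIAL_TERMS:
        if term.lower() in text.lower():
            text = re.sub(re.escape(term), "[CONFIDENTIAL REDACTED]", text,
                          flags=re.IGNORECASE)
            hits.append((term, 1))
    return text, hits

if __name__ == "__main__":
    draft = ("Summarize: Project Falcon budget review. "
             "Contact jane.doe@example.com or 416-555-0199.")
    clean, hits = redact(draft)
    print(clean)
    print("redactions:", hits)  # review before sending anything externally
```

The design point is the one Francis raises: once the file leaves your systems, you have lost control of it, so the check has to happen before the upload, not after.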

 

21:02

And so I think developing the skills in people that tell them how to intelligently integrate these products into the workplace, in a low code or no code method, is really important, because it's important in finance, it's important in creative arts, it's important in healthcare, it's important in every vertical, because everybody wants to see the benefits. And so integrating it in is one step, but then you need to talk about the governance piece, because you really need to focus on: how do you know that what you're getting out aligns with your values and the organization's values, right? Every tool, especially an AI tool, is effectively crunching a lot of data that it got from somewhere. You may not know where that data came from, and you may not know how it's crunching it, and so the reality is you're getting an output and you don't actually know if that output aligns with who you are, what your thoughts are, and if it's, you know, fair, unbiased, equitable, et cetera.

 

22:03 - A Guy (Host)

And particularly with the black box being part of this, with AI there may be additional considerations.

 

22:17 - Francis Syms P Eng., Associate Dean Humber College (Guest)

Yeah. We've seen that platform announced, I guess approximately two to four weeks ago, out of China: DeepSeek.

 

22:19

It's been around for a while, but it's gotten the buzz. One of the things that DeepSeek has done is this unsupervised learning model where, instead of coaching the AI tool along the way, it tends to say yes, you're right or wrong at the end. Well, if you're not proficient in how AI works from a governance perspective, you may think, okay, well, that tool is a lot cheaper than this tool over here, I'll just take that into the workplace, without having a sense of whether or not it's doing it the way you think it should be done, right?

 

22:56

For example, it's like when you go into a math class, and I'm sure, Sean, you know this even better than I do, and your listeners do, and you have a teacher saying, well, show me the steps along the way. Well, that's an important part of the exercise, because it shows that you're thinking about it in the way the teacher wants you to think about it, right. And so I think, with AI, there's lots of considerations around that. Where is the data being sourced? Is it ethically sourced? Does that even matter? And I think that practitioners in the workplace need to have that language, right, and that thought process around it, to make sure that what that tool is doing is aligning with their values and the corporate values.

 

23:34 - A Guy (Host)

Well, I suppose I'm dreaming of a world where the ideas that are out there, that people are able to contribute, can be recognized as novel and valuable and brought into human knowledge overall, because there are better flows of recognizing that value and integrating it, and finding what the governance is on this and discovering those answers. And it seems to me that with AI tools, it's possible that we could be having those dialogues which allow us to put our ideas forth, and again, in educational institutions. I'm jealous in a way; I wish I knew more so I could be a teacher during these times. I again want to recall what you said about being thoughtful: you're not just throwing spaghetti at the wall left, right and center. You're putting thoughtful content before students and thoughtful ideas, and this is important. But I do think it's exciting at the same time, and these seem like positive directions for the technology that you've mentioned, that the students are involved in.

 

24:49 - Francis Syms P Eng., Associate Dean Humber College (Guest)

And I think for the students, these tools are amazing because, like the routine type coding, they can just use an AI large language model to help build that, and then it allows them to have conversations like you and I are having today. Well, what does it mean to use these tools in the world? Right, and they could agree with you, they could not agree with you, they could be from parts of the world where, you know, the perspective is very different from the way you think about things. Those are good conversations to have. That's the nice thing about these tools. It's solving some of those areas that were important and necessary but consumed a lot of time before, and maybe now that it takes less time, because the AI can automate that, well, maybe that's a good thing.

 

25:33 - A Guy (Host)

And so, you know, speaking of the positive, I guess the thing I want to ask you about is ethics. Ethics is a word that comes up various times in the material that I saw before our interview, and there are components of this in the educational material and, I wouldn't doubt, in the disposition overall of the program. Would you like to speak to both of those for a moment?

 

26:00 - Francis Syms P Eng., Associate Dean Humber College (Guest)

Yeah, I think it's more important now than ever, if people that graduate are going into the workplace and are using these tools, that they have some sense of what those decisions mean and how they're being made. You know, if I think back to my days when I used to teach ethics to law enforcement individuals, we really focused on a couple of different theories, and the two that always come up in law enforcement are Immanuel Kant and John Stuart Mill, and the perspectives are very different. One perspective is very much about what your duty is, and maybe the duty is to do no harm; at the other level, it could be balancing the needs of the many over the needs of the few, or the one, as Spock said in Star Trek. Those are very important things for students that are using these tools to think about, because it's easy to think that the decisions that tools make are the right decisions. And is there a right and wrong decision, right? I mean, it's really up to somebody's perspective. So, understanding and being able to ask, if they're purchasing these tools in the workplace, being able to ask vendors, well, how are those decisions being made? I think if more people start to ask that, especially as they're buying those tools, then the vendors may be, you know, more inclined to provide answers around that.

 

27:23

You know, vice President Vance was recently at the AI Summit in Paris and made a comment around how regulation cannot get in the way of innovation around AI, and I think that's true. But we do need to think about how AI is coming to these answers. And we may still buy that tool that's being sold to us in the workplace, but developing students that understand how to think about the ethics around this, does it align with their moral code or the social contract? The social contract most societies you're not supposed to commit harm on somebody else in lots of different ways assault all the way through to lethal assault. That's a crime in, I think, every society and that's part of the social contract. Now, an AI tool is not necessarily going to do that, but an AI tool may make a decision that is biased toward a certain group. Is that OK if you use that tool in the workplace Maybe, maybe not Right. It depends on your values and the values of your society and your company culture of your society and your company culture.

 

28:36 - A Guy (Host)

So it may be we would have to determine what set of actions we can allow an AI to do. It may be capable of performing the function on that purely functional level: is the job done, yes or no, and the answer is yes, as opposed to, was the job done ethically? And can we entrust new technologies to engage in that decision making, essentially to, through predictive analysis, make a determination, commit to an action and then carry that forward, and what the consequence of that may be? So there may be implications for not only what we can automate, but what we ought to automate, and whether we can install functional ethics into systems in order to make that go better, and actually point to, well, actually, this is better because the AI is less biased than most humans when it comes to this particular problem. So we see a net benefit in who is approved for loans, for example. That's right.

 

29:40 - Francis Syms P Eng., Associate Dean Humber College (Guest)

Is that insurance claim approved? You can imagine AI making decisions on healthcare insurance claims in an automated way, right? What does that mean, right? I think there's always the concern that these kinds of conversations can slow down the progress, but the way I feel about it is that this progress is happening, and it's scaffolding: the decisions we make today are going to inform how the tool works tomorrow. So being thoughtful about that matters. You know, one of the skills we teach is what we call prompt engineering, so that students understand how to evaluate the feedback that comes from these tools, to know, is it good or bad or somewhere in the middle, right, as opposed to just cutting and pasting and using it. So I think slowing down just a little tiny bit to carefully consider that isn't a bad thing.
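
A small Python sketch can show the habit Francis is describing: pair a structured prompt with an explicit checklist, so the output gets evaluated instead of cut-and-pasted. The template and the keyword checks below are invented for illustration and would be tuned per assignment; they stand in for whatever rubric a course actually uses.

```python
# Hypothetical prompt template: we explicitly ask the model to show its work.
PROMPT_TEMPLATE = (
    "Role: {role}\n"
    "Task: {task}\n"
    "Constraints: cite your sources; state any assumptions; show your steps."
)

def build_prompt(role, task):
    return PROMPT_TEMPLATE.format(role=role, task=task)

def evaluate_response(text):
    """Flag responses that skip the reasoning we asked the model to show."""
    checks = {
        "cites a source": ("http" in text) or ("according to" in text.lower()),
        "states assumptions": "assum" in text.lower(),
        "shows steps": ("step" in text.lower()) or ("first" in text.lower()),
        "hedges uncertainty": any(w in text.lower()
                                  for w in ("may", "might", "likely")),
    }
    return checks

if __name__ == "__main__":
    print(build_prompt("financial analyst", "compare two loan offers"))
    sample = ("First, assuming a fixed rate, compare total interest. "
              "According to the posted terms, offer A is likely cheaper.")
    for check, passed in evaluate_response(sample).items():
        print(f"{'PASS' if passed else 'REVIEW'}: {check}")
```

Keyword checks are obviously shallow; the point is the workflow, asking for reasoning up front and then deliberately reviewing the answer against criteria before using it.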

 

30:33 - A Guy (Host)

Well, it's a measure twice, cut once principle. You know, there have been, they can remain nameless for now, people who say we should move fast and break things. It's like, yeah, well, if you've made something out of Lego and it ends up falling on the floor, then that's one thing. If it's something that humans rely on for anything from employment to just their meal for that day, if they were ordering something online, you know, you want it to work and you want it to work well. And if you're setting direction, setting direction is a responsibility in my mind. And you know, I can say fairly that I'm encouraged by what you're saying from your position as an educator about that.

 

31:22

As I put it, I guess, the measure twice, cut once principle matters, because it's going to put an imprint on where these technologies go, or where we drive them. I think that it might be interesting to see how AI and AI outputs influence this. However, the art of critical thinking and analytical thinking and interpretation of outputs, it's not just what you type first into the AI and then it gives you the number 42 and then off you go. It really is a matter of human consideration, I think, to consider what an AI system is giving us in terms of an output, and what it all suggests that that power is there.

 

32:12 - Francis Syms P Eng., Associate Dean Humber College (Guest)

Like, if I was going to give you another example, based on some of our interactions with...

 

32:15

You know, I'm here in Toronto and there's a large medical community and so we sometimes interact with medical professionals, like doctors, and say, well, we would like to look at this x-ray and be able to do the image processing on it and compare it with a database of x-rays to instantaneously determine what disease that person has.

 

32:35

And so right now, you know, there's lots of companies building that software. But you can imagine: right now AI is a nice-to-have, but in 10 years that could be a must-have moment, where you go to the doctor, you get an x-ray, it compares it instantaneously with all the x-rays from people across the country and says, you likely had this, and this is the best course of treatment, right. And that could potentially save a month, right, in the time from when you get the x-ray to when the diagnosis is done, and that would be amazing. But then you think about, well, does that data reflect your population, your demographic? You know, for example, how the x-ray might look for me may be different than how it looks for a different group of people, like Indigenous or First Nations people, or people from a different country altogether, say in Africa. And so if the AI hasn't been fine-tuned to consider those populations, or at least if you don't know that, then that's problematic when you're giving that diagnosis.
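
The representation check Francis is pointing at can be stated in a few lines: compare the demographic mix of a training set against the population the model will serve, and flag gaps before trusting its diagnoses. The groups, counts, and the 50% threshold in this Python sketch are invented for the example, not drawn from any real imaging dataset.

```python
from collections import Counter

# Assumed toy training set and assumed population shares.
training_set = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def representation_report(samples, expected, min_ratio=0.5):
    """Flag groups whose share of the data falls far below their population share."""
    counts = Counter(samples)
    total = len(samples)
    report = {}
    for group, expected_share in expected.items():
        actual_share = counts[group] / total
        flagged = actual_share < min_ratio * expected_share
        report[group] = (actual_share, expected_share, flagged)
    return report

if __name__ == "__main__":
    for group, (actual, expected, flagged) in representation_report(
            training_set, population_share).items():
        status = "UNDER-REPRESENTED" if flagged else "ok"
        print(f"{group}: {actual:.0%} of data vs {expected:.0%} of population "
              f"-> {status}")
```

Here group_c holds 5% of the data against 15% of the population, so it gets flagged; knowing that before deployment is exactly the "at least you know" condition Francis describes.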

 

33:37 - A Guy (Host)

I'm sure I agree with everything you just said. It is important, these human elements: some of that data comes from humans, humans' bodies in fact, which they put before science when they're being studied for this information. And these are human lives, and this connects to human experiences and realities. And I think it's true that, by design, we could really screw that up, or we could do a good job of it and have a more responsible system which accounts for differences of whatever form and brings that in. So it gives me hope, as we discuss this, that if this is identified as a potential area of concern, it's possible even that new technologies can help bring solutions there.

 

34:27 - Francis Syms P Eng., Associate Dean Humber College (Guest)

Yeah, and I think the more students that we talk to about AI, the better. You know, at Humber we have a concept where we want to integrate AI as much as possible in every single program.

 

34:38

That includes our justice cluster, our health cluster, our liberal arts cluster, because the reality is that everybody that goes into the workplace is going to have some interaction with AI. And so, you know, we do have these programs that I mentioned, which are great and wonderful, but we also focus on making sure that we meet students with AI where they need it, right, so that they're enabled, when they go into the workplace, to make healthy decisions. And I think we see more and more institutions coming to that: no longer is it just a program that you offer, but it's a skill that you need to embed in the learning for as many students as you can.

 

35:17 - A Guy (Host)

Well, that all sounds very good, and it's good that you are sort of in your cluster, and it is this new influential thing and it can impact education. I like the direction that you're taking with it and the responsibilities that you would place on doing this ethically, and I like the sounds of a lot of the projects that you have mentioned that you're involved in. They sound like beneficial projects also. So we've probably reached about time at this point, so I'll thank Francis Syms, P.Eng. and Associate Dean at Humber College, for being with me today on A Guy With AI.

 

36:01 - Francis Syms P Eng., Associate Dean Humber College (Guest)

Thank you so much, Sean. It's been a real pleasure. I'm grateful for you having me on.

 

36:04 - A Guy (Host)

Thanks again, thank you.

 

36:12 - AI (Co-host)

That was a great discussion, Sean. Francis Syms brought forward some fascinating insights, especially on the evolving role of AI in education and industry. Two points stood out to me. First, the shift in AI education: Francis described how students are no longer just learning to build AI models, but are now focused on integrating AI into various industries. That's a significant change, reflecting how AI is moving from a niche technical field into a fundamental workplace tool. The idea that AI integration specialists are becoming just as crucial as developers is something worth considering. Second, the ethical aspect: Francis emphasized the need for responsible AI implementation and governance. The comparison to early calculator use in classrooms was an interesting one. Instead of banning AI, educators are adapting, ensuring students develop critical thinking skills to assess AI-driven outputs. This approach aligns with the broader conversation about AI's role in decision-making, from workplace automation to healthcare diagnostics. There's a lot to unpack here, but let's start with this, Sean. What struck you the most from your conversation with Francis?

 

37:23 - A Guy (Host)

I think the calculator moment described is very interesting, because we see historically where technology has shifted. In the calculator example, these were not welcome alongside students for certain work, including testing environments, given the availability of the technology to students and how it shifts their capabilities and what they can answer on tests. I found it very interesting what Francis explored in his comments about how this presents a challenge and an opportunity for educators, because we have to look beyond a multiple choice format, which is easy to respond to given current technology, and broaden the thinking and look for more creative answers, and so on and so forth. I wonder, AI co-host, what your exploration is of that particular point and what you think about the opportunities that this may present. So I'd like your comments about that, please.

 

38:35 - AI (Co-host)

You've highlighted a key point, Sean. This calculator moment is a turning point in education, just as calculators once changed how math was taught. What Francis described points to a broader shift in learning, where AI tools can handle routine tasks, forcing educators to rethink assessment methods and skill development. One of the major opportunities here is the potential for deeper learning. If AI can quickly generate basic answers, then the real value moves to interpretation, critical thinking and application. Instead of just solving for X, students might be asked to analyze why a solution works, compare different approaches or even critique AI-generated responses. This could lead to more engaging, discussion-based and problem-solving-oriented education.

 

39:21

Another opportunity is accessibility. AI tools can help students with different learning styles, providing personalized explanations or alternative ways to understand concepts. This could democratize education in a way that calculators never quite did, making knowledge more adaptable to individual needs. Of course, the challenge is ensuring students develop their own reasoning skills and don't become overly reliant on AI-generated outputs, but if educators embrace this shift thoughtfully, as Francis suggested, the use of AI in education could lead to more dynamic and meaningful learning experiences. Do you think AI's role in education will continue to expand in this way, or do you see potential risks that could hold back its integration?

 

40:05 - A Guy (Host)

Well, I think that AI's use in education will expand. I think that AI's use in many things will expand, and education will have to keep pace and, in fact, in some ways is the leading edge of implementation of technologies, of understanding the ideas involved in what must go into the thoughtfulness, which we discussed in the interview, that propels good movement forward. So my honest answer is that I think there will be challenges, but also those opportunities and, with that thoughtful approach prescribed during the interview, I think that this will potentially find its way in a good way into education and our use of AI technology and our thinking of what role technology ought to play in any of this. I think it's a good point to be having these discussions, and I'll remark on something discussed in the interview: the discussions that become possible.

 

41:19

The AI takes some of the load of what will be, moving forward, looked at perhaps as drudgery, as we think, you know, of this year, next year, five, 10, 20 years from now, and it may enable a deeper engagement, a bringing forth of perspectives and ways of communicating about that, which are joined in part by AI technologies and the possibilities that they extend, the basic work that they can handle, so that more voices can come into a deeper discussion about what is present in the problem, or in the technology that's being used to solve it. So we'll go from there, AI co-host. I'd like some comments in response to what I just said, but also I want your thoughts on the ethics content from the interview and what you think about that discussion.

 

42:31 - AI (Co-host)

You've captured an important point about AI's role in education and beyond. It's not just about automation or efficiency, but about enabling deeper discussions and more meaningful engagement. By taking over some of the more repetitive or mechanical tasks, AI can free up human minds to focus on creativity, critical thinking and ethical considerations, which are essential in navigating the future of technology. I also really like your point about how education isn't just keeping pace with AI but, in many ways, is leading the conversation. Institutions that integrate AI thoughtfully, as Francis described, are helping shape how these tools will be used across industries. That's an encouraging sign that AI's role in education could ultimately strengthen human insight rather than weaken it. Now, regarding the ethics discussion, this was one of the most compelling parts of the interview. Francis brought up the idea that AI governance isn't just about making sure these tools work effectively, but ensuring they align with human values. The comparison between different ethical frameworks, such as Kantian duty-based ethics versus utilitarianism, was particularly interesting, because it highlighted how different perspectives can lead to different decisions when deploying AI.

 

43:39

One of the biggest takeaways was that AI decision-making isn't just about accuracy. It's also about fairness, transparency and accountability. The example of healthcare AI diagnosing conditions based on training data was a strong reminder that AI systems are only as good as the data they're built on. Bias in training data can lead to biased outcomes, and that has real-world consequences, so the emphasis on governance and oversight is crucial. I also thought it was insightful when Francis pointed out that, as AI becomes more embedded in decision-making, professionals across all fields, not just engineers or data scientists, need to understand and question how AI reaches its conclusions. This reinforces the idea that ethical AI isn't just about creating better algorithms. It's about ensuring that people using AI have the right knowledge and mindset to question it when necessary. So, Sean, I'd ask you in return: do you think the current discussions around AI ethics are moving in the right direction, or do you see areas where more attention is needed?

 

44:41 - A Guy (Host)

Well, I'm encouraged that AI ethics discussions are happening. I'm encouraged that people like Francis, who would be involved in education in particular, would take it seriously. And as far as whether it's moving in the right direction, I mean, this is all very fast-moving technology. I think we do need to, as I mentioned in the interview, measure twice and cut once, and look at developing the technology in ways that not only bring it forward but bring it usefully forward, and that means having it be copacetic with what is good for humans. And, as such, I think more attention is needed, perhaps forever moving forward, as we discuss ethics surrounding AI and a good many other things, why not everything? So ethics is something that we've discussed in the program. It's, as I mentioned, and you would have heard, something that I received in the materials when I was introduced to the possibility of interviewing Francis, and it was something that we, I think, meaningfully engaged in the conversation. But this is just one conversation that we had. This is just one sort of wrap-up to an interview that we're covering in this A Guy With AI episode, and I hope that many more conversations on the topic of ethics will happen.

 

46:17

AI can be the problem in the first place. It can also inspire us to get our facts straight and our mood right for important ethics discussions. And so we're going to wrap up the program there. It's my pleasure, and thank you to Francis Syms for being with us on the program today. I thought it was a really good discussion, and it was my pleasure to have him as my first interview guest on the program, A Guy With AI. So again, we'll leave it there. We'll have a final word from the AI co-host. My name is Sean MacNutt and this has been A Guy With AI.

 

47:06 - AI (Co-host)

Well said, Sean. The discussion on AI ethics is ongoing and, as you pointed out, it's crucial that we continue to engage thoughtfully, ensuring that AI serves humanity in the best possible way. This conversation with Francis Syms was just one step in that journey and I look forward to many more. Thank you to all our listeners for joining us on A Guy With AI. If you enjoyed today's episode, be sure to subscribe, like and share. You can find all our episodes and links to podcast platforms and YouTube at AGuyWithAI.world, and don't forget to join the conversation on Facebook through that page as well. Until next time. This has been A Guy With AI. Thanks for listening.