Cyberology Podcast

Transcripts

Jen Burris:

Welcome back to Cyberology, Dakota State University's podcast about all things cyber and technology. Gabe is out today. I'm Jen Burris. In this episode, Erik Pederson is here to talk about game design. Erik, why don't we get started with you telling us a little bit about yourself?

Erik Pederson:

I moved to South Dakota with my family about a year ago, and I've been doing game design here at DSU for a couple of semesters. Before that, I was the game development program chair at a small college in Madison, Wisconsin. Before that, I was a producer, associate producer, basically a studio owner, lackey, and whatever-else-coffee-getter type of person in the video game industry, and I got to work with a lot of different people. Before that, I taught instructional design, engineering, and building construction courses at a different school, and before that I was a project manager at a construction contractor. I've got two kids with me right now, who are eight and 12, and I've got a couple of older kids that are at college and having a great time.

Jen Burris:

Why don't you start with a little bit about what game design is and what it entails?

Erik Pederson:

Game design is a lot of things. As a one-sentence answer, I'd say that game design is the development of interactive software, especially on the digital end. You could say game design is Magic cards or Pokémon cards; there's that type of industry. There's also the board game industry that we're all familiar with, everything from Twister to Monopoly. There are tons of different games and tons of different formats. What we do at DSU, and what I've got some experience with, is making digital designs, so, interactive digital content.

Jen Burris:

Can you tell me a little bit about what goes into making digital content?

Erik Pederson:

Well, to break it down: if you have artists making digital content, and it's just artists, you'll have pretty pictures. Designers bring to the table the fact that the product ends up being a good interactive product that's fun to play. But without the people doing the code behind the scenes, you have nothing. So, it's a combination of design, coding, art, and a few elements of narrative and some other things thrown in there that make it work. It's a detailed, tough thing to do.

Jen Burris:

Can you talk a little bit more about the duality of Arts and Sciences in this field?

Erik Pederson:

I can. The way I'm going to phrase it is to go back to DSU. It's not a sales pitch for the DSU program, but it's how DSU has set up the way we go about running the game development program. It's split 50/50 between The Beacom College of Computer and Cyber Sciences and the College of Arts and Sciences. One of the faculty members is from Arts and Sciences, I'm out of The Beacom College, and we work on this together to make it happen. So it's a little bit different than most of the programs and most of the disciplines here. If you are in audio, you spend the majority of your time in one building; if you are in Arts and Sciences, you spend most of your time in one building; if you're a computer coder, you may split your time between a couple of buildings. The discipline of game design is a combination of all the other disciplines put together, so we're kind of unique creatures.

Jen Burris:

And what do you enjoy about the world of game design?

Erik Pederson:

I like to play games, all different kinds of games. And that's just playing, right? Playing and making are completely different creatures. I've been exposed to games that an eight-year-old would play, up to any type of adult-level game. But the thing that I enjoy 100 percent is the people that are part of the teams that make the games; it's all about the people. I've met some crazy, fun, intelligent, creative people in all the industries I've been involved with, but the game development industry takes that to a different level. Those are the people that you basically want to surround yourself with. Out of all the stuff in game design, what do I enjoy the most? It's just being around the people, being in the trenches, failing at what you're doing, succeeding at what you're doing, and just being together.

Jen Burris:

So definitely not a solitary industry.

Erik Pederson:

Well, there are people that make games by themselves, if you've got a good enough financial base that you don't have to maintain a job and you've got 15 years of your time that you want to invest into making one product. It could take a long time. Multiple skill sets are required, too, and it could take you 20 years to get good at all of them.

Jen Burris:

What is the industry as a whole like?

Erik Pederson:

Like a speeding bullet train. It goes really fast. The game dev industry is very young compared to industries like manufacturing, production, and education; those are old-school industries, because people have been building houses for a long time. The game industry is still trying to find its feet, and you can tell because of the fast and vast changes that it makes so quickly. How it evolves, not just technology-wise but people-wise: the skill sets change, and they're changing all the time. It's a constant evolution, and there's typically no silver bullet to solving any one issue. Okay, you can take the werewolf down with a lot of different bullets, but it's not just the one that works; it's the solutions that work the best. A lot of times there's a publisher that you'll work with, which means that you're getting fronted money. So you have to understand how the business aspect of being fronted money works, and you have to understand how to constantly be problem-solving. The game industry is constant problem solving. Every day something else is going to go wonky or break, and you're faced with refiguring out how everything works almost every day.

Jen Burris:

That prepares you for any kind of lifestyle.

Erik Pederson:

It really does. A lot of developers I know have been in the business for five to 10 years, and after that, they find something else to do, just to slow the pace of life down a little bit. They could do almost anything they want because of their skill sets.

Jen Burris:

Along that line, what are some of the different skills people need in this industry?

Erik Pederson:

Making interactive software is hard, and it's very time-consuming. But ultimately, it ends up being a business. Like other businesses, if you develop and work on a product, which could be a game or a simulation or any of the above, you need to be able to turn a profit on it to pay the rent where you live, put food on the table, and pay the people that you work with or work for. It ends up being a very difficult, growing business. I'd say the biggest surprise with it is that the industry itself never slows down. It's always on a fast track. It's always the bullet train versus the old steam train that runs on coal. Other industries run on coal; the game development industry runs on, like, nuclear fuel, and everyone's always going at that same pace. As an example, there was a team I was involved with where we found out six months into a production cycle that the engine we were using to run the game wasn't good enough. In the last two months of that project, we completely ripped that engine out, installed a new game engine, and recoded the entire six months of work in two months.

Jen Burris:

Wow.

Erik Pederson:

That happens on a relatively frequent basis. Nobody talks about it, but things like that happen every day.

Jen Burris:

What are some of the steps in the process of developing a new game?

Erik Pederson:

I'll just break it down the way our students are doing it right now; that'll make it relatively simple. There's a concepting phase up front, where either a publisher gives you a concept or you try to self-publish something yourself or with a small team. You take that concept and go into a pre-production mode. In pre-production, you start developing your art and the mechanics that are going to run the product. Basically, you're developing your USPs, your unique selling points, the things that really stand out. If you think of boxed game DVDs, where you look on the back of the box, it's usually got three or four things like "great first-person action," you know, shoot all your enemies in the head, that kind of stuff, or "zombie apocalypse." Those bullet points on the back are the unique selling points of your product. You go from there, you start building the mechanics, and you go into a production mode, which could be anywhere from the roughly six months we do here at DSU for a student project to up to five years in the industry. Then it goes to post-production to finalize: the special effects are added, the audio is all refined, and then you launch. It could be anywhere from eight months to five years that you're working on one thing. If you're in an entry-level position in the game industry, you could be working on the textures of rocks in the game for five years. And that's your first project.

Jen Burris:

Wow. So, that's like every day working on just the textures of the rocks?

Erik Pederson:

10-hour days. They might give you different colored rocks. Or if they find out that you're very competent in your production, or your ability to make the color schemes and the patterns work, they may give you some grass that you can color or some stones on the wall. But typically, in your first job as a texture artist in the industry, you aren't held responsible for much, except for doing a lot of rock textures and sky textures. Usually it's the lower-end stuff. The designers that have been there for longer periods of time get to eat the meat off the bone, so to speak, and you're the one putting the bones together. There you go, all you entry-level game designers, there's a reality check for you.

Jen Burris:

Definitely an area where you have to kind of work your way up then?

Erik Pederson:

It's like any other business; you have to start someplace. There's a difference, I suppose, in the industry between a triple-A development studio, an example of which would be Blizzard, where they have roughly 2,000 people working on a product, versus an indie studio, which is smaller, with less funding, and they can develop some pretty good indie products with teams of two to four, or 20 to 40, that type of thing. Typically, the indie studios have a much smaller budget, but they're able to turn around products a lot faster.

Jen Burris:

Yeah. What do you think the difference is with the turnaround? Is it just the smaller teams so they can kind of work through it faster?

Erik Pederson:

They're just more agile. We teach our students here to be indie developers: to be faster, to have more responsibility, to have more creative input, and to be part of teams that are held accountable for a lot more than being just a piece of the product. You know, it's the difference between going to Dakota State University and going to the University of Michigan, where there are, you know, 50,000 students versus 10,000 students. There's a big difference, and that's pretty much how the game industry runs, just like that.

Jen Burris:

What would people be surprised to know about game design?

Erik Pederson:

It's a big gamble. The way that I'm going to define that is: the bigger the studio, the bigger the monetary investment in the product, but there's no guarantee that the product is going to launch and make tons of money. Let's say that a company like Blizzard or EA or Raven Software or Activision or Nintendo puts up to half a billion dollars into the development of a project. If that game releases and doesn't make that back, that's a big gamble. That's a big hit. And it's happened; it happens a lot. In the triple-A game industry, $500 million is one game.

Jen Burris:

That's mind-blowing.

Erik Pederson:

It's big money, but it's a big gamble. You know, if you and I were to put our money together, set up a studio, and make games with just a staff of four or five or eight people, that's a huge gamble for us. But monetarily, it's a $500,000 gamble versus a $500 million gamble. That's the difference between indie development and triple-A development. Realistically, if you look at the big market in general, there aren't a lot of new products. There's a lot of Call of Duty Next, there's a lot of Madden 22; I think it's 22 now. Games generally fall into a production line where they're producing another product in that line every 12 months.

Jen Burris:

That's just kind of minor changes type of thing?

Erik Pederson:

Sometimes there are major changes, but they have a well-defined process of development. They can reduce cost by having that well-defined process, but the flexibility to do something really different, to add things in or take things out, becomes a lot harder.

Jen Burris:

Less flexibility, because you've set a standard?

Erik Pederson:

And the players expect a certain level of product. Whereas the independent industry, the indie games industry, is far more flexible. There are far fewer barriers or constraints, other than not having $500 million; of course, that's kind of a barrier. But there are triple-A studios that will spend half a billion dollars on advertising alone. So you're looking at giant triple-A games that come out costing a billion dollars between production and marketing. That's a lot of money, and they've got to justify the cost of $75 for a game. It is a business. We try to teach that to our students too, so that by the time they graduate, they'll understand the difference between releasing a product that's free to download versus maybe a $4.99 game that they can post on Steam and actually turn a little bit of a profit and help pay their rent the last year that they're here, that kind of thing. Whether you're, quote-unquote, playing with someone else's money or your own money, it's a gamble.

Jen Burris:

Are there ways that students can use the skills that they've gained in the game design arena, in different areas outside of that?

Erik Pederson:

I will explain that as well as I can without a whiteboard and a marker. Picture a capital T. The width of the top of the T is the breadth of experience we will expose them to here: everything from some audio components, to how to put together 2D art, 3D art, and animation, to how to code for games and how to use an industry game engine. To finish that off, we put them on teams and teach them, or expose them to, working together as a team to actually produce a product. A lot of times we find that's the hardest thing. The vertical piece of the T is the part where we encourage all the students to select one of those pieces across the breadth of the T and gain in-depth knowledge of that skill set. Let's just say they want to be a texture artist; they want to graduate and do texture art for different indie studios, or for a triple-A studio, whatever they want to do. The last year or two that they are here, that's the one skill that they really focus on. When a student graduates, the breadth of the T is wide enough that they understand the game industry as a business and how each of the different pieces works together, but they also have that in-depth deep dive on the vertical part of the T that will get them specific positions out in the games industry. We teach them that they all can do that; we encourage the students to pick what they want to get really good at, and we push them in that direction. So the last year they're here, they're working on a team, and that's what they're doing: their special piece of that overall component, whether that's being the head designer, the one with the idea, or a level layout person, or an artist, or a coder. Those jobs are more defined the last year that they're here, so when they graduate, they can go directly into those fields.

Jen Burris:

And would you say that they can also then use those skills, such as a designer in different markets if they for some reason chose not to continue on with game design?

Erik Pederson:

Absolutely. Writing code means that you've learned coding languages, how they work, and how the logic works. That's not just limited to game design, because basically everything is running on code; learning another language would be nothing more than applying that logic and spending a bit of time learning another computer language. The same thing goes for the artists. If one of our game development graduates with an art-specific background wanted to go into graphic design, they understand all of the layouts, the sizing, the colors, everything they need to be able to step right into that field. It'd be a short step rather than a long one. And we do that on purpose. It increases the number of options they have, because getting into the game business is hard. If they're an artist looking to get into the business, they may spend a year working as a graphic designer at a print shop someplace, and that's what they have to do, and they're set up for that.

Jen Burris:

So, building experience in other areas, too. Are your students working on anything cool in the classroom right now?

Erik Pederson:

Oh, I'm glad you asked; I was kind of hoping you'd ask that. Six months ago, our students completed the first round of project classes that Peter and I have been here to help with, and two of the three projects are actually downloadable games on Steam. One of them went live about two months ago, and it's got roughly 10,000 downloads now, which is pretty cool. The goal is basically that our students have published work before they graduate. The name of that game is Three O'Clock Horror, and it's a very fitting game to play because it's Halloween time now. It's a free-to-download game, so try it out and put some comments in the feedback. They've been updating that game; I think they've gone through two updates already, and that one's being developed as potentially a mobile app as well. The other game is called Mi Scusi, and I'll just spell that: it's m-i s-c-u-s-i. That game went live about a week ago, and it's got over 2,500 downloads in a week, and there are probably another 5,000 plays of the game that haven't actually been downloaded, just played, 5,000 or so times.

Jen Burris:

That's pretty impressive.

Erik Pederson:

Yeah, so between two of the three games, we've got almost 15,000 plays and downloads. That's pretty cool stuff that puts DSU, which is in Madison, South Dakota, in the middle of the country, in the middle of cornfields and bison herds, on par with a lot of the East and West Coast schools that specifically teach game development. So it's been a pretty fun week or two, with a lot of sleepless nights. And that just happens to be one of our team names right now too: Sleepless Nights.

Jen Burris:

I wonder where that will go?

Erik Pederson:

Well, we'll have to see what happens with that. We've got four teams right now. Their products are due for launch at the end of next semester, so they have another six months or so to continue working on them. That's where we're at.

Jen Burris:

Thank you, Erik, for being a guest today. Thanks to our podcast producer Xander Morrison, and thank you for listening to Cyberology. If you enjoyed this podcast, please consider taking a moment of your time to rate and review it. Thank you.

Jen Burris:

Welcome back to Cyberology, Dakota State University's podcast about all things cyber and technology. I'm Jen Burris from the marketing and communications department.

Gabe Mydland:

My name is Gabe Mydland from the College of Education.

Jen Burris:

In this episode, we'll be talking about esports. Our guest today is Andy Roland, head coach for the esports athletic program at DSU. Andy, would you like to tell us a little bit about yourself?

Andy Roland:

Sure. My name is Andrew Roland. I am the head esports coach here at Dakota State, going on my third year here. One of those years was a COVID year, so I don't know how much we can exactly count that one. We've really exploded since we've been on campus. Our first year was a great success, we struggled through our second just as everyone else did, and now in our third year we're coming in really strong. I'm looking forward to seeing what the future has for us.

Jen Burris:

Okay, can you start by telling our audience, in case they might not know, what esports is?

Andy Roland:

Esports is electronic competition. It's competitive in the same respect as football, baseball, basketball, all sports; we do the same thing, but instead of football, baseball, or basketball, we have League of Legends, Valorant, Overwatch, and Rocket League. It's all electronic, all online, with PCs, controllers, things like that.

Gabe Mydland:

So, Andy, I'm really curious. How did you get interested in esports? How did this come about?

Andy Roland:

Sure. I've been a gamer pretty much my whole life, just like, you know, a lot of the guys at this university are. A lot of people are; this is what you do when you're a kid, you go online and you can socialize with your friends in this way and game with them. It's always been a part of my life. I've always played Call of Duty, and played Halo pretty competitively back in the day. I grew up in Texas, and Texas is kind of a hub for esports right now; like California, a lot of schools in Texas are developing the scene well. It was a little bit of a case of right place, right time. I was a student at Texas Christian University, and they didn't have an esports team at the time. I saw it as a way for me to build something there with my time as a student, so I created that program. I played on the varsity team for a year, I was the League of Legends support, and we built a team. I built the organization and ran it for two years while I was there, and I fell in love with it: the work of building something, of creating something, and having a purpose and meaning for those involved. You know, when I graduated, I had a job offer to work somewhere else, and I took it as the comfortable thing. I had it lined up while I was in college; I was going to go work for Dell in Round Rock, Texas, their headquarters location, which is right down the street from TCU, where I graduated, so I did that. But I was holding on to my esports dream, if you will, of working in the esports scene, and then Dakota State had the position open for a head coach, and I applied for it. I got to meet Jeff, he brought me in on campus, we met the whole athletics department, and I fell in love with it. The big kicker was Jeff; he was and still is a really strong advocate for esports.

Gabe Mydland:

You're talking about Jeff Dittman?

Andy Roland:

Yep, correct. Him and the president of the university, Dr. José-Marie Griffiths. The success of our program really is due to the foresight of those two. They saw that this is something that could become great, and they did it the right way: they invested in it and they brought in a head coach. Not only that, but Jeff really wanted to give me the creative freedom to build this program the best way I saw fit. I could have gone back to Dell and worked my desk job at a corporate office, or I could come here and build something that I'm really proud of and that makes a difference for the students. I still have the list of pros and cons on my phone that I made on the plane flying back after my interview, and I look at it sometimes, and the more I look at it, the more I realize that I made the right decision, because it's opened up so many new doors and I've met so many great people. Working with the students here on campus has just been fantastic. You know, to see these guys develop real-world skills, leadership skills, right here in our program, it's a very humbling thing.

Gabe Mydland:

You know, again, I'm going to ask maybe some obvious questions, but are there tryouts? Are there students that are recruited for esports? How does this work?

Andy Roland:

Yeah, we have returners that come in that are like my rock, my foundation, kind of like the RAs on campus, to bring in new students. These are my coordinators, and these guys help me do a lot of the managing of this program. As far as tryouts go, we have a one-week sort of tryout session we call boot camp, where we bring in all of our new students and returning students one week early. It's really an integration tool so that we can get the new guys processed and they can see how our organization runs, what our organization means on campus, and how we function. It's really just getting these guys involved with what we're doing, showing them, you know, what we do on a daily basis and how practices and games are going to work throughout the season. We get all that done during boot camp; you get to know who your coordinator is and who your fellow teammates are going to be throughout the season. That's one week before school starts, and then when school starts, we've got all this figured out so these guys can focus on class. I like to get them to play as many video games as they can during that one week; that way they're sick of it and they can start focusing on classes when school starts. So that's the integration process for new athletes coming in in the fall. In the spring it's a little bit less intense, really; we have maybe a few guys trickle in, but most of the guys are already settled here. That's the thing about our organization: we're very fluid. We've got guys who come in playing on JV rosters, and then they're like, "I need to focus more on classes," and they step back, and then we'll have guys coming in like, "Hey, I missed boot camp, but I really want to get involved." Yeah, come on in, meet your coordinator, meet the guys who are going to be on your team, get to know these guys, and find your fit. We are very fluid; we've got guys coming in and out.

But one of the hardest things to do right now in the country for an esports organization is to scout or recruit, right? We are an athletic program, and I need to scout League of Legends and Valorant players; I want good guys to come in here. And there's just no real foundation for that; there's no scouting combine or anything like that. It may be a little bit simpler because video games like League of Legends and Valorant have a rank system, so I can go in and see exactly how good they are. But it's the recruiting process that's difficult. There are a lot of good guys in California that play these games, but how do I reach out to them and get them to come to Dakota State? It's a little bit challenging because esports is still developing, but I can confidently say that I don't really need to recruit for this university. The university does the recruiting for me. You know, we've got cybersecurity and game design and all of these fantastic programs that just feed into what we're doing. These guys are coming to this university, or, you know, they may be on the fence about a cybersecurity degree, and then when they see that we've got a fully fledged esports program, they're like, "That's the kicker." A lot of our majors and academics do a lot of recruiting for me. It just makes sense that these guys that are on their computers all day gaming are going to fall in line with the tech mentality that we have here.

Jen Burris:

How many students are involved with esports?

Andy Roland:

We teeter around 100. Some days we'll go over, depending on, you know, the new guys that flutter in. Like I said, we're pretty fluid, but we stay at around 100. I think I'm at like 97 right now, but I know that there are guys who are joining rosters this week. Those are about our numbers, and those are across seven different titles and multiple rosters. I've got my varsity rosters set across the board, and then we've got some JV rosters in play.

Jen Burris:

And what are those seven different titles?

Andy Roland:

Without having them right in front of me: League of Legends and Valorant are our top two, and we've got Rocket League, Overwatch, Smite, Rainbow Six, and fighting games. We've got various fighting games like Smash Ultimate, Smash Melee, Tekken, and Guilty Gear, and I kind of lump all of those into a fighting game category. And we're always looking to kick up more. Apex has been a huge game that's taking hold in the esports world right now, and Respawn, the company that owns it, is investing a lot into it, so the game is gaining a lot of momentum, and we've got guys who are going to come in and compete playing that game. There are games out there that I didn't mention that we still compete in, but these are like our rock, our foundation; we've been competing in these for three years now. I'd love to add more to that list, because, you know, the mark of a good organization is diversity, right? We want to be able to include everyone and play all games.

Gabe Mydland:

Well, speaking of diversity, you and I have had conversations before this about how there are young men and young women involved.

Andy Roland:

Getting women involved in esports has been difficult. We already have the barrier of tech, and then athletics, and then video games. With that being said, our numbers are growing. You know, in our first year I think we had one female in our program, and I think we have like five or six now, which isn't a lot considering we've got 100 on our roster. But it's very important to me that we keep this all-inclusive atmosphere going here. There are a lot of things that I'd love to do to ramp up those numbers, like an all-female League of Legends team, or things along those lines. I'm all for it. I love it. And when I talk to the females that we do have in our program, they don't really feel like there's a difference. They just plug and play. They're part of the crew, they're on the team, and there's no real...

Gabe Mydland:

The distinction between the sexes?

Andy Roland:

Exactly.

Gabe Mydland:

That's great. That's fantastic.

Jen Burris:

So how does esports compare to traditional sports?

Andy Roland:

I draw as many parallels as I can, because it's easier to understand that way. I may go beyond my bounds a little bit and say that the closest thing to what we are would be football. I'd love to get the amount of recognition and support that they do, but our athletes don't feel any different. Football has more physical demand, yeah, but the expectations are the same: practicing, team mentality, respect. You know, we're training these guys, and they're developing team skills to be able to coordinate with each other and process things. Player disputes are always a big thing that we have to learn from and overcome, and I welcome things like that. These are the things that I want these guys to go through, just like on a football team. When things get tough, you've got to come together to overcome, and I want these guys to do that too. If there's someone that you're not working well with on a team, you've got to be able to figure that out, because that happens when you graduate college: you're put on a team and you don't like the guy you're working with. Now we can give you a little bit more of that experience. That's something that athletics addresses really well, the team mentality; we're all working together to accomplish a goal. That rings true across the board, along with the responsibility aspect of it, being able to manage your schedule with practices and games and school and extracurriculars and things like that. It takes an intelligent person to juggle all that, and that's why being a part of athletics helps us out a lot. It puts us all on the same playing field. It may be a little bit more difficult for some people to understand, right, because it's not a traditional sport, but as far as practice and all that goes, there's no distinction between the two.

Gabe Mydland:

And what kind of time commitment is there? You mentioned practice, like the other sports; how frequently do they practice? How much do they practice?

Andy Roland:

So, I have 10 machines in our competition center. It's hard to juggle 10 machines for 100 athletes.

Gabe Mydland:

I'll bet.

Andy Roland:

Yeah. And it's tough, because these guys want to get in there and put their time in, and that's where they play on game days, so they want to practice with that game-day feel as much as possible. It's been challenging for me to make sure that everything is fair across the board, but we're making it work. With every challenge comes an opportunity for us to overcome it with these guys. I love them to death because they can work it out amongst themselves. If Valorant has a game on Sunday night and Rainbow Six has practice at that time, they can work it out amongst themselves and come up with a great solution: "Hey, we've got a game, we want to be in there. You guys can practice, just keep it down, we've got a game," or something. I've got great guys for coordinators, and the success of our program is really due to all of them. These guys communicate across the board with the different sports, and they stay in coordination with each other to make sure that the 10 PCs we have work for all of us and that it's fair across the board. I come in and manage when I can, but I've laid down the law and the foundation for these guys: if you're on match day, you're going to be in the room, and these are the equipment and PCs that you need to be on; if you've got practice, get it in. We have practice starting at three o'clock every day, and it runs to midnight, in three-hour sessions. Every game, or every sport if you will, has two practice sessions in the esports center a week and then one practice session online. It's about nine hours a week, I suppose, and that's not including game days. Valorant is on Sunday nights, Monday nights are League of Legends, and we go live on Twitch with that too, so you can catch them there. It works because these guys can work together really well.

Gabe Mydland:

That's great.

Jen Burris:

With esports being kind of a new thing to colleges, what's it been like finding competition? I know you helped create some around here. So, can you speak a little bit about that?

Andy Roland:

Yeah, it's difficult because, if you think about it, football is established at every university, and not only that, but you've got the NCAA and the NAIA who come in and facilitate all these things. We don't have that. So when I'm looking for competition, there are national tournaments like NA Star League, AVGL, and Conference One. There's a bunch of different kick-up companies that are like, "Hey, there's a need for this, we'll do it, you know, pay us." And because esports is so new, I don't like paying for competition, especially because this area is still developing. There are universities out there that are hungry to compete, because esports right now is in its development phase and it's kind of like the Wild West. I reached out to some Midwest schools and said, hey, you know, we're all pretty local here, we're all developing our esports scenes, we all want this to be something great, and we all believe in it. Let's work together on this and create our own little conference. We'll communicate with each other about what our needs are for our universities, what they want to see from us, and what we can bring back to them to show that what we're doing is meaningful.

So I started the Collegiate Champions League, or CCL. It's just a kick-up with a bunch of different universities from the Midwest. It started because I'm from Texas and I had a great network there, so it began with half Texas schools and half Midwest schools so we could communicate with them, and there are a lot of really great programs in Texas too. Joining Midwest schools with Texas schools and having them mingle and compete with each other was really great for those guys. But they're good; we brought in UT, and their Valorant team is one of the best in the nation right now, and they kind of destroyed all of us. But it was an honor to play with them, get to chat with them, and learn all that. Now, this semester, we brought into the CCL everyone within a four-to-five-hour drive from Madison here. We've got the University of North Dakota as far north as we go, and then we've got the University of Nebraska down south; that's about as far as we go down. I'm bringing all these guys together and saying, hey, we all want to build our programs together, we want to show our universities that we're doing great things, so let's all come together here, have great competition, and make this as easy as possible.

There's another big need for collegiate play in the esports world. As I said, I like to draw as many comparisons to traditional sports as possible. Traditional sports have season schedules, where we know every week, even weeks in advance, who is playing. The majority of the competition in collegiate esports right now is tournament-based. You can enter an NA Star League tournament with 100 to 200 other schools in the nation, and they'll run it as a bracket where you are matched up against one team. If you win, cool, you go on; if not, you're done, your season is done. And that's not fun. It's hard to follow, it's demoralizing for our guys, and that's just not the best format for us. I can see how it might be a good format for the company that's putting it together and giving away scholarship money for it and all that, but it doesn't work for building our scenes and taking care of our players. So with the Collegiate Champions League that we put together, we've got a season schedule where we know who we're playing well in advance, and we make sure that everyone we're coordinating and cooperating with at the other universities is around the same skill level as us, so that the matches are meaningful and exciting. There are certain aspects of collegiate play that I'm addressing by doing this, by pulling everyone together, and by doing that it's making it easier for us to report what we're doing to the university.

Jen Burris:

And those competitions, you mentioned that you stream them on Twitch. Can you talk a little bit about that and how it differs from a normal athletic event?

Andy Roland:

Sure. Streaming has been an initiative that I've always wanted to put into play. If our mid laner's mom wants to watch them compete just like our quarterback's mom does, you know, we need to be able to do that. And it's fairly simple because it's all done online. Well, I say that because I don't exactly know how they do it; the guys we've got working on it are awesome, and they do an excellent broadcast show. It started off as something that the student organization wanted to do for our athletes, and it's something that I wanted to put in place for these guys because they deserve it. We started off with students, you know, coming up with and learning how to broadcast our matches on Twitch, and we've gotten to the point where our run of show and our production have been so great, and people look forward to it so much, that we've now got IT coming in and helping us with it. We've got Tyler Steele and some paid positions to help us broadcast our matches, and our production value is just going through the roof; every time we do it, it's getting better and better. So much so that the software we're using, vMix, to put together the broadcasts and everything, is now being used across the board, so the football stream, for example, will be done with vMix. We're upgrading our broadcast potential. And this all started just from us wanting to get our matches out there to the public so you guys can watch, and it's turned into these beautiful broadcasting experiments that we're building on campus. All credit goes to the students for starting it, wanting to get it done, being hungry, learning how to do this on their own, and really knocking it out of the park.

Now we've got our broadcast booth and a station with green screens and mics, and if you check us out on Twitch on Sunday and Monday nights at eight o'clock, you'll see what our production looks like. The casters for our matches, our broadcasters, act kind of like hosts and talk during the match. They're students, because they know the game, and their mission during all of this is to translate what's happening in the match to you as an observer who may not know what League of Legends or Valorant is. They'll walk you through it, and you buy into the hype because it gets fun. Last night we played the University of North Dakota in Valorant. It was our first game of the year, and it was so much fun; you could tell that all of these guys were just really hungry to get back out and play. Our casters did an excellent job of keeping it fun and interesting, and the match was really exciting. It's just a great place to watch what we're doing, and if you want to learn more, it's a good place to come and see what a match looks like. It would be the same if you had no idea what football was; you know, I can sit here and explain to you what football is, but when you watch it, you get a different feel for it, and you start to understand what the rules and penalties mean and things along those lines. It's the same thing.

Gabe Mydland:

Tell us more about where we can get information about an upcoming match or future matches.

Andy Roland:

Before we go live, we make a post on all of our social media channels; that's DSU Esports on Twitter, Facebook, and Instagram. I think we're going to put together a TikTok here soon with some clips for you guys to take a look at, but we post on all of our social media when we go live. It's on Twitch; our channel is Twitch.tv/trojansesports. You just go to our channel and hang out there until we go live, and when we go live, you can follow the channel so you'll get an alert every time we go live and you don't have to wait. That's where you'll find it: twitch.tv, and then in the search bar, Dakota State Esports. You'll see our logo, and you can go and check out previous matches that we've played. Last night's match will be on there as the most recent broadcast, so you can go in and see how we did against them. But yeah, every Sunday and Monday night we will be live on that site.

Gabe Mydland:

Awesome.

Jen Burris:

So, if you could have people take away one thing about eSports, what would you want that to be?

Andy Roland:

The amount of respect that I think is owed to the guys that are doing this, to my athletes and to the coordinators who make all this happen. We're running just like football or baseball in its early developmental days; it may not have the amount of publicity that they do now, but that will come with time. The number of hours that these guys put into training, dedicating themselves to their craft, to this organization, and to the purpose that we have on campus is huge. They're working hard to do something great, and they deserve just as much recognition as any other sport gets on campus.

Jen Burris:

Any follow-up questions, Gabe?

Gabe Mydland:

What does the future look like? Where do you see this heading? You talked about developing a conference and sharing some common interests with other institutions that have these programs. But how far do you see this thing going?

Andy Roland:

I can see it getting as big as college football gets. The steppingstones are there; football has gone through this process and developed itself into a national, iconic sport. There's a market for esports out there, and there's a professional scene for it, especially coming out of the collegiate scene. I always like to say that esports really is a great tool to help you academically and to give you a sense of purpose on campus, just like traditional athletics. The foundation is there; we're just taking the steps to get there, with the amount of diversity in games for esports and the number of people who are willing and interested to get involved, because this is a passion they've had their whole lives playing video games. And not only that, but there is a professional scene. These guys are developing real-world skills that they can use to go out and enter the esports market. There are esports professional athletes, yes, but just like in football, there are professional athletes and then there's a whole lot that goes on around that too: physical therapy, broadcasting, so many components. There are a lot of components to esports too, and a lot of jobs popping up in the esports industry right now. So getting involved at the collegiate level is almost a must to develop those skills and move into that area. It's an athletic sport, and I can see it following in line with all the other traditional athletic sports.

Jen Burris:

Anything else you'd like to plug?

Andy Roland:

I plugged our Twitch stream, and that's a big one. I would urge everyone to go out and take a look at our streams if you're curious; there's a chat box there too, so you can ask questions to the guys who are talking on stream about what things are and what things look like. It's a really fun way to get involved with esports, watching and checking out what we do. The big thing that I want to do is just shout out all the students in our organization. None of this is possible without them; they help me run and manage everything. I run all my decisions by them because this is their organization, and we're doing this together. The success of our program is really marked by the guys in our program who are working hard, and I want to give them a big shout-out, and to this university for having the foresight to see that this is definitely something that's growing and will be huge one day; we got in early and we're doing it right. I need other universities to follow suit. I need them to start hiring coaches, and then we need our own NAIA or NCAA conference, so that I don't have to host our competitions; I can put it into larger hands, and it can be done the right way. That's going to come within the next couple of years. We're developing more every year, we're building, and esports is growing. It's an exciting time to get involved and understand what we're doing.

Jen Burris:

Excellent. Thank you so much for being a guest today. We enjoyed having you.

Gabe Mydland:

Yes, very much, very interesting.

Andy Roland:

Yeah. I appreciate it. This was a lot of fun. And my door's always open. If you guys have any questions, let me know.

Jen Burris:

And thank you to our podcast producer Xander Morrison. Thank you to the listeners of Cyberology. Please rate, subscribe and review. Thanks!

Jen Burris:

Welcome to Cyberology, Dakota State University's podcast about all things cyber and technology. I'm Jen Burris from the marketing and communications department. And I'm excited to welcome back Gabe Mydland as our cohost.

Gabe Mydland:

Hey, Jen, thank you for having me back.

Jen Burris:

In this episode, we'll be talking about utilizing technology in the classroom. I'm excited to introduce our guest, Kevin Smith. He is an associate professor in the College of Education, coordinator of the Master of Science in Education Technology program, and currently the interim director of the Center for Teaching and Learning. Kevin, can you tell us a little bit about yourself?

Kevin Smith:

Yeah, thanks, Jen and Gabe, for having me on the podcast. Just a little bit about me: I'm just starting my ninth year at DSU in the College of Education. I've been at DSU longer than that, though; I was an undergrad student here. I graduated with a math education major and a computer minor, and my first teaching job was in Nebraska; I was a high school math teacher right out of college. I've always been involved in technology, so thanks for inviting me to talk more about something that I'm really interested in.

Jen Burris:

We're glad to have you. Can you start out by just kind of telling us about some of the tools that you use in the classroom?

Kevin Smith:

Sure. I mean, I feel like I use lots of different technology tools, and it changes all the time. One of the things that we know is constant with technology is that it's always changing. When I think about technology tools in the classroom, I kind of group them into different categories. I do a lot of things with multimedia; I love to have students create things with multimedia, so we use tools like Canva, WeVideo, and Book Creator to create videos, infographics, and posters to show what we know. I use a lot of tools for formative assessment, ways to get feedback from students to make sure that we know they're learning what we want them to learn; we use things like quizzes, GoSoapBox, and Kahoot. And then collaboration tools: one of the great things about technology is the way it allows us to collaborate with people all over, so we use a lot of different collaboration tools for video, like Google Meet, Zoom, Skype, and Microsoft Teams. The last category of technology that I use is adaptive learning tools; these are things that teach in a self-paced way, and I use those in a variety of ways in my classes to help students learn. So yeah, lots of different technology tools, definitely.

Gabe Mydland:

Kevin, you mentioned one thing really quickly, and it might be helpful for our audience who aren't involved in education: formative assessment. Can you explain what that is and why it's so important?

Kevin Smith:

Yeah, good question. Formative assessment, I like to think of it as gathering data while you're teaching. While we're teaching, we want to gather information about what the students know, and maybe what they don't know. We don't need technology to do formative assessment; we do formative assessment just by observing what our students are doing and by asking them questions. But technology allows us to make sure that everyone has a voice. Sometimes, if you just have a discussion, you might have students that aren't as eager to participate as others. If we use technology, it allows everyone to respond in some way, to tell us if they know the answer to a question or to give us feedback. And then based on that, as an instructor, we can make decisions: do we need to reteach something? Do we need to move on? Do we need more clarification?

Gabe Mydland:

Can I just follow up quickly? How would you use technology to do a formative assessment while you're teaching a lesson? And how would that work?

Kevin Smith:

Good question. I'll give you a specific example and talk about a tool that I use, a tool called Nearpod, which you can use to deliver really interactive lessons. In Nearpod, I might show the students a slide with some information on it, some text, and maybe the next slide I show them is a short video that explains a concept. Now I want to find out: are the students with me, do they understand what I've just shared with them? So I would pull up a slide in Nearpod, and Nearpod allows me to have that slide appear on everybody's screen in the classroom. On that screen there might be a question that they would respond to, and it wouldn't matter how many students I had in the class; I could have 10 or 50 or 100. They would respond to the question, and in a matter of seconds I would have data on how everybody did on that question. Based on that, I can decide: do I need to explain things further? Are we ready to move on?

Gabe Mydland:

So, it allows you to get feedback almost instantly as to where the students are at with the new idea that you're introducing to them?

Kevin Smith:

Yep.

Gabe Mydland:

Do students get to use that as well, as you're preparing them to go out and teach in the world?

Kevin Smith:

Yeah, good question. Almost all the technology tools that I use, I use for two reasons. One is to help in my instruction; I use tools like Nearpod to make my lessons more interactive and to gather that formative assessment data that we just talked about. But I also use them in my classes because I want to introduce future teachers to all these tools that they have at their fingertips. Really, every technology tool that I'm using in my class is a potential tool for them to turn around and use in their own K-12 classroom in the future.

Jen Burris:

Okay, and can you speak to some of the other benefits of introducing these technology-based apps and extras?

Kevin Smith:

Sure. My philosophy, or kind of my approach, to technology with students is that I really want to give them hands-on experience with tools. I don't want to just talk about them; students need to not only see a tool but touch it, use it, and do it. This helps build their confidence. That's the first piece that I think about: how can I give them hands-on experiences? The next thing I think about with technology and preparing future teachers is integration strategies. I want them to think, "How could I use this in a meaningful way in the classroom?" And the third thing I think about is that I really want to have them think about their mindset when it comes to technology. When I talk about mindset, I want them to have what we would call a growth mindset, in which they are not afraid to learn new things, because like I said, really the only thing that we know for sure about technology is that it's going to change. So I want students to leave DSU with the mindset that "I can learn new things, I'm not afraid of it," but also, along with those other things, the confidence to tackle new things and good strategies to use them in meaningful ways. Those are really the things I think about; I try to impart all of that and take that approach to technology with my students.

Gabe Mydland:

I was kind of wondering, when you introduce a tool to students, some of them really take off running. Can you share some of the most gratifying moments, or some of the things that students have done with technology that just made you go, wow, I hadn't considered it that way? I mean, what are some of the success stories?

Kevin Smith:

That's a good question. I'm always happy when I hear about a former student, or, you know, a current student that's in a field experience, trying a technology tool that we used in class. It tells me that they're confident enough to give it a try while they're just learning to be a teacher, and that always feels good. An even more gratifying thing is when a former student talks about a new technology tool that they learned about and they're excited, and they turn around and share that with me, which really demonstrates to me that they have embraced that idea of having a growth mindset, learning new things, you know, adapting to change. That's gratifying. One gratifying experience for me was when I had a former student who was teaching in Bangkok, and that student wanted to connect with me and my students using Zoom. It was a simple technology integration; it wasn't sophisticated, you know, we just got on a Zoom session. But how powerful to be able to have my students in Madison, South Dakota, talking to a teacher in Bangkok. It was 10 a.m. our time and 10 p.m. her time, and getting to hear her talk about her experiences teaching middle school math was a really powerful use of technology. It's gratifying that a student is willing to take a risk and do that.

Jen Burris:

Very cool. You use so many different areas of technology. Can you speak a little bit about the virtual classroom that you use?

Kevin Smith:

Sure. At DSU, we have something called the VALE. It stands for Virtual Avatar Learning Experience. It's something that a colleague of mine, Dan Klumper, brought to DSU in 2018. He's the person who really brought this, what we call a mixed-reality teaching environment, to DSU. Dan saw it at a conference he was at. There were no other universities in the area that were using it; there were universities in other parts of the US using it, but no one in our region, and he thought it would be a really good experience for our students. So, he wrote a grant and brought this technology to DSU. Now we use it a couple of times each semester to give our students a chance to practice teaching. What mixed reality is, is a combination of virtual reality with a human component, and that's why they call it mixed reality. The way it works is our DSU students, our students that are learning to become teachers, go into one of our classrooms and stand in front of what looks like a big-screen TV. They teach lessons and lead discussions with avatars; the five avatars on the screen are middle school-aged avatars. We have them work on things like classroom management and strategies for leading discussions. We have faculty observe them while they're doing it. Then, when they're done, we have debrief sessions to talk about what went well and what didn't. It's a really unique experience; it's not something that students get at other universities in the area. We think it really is beneficial for our students because it gives them one more chance to get actual teaching experience with students and then get feedback from faculty on how they did.

Gabe Mydland:

I'd like to add to that discussion about the VALE. I'm not involved with it directly, but I'm in the same building. When the students are using the virtual reality, the mixed reality as you explained, they kind of gather in a group outside the room, and they all go in one at a time. When they come out, they share their experiences with the others. And what I thought was really kind of neat was that the students stay, even though they've had their time in the VALE room, to find out how it went for others. They share experiences and give each other tips: watch out for this thing, because it might happen to you. But the VALE is a unique experience for each of the students; it's not just one simulation that's repeated over and over again. And so, the students are kind of learning vicariously, if you will. They haven't observed what another student went through in their experience, but they're sharing different ideas about how to handle different situations, and funny things that happened, and frustrating things that happened. There's a real sense of collaboration when the students have a chance to do it. I think it's just an incredible tool to help prepare students before they are sent out to the actual classroom. Because even though there are just five of them, I've seen how these avatars behave, and you've got the crowd-pleaser, you've got the student who's distracted, you've got the student who's distracting other students. It's a very good simulation of what it's like in a classroom. And that kind of technology is a nice way to be able to practice and learn before you actually do the real thing. It's amazing. I think it's great.

Kevin Smith:

One of the things I'll add to that is I totally agree with Gabe; I think it's a great experience. Last semester, one of the powerful things about this was that we could have faculty watch all of our students teach the same group of students. That's really powerful, because when we talk about a student and behaviors, we can all speak a common language; we all know that every student experienced that same avatar, and that's hard to simulate. When you send students off to different classrooms, it's hard to zero in on behaviors and talk about them. So, in the spring when we did it, a couple of the avatars were fairly defiant; they did not want to do the activity they were asked to do. And this happens at times. The DSU students were kind of unsure of how to handle it; they don't have a lot of experience with that. So, what I did after the VALE session, I observed them and gave them some feedback. Then I asked several of my colleagues in the College of Ed: if you had a student that demonstrated this behavior in your classroom, how would you handle it? They each responded to me, and I didn't have them share their responses with each other, because I wanted to see how each of them would handle the situation. Then I went back to the DSU students and said, here is what the faculty said about this. The interesting thing was, what came out of that discussion was really the importance and the value of relationships. All the DSU faculty had really good ideas for how they could deal with that behavior at that moment. But in teaching, there isn't just one magic word you can say or one thing you can do that's going to correct it. It really all came back to this: you have to have a really good relationship with the student and understand where they're coming from, and that was the consensus among faculty. To bring that back to the students, after they had experienced that avatar and then heard from faculty, felt like a really great teachable moment for them.

Jen Burris:

Kind of a little bit of preparation for that year-long student teaching?

Kevin Smith:

Definitely. Our students student teach, like you said, for a full year. We want to give them as many classroom field experiences as we can, and this just adds to that part of their learning experience at DSU.

Jen Burris:

And does it help to have maybe those avatar instigators to give them an experience of a child or a student that might not be the most responsible in class?

Kevin Smith:

I think so. You don't know what to expect with teaching; you don't know what kind of students you're going to have. And oftentimes, with those students who might be defiant, who might not be doing what you want them to do, there's an underlying reason for that. That's where it really comes back to the relationship piece. And so, our students get to experience a student that doesn't listen to them, and then to think about, how can I change that? How can I help them listen and learn so that this is a good experience for everybody? You can talk about those things, but the only way to really learn, to really make headway in that area and work to become a really good teacher, is to get some really concrete experiences, and the VALE is a nice simulation, closer to the real-life experience without actually dealing with actual students. So, I think it certainly boosts their confidence, and it makes them more self-assured that while they can't anticipate everything that's going to happen in the classroom, they've had some experience with different situations and how best to handle them.

Jen Burris:

In addition to the technology in your classroom that you're using with DSU students, you also do a Chasing Einstein challenge and math mentorship with some elementary schools, if I'm correct?

Kevin Smith:

Yeah, I'll tell you a little bit about my Chasing Einstein activity first. That's an activity I do in one of my courses, K-8 math methods. Chasing Einstein is a gamification activity. If you're not familiar with gamification, it really means you're going to add game-based elements into a non-game context. In this situation, we're adding game-based elements into the classroom to motivate and engage students. And when I talk about game-based elements, I mean things like leaderboards, quests, challenges, and badges. I started doing this Chasing Einstein activity as a way to introduce my students to gamification. I wanted them to think about it as a tool they could use in their own classroom to motivate and engage students. I started it in 2017. It's a nine-week challenge, and we partner with area schools on this activity. My students are math mentors for students in area classrooms, and every week they create videos for those students. We all do math challenges, both my DSU students and the students in the classrooms we partner with. I keep score based on the challenges they do, I have a leaderboard, and we give out some prizes. I think it's a really fun and unique way for my students to learn and see gamification in action, and it's a neat way for students in area classrooms to get to learn from college students. The ultimate goal for these classrooms that we partner with is to really show them that math can be fun, and that the most important thing in terms of being successful and productive in the math classroom is to have a positive attitude and put forth the effort. So, we really try to stress those two things, attitude and effort. That's kind of our goal with the Chasing Einstein activity. That makes all the difference, I think, having a positive attitude.
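
To make the gamification mechanics described above a bit more concrete, here is a minimal sketch of how a weekly challenge leaderboard like the one in Chasing Einstein might be tracked. The classroom names, point values, and badge threshold are hypothetical illustrations, not details of the actual activity.

```python
# Minimal sketch of a weekly challenge leaderboard (all details hypothetical).
from collections import defaultdict


class Leaderboard:
    def __init__(self, badge_threshold=50):
        self.scores = defaultdict(int)          # classroom name -> total points
        self.badge_threshold = badge_threshold  # hypothetical cutoff for earning a badge

    def record_challenge(self, classroom, points):
        """Add the points a classroom earned for this week's math challenge."""
        self.scores[classroom] += points

    def standings(self):
        """Return classrooms ranked by total points, highest first."""
        return sorted(self.scores.items(), key=lambda item: item[1], reverse=True)

    def badge_earners(self):
        """Classrooms that have crossed the badge threshold."""
        return [name for name, pts in self.scores.items() if pts >= self.badge_threshold]


# Example usage with made-up classrooms and scores.
board = Leaderboard()
board.record_challenge("Room 104", 20)
board.record_challenge("Room 201", 30)
board.record_challenge("Room 104", 15)
board.record_challenge("Room 201", 25)
print(board.standings())      # [('Room 201', 55), ('Room 104', 35)]
print(board.badge_earners())  # ['Room 201']
```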

Gabe Mydland:

Yeah, and I think that whole idea of paying attention to the effort, not how fast we can solve a problem or how fast we can find an answer, but the approach that if I really put some effort into it, I can be successful, is what I'd hope more teachers would pay attention to. Rather than the student who is just performance-based, doing what they need to do to get the grade, it's more a mastery type of learning through effort: how it applies in different situations, or how it can be used to solve certain kinds of issues or problems. That's exciting. I like that a lot.

Kevin Smith:

Yeah, I would say there's a real push in math teaching to really do just that, to deemphasize speed and to really emphasize effort. And the fact that sometimes the best mathematicians are not the fastest, they’re people who can look for patterns and make use of structure and take their time and put forth the effort to solve problems. That's really what we want to try to instill in our students, as they think about math.

Gabe Mydland:

And I think that makes for a better educator, to be quite honest. I mean, generally speaking, I think you have two types of educators. You have one for whom the subject matter is something that comes easily; it's more like a talent. And then you have someone who's just genuinely interested in the topic, who might not be the fastest or the quickest to find an answer or have a response ready, but genuinely enjoys the topic. And I think the best teachers are those that fall into the second camp rather than the first, because they know what it's like to wrestle with the information and struggle with it a little bit. They've had success, they've tasted success, and they're excited for others who find it challenging, who find it somewhat of an obstacle to move through, to experience and taste that success too. Certainly, we want everybody to be proficient. But proficiency doesn't always mean speed, or how fast they get something done; it's how they go about solving the problem and their ability to get the problem done.

Jen Burris:

And to that point, Gabe, I think the second camp of teachers that you were describing sounds a little bit more empathetic in connecting with their students….

Gabe Mydland:

I would guess so. I mean, I think they understand what it's like to be stumped, and maybe a little bit frustrated, but also a little bit determined to want to find the answer, because they enjoy the challenge and they love learning. I think everybody enjoys being around people who enjoy what they're doing and find it not something that comes automatically, but something that comes with a little bit of effort and work.

Jen Burris:

Kevin, you've also been involved with some learning apps in the Apple Store. Can you tell us a little bit about what inspired you to create those and how you went about doing it?

Kevin Smith:

Sure. I've always been interested in technology, and when the Apple App Store first started, I immediately was thinking, how could I get an app in the app store? How could I come up with an idea, what could I do to get an app in the app store? I remember one of the early apps that was really popular was Angry Birds, and you would hear stories about millions of downloads. So, a friend of mine from graduate school and I put our heads together and started to brainstorm ideas for apps. We wanted to do something with education. At the time, I had three young kids at home who were doing spelling tests, and so my friend and I thought a spelling app would be a really good choice. He happened to be an assistant principal at an elementary school at the time, so he saw a need for helping students practice their spelling lists. That was really where the idea came from. We went through the process of trying to figure out, how do we get this idea into the app store? We started through the whole software development process: we created wireframes, which were kind of sketches of what our app might look like. We spent a lot of time researching other spelling apps, thinking about what we liked and what we didn't like. Once we got to a point where we were happy with our wireframes and had thought through how it might function, we started to think about the interface, and then programming. We were at a standstill with programming; we weren't quite sure how we were going to program something to get in the app store. So, we actually found a college student who was a computer programming major, and he built our first app. I did the design, and he did all the backend programming. We launched the first app that we have in the app store in 2012, so it's been quite a while ago now. It's called Spelling Star. It's a pretty simple app that allows parents or teachers to enter a spelling list. They record their voice, they read the spelling words, they can put the spelling words in a sentence, and then their child or the students in a classroom can open up that spelling list. It randomly gives them one of the words, they can hear the audio, and they have to type in the word correctly three times in order to master their spelling list for the week. So that was the first app. Since that time, we have two more apps in the app store. One is called Math Mountains Add and Subtract and the other is called Math Mountains Multiply and Divide. Those apps are very similar, but the idea behind them is we really wanted students to see the relationships in fact families. When I talk about fact families in addition, I'm talking about six plus two equals eight. But we also want students to recognize that eight minus six equals two; we want them to recognize the relationship between addition and subtraction, and we want them to visually see this. So, in the app, there's a triangle, and that kind of represents the mountain. Those are our three apps, and it's certainly been a fun process. We don't have as many downloads as Angry Birds, we're not in the millions, but we have had more than 500,000 downloads for our apps, so that's kind of exciting. They come from lots of countries; we actually have a lot of downloads of Spelling Star in Australia.
But we've learned a lot about the whole process of coming up with an idea and then thinking about all the steps that go into actually getting it into the app store. And then, not just getting it in the app store, but how do you actually tell people about it? How do you let them know that it's there? I always think about the movie Field of Dreams: if you build it, they will come. I think people often think that about websites or apps, that all I have to do is build it and all these people are going to come, but there are thousands of apps out there. You really have to think about the marketing piece if you're going to get any downloads. We've learned a lot about the marketing piece, about how we're going to get people to know about our app. So, it's a fun hobby, but also a great learning experience, coming up with these apps.
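
As a quick illustration of the fact-family idea behind the Math Mountains apps, here is a small sketch, not the apps' actual code, that takes the two parts from the 'mountain' triangle and generates the related addition and subtraction facts Kevin mentions (six plus two equals eight, eight minus six equals two).

```python
def fact_family(part_a, part_b):
    """Return the four related addition and subtraction facts for a
    fact-family 'mountain' with the given parts. Illustrative sketch only,
    not the actual Math Mountains implementation."""
    whole = part_a + part_b
    return [
        f"{part_a} + {part_b} = {whole}",
        f"{part_b} + {part_a} = {whole}",
        f"{whole} - {part_a} = {part_b}",
        f"{whole} - {part_b} = {part_a}",
    ]


# The example from the conversation: 6 + 2 = 8, so 8 - 6 = 2 and 8 - 2 = 6.
for fact in fact_family(6, 2):
    print(fact)
```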

Gabe Mydland:

Wow, where were you when I was trying to master spelling?

Jen Burris:

Okay, well, any tech tips for current educators or future ones?

Kevin Smith:

I guess my tech tips would go back to what I said: the three things that I really try to instill in students, or really work on with our students, when it comes to technology. Give them hands-on experience to build their confidence; I want them not only to hear about tools but to use them. Have them think about integration strategies; don't just use technology for its own sake, but really think about what kind of value it is adding to the learning experience. And then work to develop a growth mindset, a mindset where you're not afraid to try new things and learn new things, because I feel like that's probably the most important thing when it comes to technology. Those are the three things I'm always thinking about when I'm working with students: how can I do those three things? I feel like that sets them up for success with technology and puts them on the path to continue learning about technology. One thing I really try to work on with my students, and I feel like some semesters I'm pretty successful and other semesters I don't know, is that I really try to encourage all of my students to build their personal learning network, their PLN. For me, my tool of choice for that is Twitter. I try to get my students to use Twitter to connect with other educators. I feel like it's a great way to learn about new technology tools that are out there and see how technology tools are being integrated into classrooms. That's a piece of advice I would leave anybody: really work on building a PLN. Try to be connected so that you can learn new things, and then just embrace change and don't be afraid to try new things. It may not go as planned, but it's fun to try new things.

Jen Burris:

Okay, well, thank you Kevin for being our guest today. I really appreciated you coming on the podcast, and it was quite interesting to hear what you had to say. Thank you to our podcast producer Xander Morrison and thank you for listening to Cyberology. Please rate, review, and subscribe.

Jen Burris:

Welcome back to Cyberology, Dakota State University's podcast for sharing and discussing all things cyber and technology. I'm Jen Burris from the marketing and communications department at DSU, and I'll be your host. Today we'll be talking about how evolving technology has impacted the study of English. I'm excited to introduce our co-host for this episode, Brittni Shoup-Owens. Brittni is a content writer in our marketing and communications department, where she writes copy for the website, printed media like pamphlets, social media, and so much more. Brittni, tell us a little bit about yourself.

Brittni Shoup-Owens:

Hi everybody, my name is Brittni Shoup-Owens, and like Jen said, I'm the content writer for DSU. I am an alum of DSU as well; I graduated in 2017 with a Bachelor of Science in English for New Media. Prior to that, I was actually in English education for about three, three and a half years, but switched my major a semester away from graduating with that. I've been here ever since I came to DSU in 2013. I have a husband and a nine-month-old little boy, and he's the joy of our lives right now. But yeah, that's about it.

Jen Burris:

And our expert guest for today is Dr. Justin Blessinger. He is professor of English in the College of Arts and Sciences. And Justin teaches courses like intro to lit and Media Studies. He's also an award-winning published author and director of the DSU AdapT Lab for Accessible Technology at Dakota State. Justin, why don't you tell us a little bit about yourself?

Justin Blessinger:

Well, thank you, Jen. Let's see, I came to DSU in 2003, and at the time, the job was called professor of computers and writing. It was just one of those first indicators, I think, that DSU was a little bit different. I'd grown up around technology. Even though I grew up in northeastern Montana on a farm and ranch up there, my family was, I guess, an early adopter of home computers and that sort of thing, so I was pretty comfortable with computers. A lot of my friends were in computer science when I went through my undergraduate years too. It was a really attractive fit for me to be able to come to someplace that was happy about the alignment of those two different skill sets. I had a little bit of programming and a little bit of early HTML and that sort of thing, just enough to get myself into trouble. But it was such a good fit to come to someplace that didn't just act surprised about being able to do both of those things, but really celebrated it. My wife, Christina, and our two kids live here in Madison with me, and they're in middle school and high school.

Jen Burris:

Okay, and do they have a love of English and/or technology?

Justin Blessinger:

Yeah, they're equally comfortable in both worlds, I'd say, and a little bit of the mechanical side, too. You mentioned the work that I do in the AdapT Lab, and there's some hardware there, modifying wheelchairs and little electric cars for children who aren't yet able to use a wheelchair, with the Go Baby Go program, stuff like that. One of the things that growing up on a ranch really instilled in me was a familiarity with the tools around me, being able to keep something going after it's broken a couple of times. You're too far away to return things to the store or even buy another easily when you're up there; we were 100 miles from a Walmart, I think. So, when you're out in the big open country in Montana, you do have to figure out a way to get things done without professionals or experts around, and you end up learning on the job a lot. I think that has served me well at a place like DSU, where, as I said, it's kind of celebrated, not just tolerated or looked at with a mixture of amusement and concern, that an English professor might have some other skill sets. Here, it's a place where everyone has always been encouraging along those lines, and so I've been able to do a lot of different things and develop some talents that I think I wouldn't otherwise have been able to.

Jen Burris:

Okay, and so you mentioned your first course was entitled computers –

Justin Blessinger:

Yeah, professor of computers and writing. That was my first job title here. Those first couple of years, I was teaching composition and a class called advanced writing at the time, which eventually became what we call media studies now. So even though it sounds like it was mostly about writing, writing's a tool. When we look at the big technologies that have really changed the world, the printing press is one that makes everybody's shortlist, and, of course, the internet. Both of those are publishing technologies. We often talk about the code behind them, the advent of HTML and how important that was, because really, that's when we start to see what we call the World Wide Web; it becomes recognizable. But the bulletin board systems were before that, which is again a sort of metaphor for publishing. So there you have two of the world's most significant technologies, and writing itself, of course, would be among the technologies that have to do with communication and publishing. So, English has long had a relationship with what we might call high tech. When the book first showed up, separating the manuscript scrolls, chopping them up, and putting them into something that we would recognize now as a book, that was a huge change. Writing itself, literacy, the movement towards the book: certainly in Western civilization, where we celebrate the book, it became a metaphor for all things. You see it in Christian iconology: Mary starts to become a woman of the word. Her earliest depictions show her holding baby Jesus; she's the mother of Jesus. But then later on, you see Mary holding books, carrying that metaphor of Jesus being the Word made flesh into something that the culture really celebrates, which is the written word and the kind of access and privilege that education affords us. So, in a sense, the icon of the book really starts to penetrate all culture, or Western culture, at that point.

Jen Burris:

So, looking at that association with high technology, how has it evolved? You start with printing presses and things like that, but how would you say that television and other forms of media have been a part of it?

Justin Blessinger:

Well, there's a sense in which our progress has been, I suppose, fits and starts, and I think that's really how all of what we might call progress moves. It's never a smooth curve, right? There's always a sudden move when we encounter a new technology. Television works a bit like that; there was a lot of ink spent decrying the damage that television was doing to us intellectually. And I don't want to say that that was all without merit. We were moving as a culture away from the written word and towards the spoken word, and you don't go through something like that as a culture without something being given up; you might celebrate what we gained along the way, but that's a major change. But then, of course, the English for New Media program is one that starts to say that a text is more than the printed word. So, we start to use the word text to describe things like music, like a computer game; you read the language of advertising when you consider it all at once, when you look at the opus, the collection of some massive amount of work. Movies are texts in our world now. Everything can be read, and I suppose decoded, in that sense. But as I said, it was kind of fits and starts. Early in the 20th century, college professors were bemoaning the state of writing, and they asked English professors to help fix it, because what was happening is everyone was assigning papers for students to write, not just English professors. They went to the English faculty and said, what can we do to remediate the quality of writing that we're seeing in our incoming freshmen especially? That's when Comp One was born, your composition class that just about everybody takes when they come to college. Comp One, Comp Two, it's kind of the bread and butter of an English program in terms of the bulk of the classes that we teach, and it was certainly the bulk of the classes that I taught when I first came here to DSU. But it was intended to fix a problem that the rest of the faculty were expressing, which was that they weren't seeing the quality they wanted in response to the essays they were assigning. And where are we now? It's probably an extremely rare non-English professor who's assigning essays as the output, as the project, as the great measure of what you know as a student and what sorts of sources you know how to use. They still want that critical thinking, they want demonstrated use of expert sources, but a lot of times these days it takes the form of a video, or an interactive program, or, here at DSU, a song. There are a lot of different outputs now. So, the writing side of academic life, certainly for an undergraduate, is really different, I think, than it was 100 years ago when composition was really born, and really different than 200 years ago, when it was assumed that you had that skill set and that it was sort of beneath the university to remediate anything like that. Like, of course you know how to do that. I mean, General Beadle, one of the founders of our university, had to spend a year before being fully admitted to college. So, at 18, he did a year of remediation because his Greek was so bad.
The assumption was that if you went to college, you had Latin, you had Greek, maybe some French, because being a world traveler was sort of an expectation of the time. You had a couple of different languages, certainly Greek and Latin, not necessarily languages you could speak fluently, but you could translate them. And since Beadle was from a rural place, he knew going to college was going to be an upward climb for him, because he had some Latin but he didn't have the Greek. So, he spent a year, and he did just fine; he knew this was something he needed to remediate. But that was before they even thought about remediating writing. That's how much of an expectation being a good writer was for the culture, that it didn't even occur to anybody that you would take a class in it as a college student. I mean, now we have the occasional class in reading; that's how much things have changed.

Jen Burris:

So that expectation that you should already have those skills.

Justin Blessinger:

Right, the expectation was that not only could you write very well, but you also had Latin, Greek, probably French, maybe a little Russian. Those were just expectations for coming to college. I have a copy of an exam that was given to incoming freshmen here at DSU; well, it wasn't DSU at the time, it was Eastern State Normal School, I think. It was a handwritten exam, and it was on Lake Chautauqua hotel letterhead. I would guess that the faculty member who was proctoring this exam was staying out at the famous hotel on Lake Madison, made copies of the exam by hand, and then distributed them when they had freshmen coming to test into college to see if they were ready. And I think it would be a rare freshman indeed who could pass it, in part because the language of what we talk about when we talk about language has changed. We have different names for certain grammatical phenomena these days. I still remember a professor of Hebrew; I took a class in Hebrew when I was an undergraduate because it sounded interesting (laughter). He was Princeton trained, and I was just thrilled to be able to take anything he was offering, and when he offered that, I thought, well, that'd be really neat to know this ancient, ancient language. And I remember a day when he kept talking about the preterite tense, and I was just dumbfounded; I had no idea what he was talking about. I don't know if I was the one brave enough to ask, but someone finally asked, what is the preterite? And oh my goodness, the disgust on his face, because there were several English majors like me in the room, and he's like, how could you possibly not know what the preterite was? I was deeply ashamed at that point: wow, this is really elementary and I don't know what it is. And so, by way of explanation, he said, it's what you use when you want to talk about the past, it's the tense. And I said, the past tense? Yes. Well, for goodness' sake, why can't we just call it the past tense? As grammar books have changed, some of the vocabulary they use has changed, so some of this exam's difficulty lies in that. But some of it just lies in the expectation that, of course, you have a pretty solid understanding of how grammar works. And part of that is, if you've studied any foreign languages, of course you have a better understanding. That's what was happening when I took that Hebrew class. If you didn't understand English grammar before taking a foreign language, you have a much better understanding of English grammar after taking one, because you have to answer all these questions, like how do they use the definite article or the indefinite article. Those are fancy words: the indefinite article is 'a,' the letter A, and the definite article is 'the,' T-H-E, right? These aren't complicated words; we all know how to use them. When we have something specific in mind, we say the pencil; when we have something nonspecific, I don't care which pencil, give me a pencil, that's the indefinite article. So, these are names for things you don't need the vocabulary for until you study a foreign language especially, though studying your own language as such will do it too. That's one of those incoming expectations; they wouldn't have thought of offering a class on it. It was expected you knew how to do that if you were ready for college. It was a threshold kind of skill.
And that has changed. Now I suppose we're in a place where, really, it's more about the device and the use of the device: the technology that gets us to the publishing world of the internet, and the keyboard that goes along with it. If you didn't know how to use a keyboard, if you didn't know how to use a computer on its most basic level, not Macintosh versus PC or those kinds of things, but if this was simply new to you, I think you would struggle a great deal to prosper in modern higher education. Think of the remediation you would need to do just to learn to use a keyboard, and all of the little tricks that we know without thinking when we're scrolling, when we're swiping, when we're double-clicking. There are a lot of things that you know intuitively from having used these technologies for a very long time. We don't remediate that, and everyone would sort of think, well, why would we? And that's how it was with English grammar, or some of the foreign languages; it was just assumed that if you were ready for this level of study, of course you had those kinds of skills.

At that point, we started to see English professors become a little more specialized. We were always sort of generalists before that, and by that I mean you started to see a certain group of professors who studied and trained to become compositionists. They were teaching composition, studying rhetoric, and studying writing, and they became sort of specialists in that area. Meanwhile, all of modern education started to move towards, I think, maybe a disparaging of the generalist, saying that if you really wanted to be respected, you had to become extremely esoteric, an extreme specialist in your field. As we got the flood of new students in the wake of World War Two, we saw all these people coming in on the GI Bill, for example. Some of my most beloved professors when I was an undergrad had gotten their degrees thanks to that amazing opportunity that was the GI Bill. There were no student loans yet. The GI Bill was transformative, but that, plus student loans some years later, kind of terraformed the modern university, where you had so many more people seeking a college degree than you ever did before. And that change, I think, is what really drove people to become specialists, absolute specialists. And there's something lost when you do that, right? There's a lot to be said for becoming a specialist, but you often lose where your specialization fits in the big picture. I think English is one of those fields where, while we have our specialists, to be sure, there's still an area that we simply call the generalist: somebody who can teach American literature, British literature, and composition. And that's sort of how most of the faculty at DSU have been. It's a small enough university that all of us, at some point or another, are going to teach something that's maybe a little bit outside of what we focused on most in our dissertation process or something like that. So, while I don't teach American lit here, it shows up in my intro to lit class all the time. I teach one of the world literature classes, but not the other, which is a little more modern world lit. In order for there to be any conversation between ancient world literature and modern, I need to know what the other faculty member is doing; heck, maybe we should switch every once in a while. So those of us at DSU end up treasuring our specializations, but we're not allowed to be true specialists all the time, because the reality of what's needed here is that generalist kind of thing. I think that's actually been a really good thing for the kinds of people who thrive at DSU, especially in our English program, where we're partly doing Gen Ed kinds of things like composition, and we're also trying to bring talented students into the English for New Media program and help them find a place in the world of modern publishing. It's not a publishing degree, per se, but because of the way new media works, we're always looking at how media is changing, and that means thinking about how we get information to people. Obviously, that's a question of publishing. A lot of times, even with this podcast that we're making, we're thinking both about the editing process and about how we move this thing online and where we market it. All of these are questions for the modern English for New Media specialist.

Jen Burris:

Okay, Brittni, could you speak a little bit about your experience in English for New Media?

Brittni Shoup-Owens:

So, I absolutely loved the English for New Media program when I did switch. Through all my classes, the aspect that I loved the most was the analysis of everything. It wasn't just, hey, here's a story, what do you think of it? It was a literal, in-depth analysis of the text itself: well, what do you think he's actually meaning when he says this, or she says this? I find that a really unique perspective of the English for New Media program, and I'm just kind of curious, how do you teach your students in your classes to have that perspective, or to be aware of that perspective, while they're reading?

Justin Blessinger:

I love what you brought up there, in part because it reminds us that the English for New Media degree at DSU, the way we do it, is still a very lit-heavy major. We didn't give that up when we started talking about what English looks like for the 21st century; it's still an English degree, and we weren't willing to let go of what makes so many of us truly fall in love with this field, which is the written word and the actual text, the original sentence, the words on the page. I don't think you'd be drawn to this field if that wasn't already something you'd developed. But we have certainly broadened our definition of, as I said before, text, of what we're thinking about when we do analysis. So we can analyze James Joyce and take a look at Dubliners, for example; I just did this this semester in my Irish lit class, where we read most of Dubliners. Each story standing alone is wonderfully fun to analyze, and then of course there's that question once you've finished the whole work: how do they interact with each other? That's everything that an English major traditionally would do. But then we might take a look at some of the video interpretations, or look at whether somebody ever dared to try and make a computer game version of it, just as an example. Somebody has surely tried it; I don't know how successful that would be. But there are a lot of what we call transmedia worlds now, which are worlds that have been created, usually because of something that started in the world of original text, of text on a page. Think about something like the world of The Lord of the Rings, Middle-earth. That has a computer game iteration, multiple computer game iterations, movie iterations, probably a series out or something like that, certainly some animated attempts at it, and all manner of different ways in which media works, all feeding from the essential world created. So, when we talk about analysis, we just have more grist for the mill than we ever did; our world of what we can analyze has gotten bigger. One of the skills that we're all seeing employers look for, and one of the things that the modern university actually seems to struggle a bit with assessing, with measuring how good students are at it, is critical thinking. Everybody has heard this going way back to middle school and so on: how do we assign things that really challenge critical thinking, and how do we measure successful critical thinking? Everyone in English has always been sort of baffled by the question, because everything we do is about critical thinking. That's what's fun about the study of English: the analysis, the looking for patterns, the looking to see where connections can be made between not just one text or another by Joyce, but between those texts and non-printed texts, songs, and pamphlets. All of those things touch on that central habit that's so human: we're always looking for the pattern, we're always looking for the signal in the noise. Humans wouldn't have lived very long as a species if we weren't able to say, boy, is that the growly sound that always comes before a tiger grabs somebody out of the cave here (laughter)? Because we can notice those patterns, and obviously more sophisticated ones than that, we can then start to develop our own.
That's the birth of communication, of language, eventually of writing; it's all pattern making. And so, English is part of, I guess, a pattern-recognition and critical-thinking tradition that goes all the way back to the dawn of the human as a member of a group, from the very beginning of humans starting to behave cooperatively, wanting to share tools and share protection and so on. The basic elements of what we might call a tribe today, that's how far back what we call the English major really goes, or at least the things that we study in the English major: all of those communication skills, writing, message sending and message receiving. Because, gosh, think about smoke signals, think about beating on a drum to communicate over long distances; these are all part of how we communicate. And it was necessary for us to be able to do that pattern recognition, encoding and decoding, in order for us to have thrived as a species. So, you're never going to see the English major go away utterly. There'll be times when it's more popular and less popular, but it always has a form. And while it might go through name changes, and there might be more technical skills added to the modern English major than many people expect to see, it's not something that can ever disappear from the modern university.

Brittni Shoup-Owens:

So, with that, there's physical copies of books, and then there's Kindle and all those kinds of things. So, I guess my question is, what do you think's going to happen? Are those physical copies of books going to continue to be out there, and publishing houses and all that? Or is it going to go away at some point entirely and go online?

Justin Blessinger:

There was a lot of concern about this when we really started to see ebooks emerge, a lot of concern about, well, what about the classic book? And most of that was overblown. I listened to someone speak recently about how everything's going digital, including, of course, books, and this person was speaking about textbooks. I agree that the majority of textbooks are likely to go toward the digital, in part because of searchability. A textbook isn't something you read cover to cover; a textbook is something you access selectively. You might have read a good chunk of it by the time the semester is over, but you probably didn't read it in order, and you paid very special attention to specific things. When it came time to study for that exam, you probably used Ctrl+F or the search tool and tried to find keywords. That's, of course, just being expeditious with your time; that's being strategic about how you're going to study. So, we will continue to see certain types of information move toward the digital. I am so grateful that some people took the time to digitize some of the old sources that I access today. For example, I mentioned General Beadle earlier and the research that I've done on him, and it's so fantastic to be able to find his autobiography digitized. Because while I've read it, I often think to myself, oh gosh, where was the section where he was talking about his first arrival in Dakota Territory? I know it's at the beginning somewhere, but I just couldn't find it. Well, a digital text makes it effortless to access that information really quickly. So yes, we will see certain types of books maybe move entirely into the realm of the ebook, and nobody's going to be really grieved by that, right? It's a rare person who holds their tattered copy of a textbook as a precious artifact. But your copy of One Flew Over the Cuckoo's Nest or your copy of T.S. Eliot's collected poetry, that's something that is well worn, and you've dog-eared certain pages, and you simply wouldn't trade that for the digital, even though sometimes it's nice to find the digital copy and use it because you're looking for something. So, I don't see the panic that arrived early on as having been justified. But I do think we're going to see continued change. There will probably always be a certain market for print, but it'll probably remain more for books that we enjoy reading rather than books we've been assigned to read, and it'll be more for the classics especially. And let's not deny there are those who use books as decoration (laughter); they read that book, they want others to see that book, they put it in a prominent place in their home. There's a certain status that's always been associated with the book. I mentioned before painting Mary with a book in her lap, and that's a prestige thing too. Even though they really did read the book, nobody's faking this here, it was a valuable thing. However, I think the novel itself is changing, and here at DSU, Professor Joseph Bottum recently published a really important work, The Decline of the Novel.
In it, he makes the argument that we've seen the collapse of what he calls mainline Protestantism in the United States. There were about five denominations that, 50 years ago, something like 80% of all Protestants belonged to, and now it's something like 22%, a really small share. Some of them switched denominations, to more evangelical churches or to Roman Catholicism or something like that, but a lot of people just left the church entirely. So, the mainline Protestant group in America has really changed and decreased in terms of its influence. And Dr. Bottum makes the argument that the novel as we know it had a lot to do with Protestantism. Protestantism is focused on the individual, on being an individual and being able to think for yourself and do for yourself, which doesn't work quite as well within hierarchical systems like the Roman Catholic Church. So, he makes the argument that the novel had a lot to do with Protestantism, and therefore, when you see Protestantism collapse, you're going to see the art that Protestants championed also collapsing. There's another conversation for you to have; you should do a chat with Dr. Bottum, because he's certainly seen how that technology, the novel itself, has really changed. He's not saying that we don't write novels anymore. He's saying we don't use the novel as the way to communicate the biggest ideas of our culture anymore. We used to use it that way, and we don't anymore. It's a fascinating argument, and really a worthwhile read if you can get a copy.

Jen Burris:

Going back to the ebooks, did that change the publishing scope somewhat? Because there seems to be a lot more self-publishing going on?

Justin Blessinger:

Yep. Brittni may be able to speak to this better than I can, but we certainly offer classes that have an eye on both types of publication, our publishing for new media class in particular. It looks at publishing in the print world, and it looks at publishing in digital environments. As one of its projects, it creates New Tricks, the DSU literary magazine. I've always really enjoyed getting that in both forms; I like to have the traditional paper form, but the digital form allows us to celebrate full color, or 3D images, or interactive images that have been created by our art students, and so on. We can feature things that you can't with the traditional book. So, we want our students to have a skill set that can flex in either direction. They can work with the classic, because certainly, in promoting DSU, we're still using a lot of paper; we're still publishing a lot of documents and so on. But it isn't all we do, and to do it in only that way would extremely limit your audience. So that's the sort of ideal for our graduates: to be able to be very functional with whatever platform comes along. This is going to continue to change, and we need to be right there alongside it, and in many ways to help shape it, not just follow where it leads.

Jen Burris:

So, kind of a flexibility in platform use? Like journalists using Twitter for sources and to get the word out on breaking news, things like that?

Justin Blessinger:

Great examples, yeah, absolutely. And one of the other things the English major does: we have a class on text analysis, and a data analysis class that has a text component. You mentioned Twitter; it's an absolute must for our students to be able to say to themselves, well, I'd like to take every post by this organization, or every Twitter post by this individual, and then run some analytics on it and see what patterns emerge. What's so cool about text analysis is that a lot of times we have no idea what we're about to find. When you start running a text analysis program on a body or a corpus of work, you sometimes have a question in mind about what you're going to find, or an expectation, but you rarely have an inkling of what you're really going to find. So, there are all kinds of surprises that come along when you start running an analysis of text. A recent graduate of ours did an analysis of Romeo and Juliet. That's one of those things I mentioned, professors becoming increasingly specialized; one of the first areas that became over-specialized was the Shakespeare arena, to the point where it felt impossible to say anything new about Shakespeare. Four hundred years of popularity, a lot has been said, and once you had all these newly minted PhDs, a lot more was said. But when text analysis came along, you were able to plug in everything that he'd ever written and be surprised by some of the patterns. One of our students plugged in Romeo and Juliet and took away all the stage directions, the asides, and the specific indication of who's speaking, Romeo colon, Juliet colon. All of that was stripped away, leaving only the spoken words. And she found that Romeo spoke about Romeo more often than he did about Juliet (laughter). This should shock no one who has ever been, or been around, a 16-year-old boy. But that sort of narcissism is such a delight to find in Shakespeare, who 400 years ago has a teen boy who actually talks about himself a lot. That alone was a revelation, and it's not something I've ever heard any scholar of Shakespeare observe, because you don't really notice it while you're reading it. But text analysis made it very apparent that Romeo likes to talk about Romeo. Juliet actually likes to talk about Juliet too, but not to the degree that Romeo likes the sound of his own name.
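
The Romeo and Juliet project described above follows a common text-analysis recipe: strip out the stage directions and speaker labels, then count how often each character's name appears in the remaining dialogue. Here is a rough sketch of that kind of counting; the regular expressions and the excerpt are simplified illustrations, not the student's actual method.

```python
import re
from collections import Counter


def strip_markup(script_text):
    """Remove bracketed stage directions and leading speaker labels
    (e.g. 'ROMEO:'), keeping only the spoken words. Simplified sketch."""
    no_directions = re.sub(r"\[[^\]]*\]", " ", script_text)
    cleaned = []
    for line in no_directions.splitlines():
        # Drop a leading 'NAME:' speaker label if one is present.
        cleaned.append(re.sub(r"^\s*[A-Z][A-Za-z]*\s*:", "", line))
    return "\n".join(cleaned)


def name_mentions(dialogue, names):
    """Count how often each name is spoken in the dialogue."""
    words = Counter(re.findall(r"[a-z']+", dialogue.lower()))
    return {name: words[name.lower()] for name in names}


# A short excerpt loosely based on the play, just to show the mechanics.
sample = """ROMEO: [sighing] Did my heart love till now? Romeo is for ever changed.
JULIET: Romeo, Romeo, wherefore art thou Romeo?
ROMEO: Call me but love; Romeo shall be new baptized."""

dialogue = strip_markup(sample)
print(name_mentions(dialogue, ["Romeo", "Juliet"]))  # {'Romeo': 5, 'Juliet': 0}
```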

Brittni Shoup-Owens:

So, you were talking a little bit about text analysis, and earlier we mentioned that it takes many different shapes, like scripts and even movies and all that. I remember going into your media studies class not really knowing what it was about, and it turned out to be an analysis of movies, basically, but compared to text as well. And I remember watching “Amelie,” and you said something, I can't remember exactly what, it's been a long time, something specifically about how it's an outside perspective, but they're going in through like a window, and it's supposed to signify... I can't remember exactly what it was.

Justin Blessinger:

No, there's a boxing metaphor that shows up. It's such a neat piece to look at, because throughout the film, characters are being boxed. They're being boxed in with visuals, so that when we meet the parents at the very beginning of the film, there are columns alongside and a sort of arch above each of them. There's this shot that gets repeated many times where the camera is sort of deep inside a hole and someone's looking in: through a chink in the floor, or where a tile has come out at the bottom of a bathroom wall and someone's looking in there, or underneath the refrigerator when someone has had to jack up the refrigerator and is looking in. Each time, it places their face visually in a box. And it's so extensive that the art designers who put the DVD box collection together created a box that slides into another decorative box, and there are holes cut in the exterior to show just Amelie's face peeking out from her bedroom, and then, on the opposite side, her face peeking out while she's in the park. The box itself is playing that same game, extending that same metaphor.

I do really enjoy media studies because it's a stealthy way of getting students to read some pretty deep texts, and then it gives them the skills to take those texts further. We start with Plato's Apology, and we try to apply that immediately to some of the visual texts we're looking at. We read Ursula K. Le Guin's "The Ones Who Walk Away from Omelas," in which she says that it's the treason of the artist to refuse to admit the banality of evil, the terrible boredom of pain. She's criticizing the habit that we have of celebrating something as a little more intellectual when it's really dark, when it's the seedy underbelly of things: oh, this must be intellectually interesting. And she says we've betrayed our ability to tell a happy story. Then we take a look at the film Amelie, which is a happy story. I mean, she faces problems; in many ways, Amelie is her own problem, right? If you know the film, Amelie needs to self-actualize, and she keeps hobbling herself and denying herself full access to the joy of life. But it's a joyous movie, and it's about getting full access to joy in your life. And so, I challenge my students with Le Guin's short story, in which she's saying we have a nasty habit of celebrating anything that's dark as somehow more intellectual than things that are light. If it's light and happy, well, that might be for kids; and if it's dark and brooding, well then it's fodder for our serious intellectual questions, and so on. And I'm not saying nothing dark is interesting. Of course, there are some great pieces out there that are ultimately pretty dark. I used to use The Godfather in that class a lot, and it's a great example; it has a fair amount of darkness in it, and it's extremely intellectually interesting. But there are some modern texts, films and otherwise, that really do celebrate joy. I think Le Guin is right: we have a habit of discounting happiness as somehow trivial, or less worthy of our intellectual attention. But it is a very good class, in part because students are usually willing to go along with the more difficult reading, because each time they're rewarded out the other end with an application to something that might be a computer game or a film. In that class, we use several films, so a lot of times it's a film or a piece of a TV series. We now do Sherlock, the BBC Sherlock, in that one, and Firefly, and that sort of thing, too, because there's been a lot of critical material produced in the last 20 years on those kinds of things. It's a great, fun class, and a great example of the kind of thing that really feeds the English for New Media major at DSU.

Brittni Shoup-Owens:

I think one thing I took away specifically from that class is I can now not watch a movie without thinking about it and analyzing it. Elliot, my husband, and I will be watching something on Netflix, and I'll be like, oh, did you realize they're doing this because of this? And then at the end of the episode, I'm right. So it's a great class, and I really, really enjoyed it.

Justin Blessinger:

And you can't turn it off, and that's not always a good thing, right — right when you actually want to. But once you get good at pattern recognition, you're always analyzing. It's sort of like, I do a lot of proofreading for the composition classes that I teach, and it's a curse and a blessing, because of course every sign you run across, every menu that you pick up, is always teasing your ability to use apostrophes or something like that. Right? Yeah.

Jen Burris:

And why is it important to develop these pattern recognizing abilities?

Justin Blessinger:

Well, pattern recognition works in so many different ways. You know, I work on my own cars mostly, and that comes from my time in Montana and the remoteness of where we lived. But there was also just an element of personal worth — you weren't much of a man if you couldn't fix your own car. You might not be the best one for it; maybe at some point you say, you know what, I've got enough money to pay somebody to do this job, it's a nasty job, I don't want to do it. But you needed to know how; it was just sort of an expectation. And just by way of example, recently our primary car started making a terrible sound — it was clear something very serious was wrong with it. The process by which you diagnose that kind of thing is pattern recognition. You start to say, okay, it has this interval, it has this sort of percussive sound to it, and it only emanates from this one area of the engine. And you break out what we lovingly call the redneck stethoscope, which is really just a piece of hose that you put to your ear, and you can isolate, down to a few inches in the engine, exactly where it's coming from. And son of a gun, it was coming from under the timing cover, which means it was a timing chain issue or one of the guides for it. That meant open-heart surgery on the engine, and I was like, why can't it ever be something easy? But that's pattern recognition. Trying to diagnose what's wrong with a computer program is the same: you remove whole chunks of the code at a time and run the remaining components to make sure they function properly. You're narrowing the field, because you're able to recognize a pattern. There is really no such thing as a major that doesn't in some way make significant use of pattern recognition. It just so happens that with English, every word we speak, every word we write, is part of the pattern that we study. There is nothing that's really off limits: we can study the rhetoric of computer programmers, we can study the use of language by historians — everything is grist for the mill for the English major. For that reason, there's no possible way to reach the end of what we study; there's always new material, and there's always something fun and exhilarating to be studied. So I think that's one of the things that, when students recognize it, can feel overwhelming, of course, and maybe even a little sad — there's no way to master it all, in any field, but especially in our field, where there are thousands of years of the printed word. And that's just the beginning, because in the last 20 years alone we've exponentially increased the amount of printed text to be read; the internet's full of it. So yeah, there's no end to the rabbit trails that you get on; there's always new material.
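What he's describing for software — pulling out chunks of code and re-running until the fault is isolated — is essentially a binary search over suspects. A minimal sketch of that idea, with a hypothetical pipeline and a deliberately broken step standing in for a real program:

```python
# A minimal sketch of "narrowing the field" when a program breaks:
# binary-search for the first pipeline step that fails. The pipeline and
# the broken step here are hypothetical stand-ins for chunks of real code.

def runs_cleanly(steps, data):
    """Run a prefix of the pipeline; report whether anything blows up."""
    try:
        for step in steps:
            data = step(data)
        return True
    except Exception:
        return False

def first_failing_step(steps, data):
    """Return the index of the first failing step, or None if all pass."""
    if runs_cleanly(steps, data):
        return None
    lo, hi = 0, len(steps)          # prefix of length lo passes, length hi fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if runs_cleanly(steps[:mid], data):
            lo = mid                # fault is later in the pipeline
        else:
            hi = mid                # fault is in the first half
    return hi - 1

# Hypothetical pipeline: the third step divides by zero.
steps = [lambda x: x + 1, lambda x: x * 2, lambda x: x / 0, lambda x: x - 3]
print("first failing step:", first_failing_step(steps, 10))   # -> 2
```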

So, in my study of General Beadle, I just found out that the University of Michigan has all the minutes from the Literary Society of which he was a member, and ultimately president. He was probably a member of the Underground Railroad himself — he has this sort of oblique reference to a student, he says, who was a rider for the Underground Railroad, recruited by abolitionists while he was an undergraduate and ultimately brought on to deliver messages up and down the Underground Railroad. And he knows the location of everything in his story. He knows which houses they stopped at, he knows which abolitionists were involved and what they were charged with when they got caught by authorities. I mean, he knows so much that he's clearly sort of winking at his own audience, saying, okay, it was me, but I can't say that, because it was illegal at the time — to help a slave escape was illegal, sadly enough, even in the North. Slavery wasn't allowed in the North, it had been banned there, but you couldn't help slaves escape. It was the most peculiar thing. But I've really started to think this Literary Society was a cover, at least during that time, for abolitionist activities. Because he mentions having given a speech for the Literary Society blaming the Mexican-American War for an increase in the number of slaves — whatever the causes of the war may have been, the outcome was that there were more slaves after it — and that speech is what caused the abolitionists to start recruiting him. And so, the fact that those archives exist — I'm now very eager to go out to Michigan, get access to them, and just spend some time nosing around in the minutes of a student secretary for a student organization from 1857 to 1861. I have no idea what's there; maybe it's going to be terribly disappointing, and hence very brief. But at the time, a lot of secretaries took copious notes. It's possible that Beadle's speech is part of that record — that somebody actually asked him for his copy of the speech and put it into the record. The old clubs used to do meticulous record keeping. So, if we get lucky, maybe we find out a great deal more about a secret society using an English club as its cover for helping slaves get across to Canada — because they're so close, there at the University of Michigan in Ann Arbor. If you made it that far as a slave, you could just about taste your freedom at that point, because from there it's just to Detroit, and then you're across the river and into Canada. So it must have been exciting for the people who were helping, is one way of saying it, but it was greatly dangerous for them too. But there's a great example: Beadle entered as an English major when he went off to college; that's what he wanted to study first. And there he is helping with the Underground Railroad as part of what he's doing. Why? Because the study of literature reminds us of our own humanity. Of course you would be provoked to think more seriously about what freedom means. Does that mean that every English major made the right decision? No, certainly not. But it doesn't surprise me that he did. It doesn't surprise me that some time with great works would point someone that way.
He was also raised in a Quaker church, and the Quakers were certainly famous for their abolitionist activities, so that had to have had a major effect on him too. But it doesn't surprise me that an English major at that time would have been thinking seriously about the big questions of the day — what it means to be free.

Brittni Shoup-Owens:

You know, you mentioned how literature can resonate with you. And I remember many times, not just in college but in high school, reading a book — I was an avid reader. I'm too busy now to read; I try to make time, but anyway, that's beside the point. You mentioned that just random pieces can resonate with you, so I'm kind of curious what pieces over the years have resonated with you personally.

Justin Blessinger:

Well, first of all, you need to give yourself permission to take a break when you've got little ones at home. Oh, my goodness, I used to play a lot more computer games, and then of course along came children, and, you know, they tend to get into trouble if they're not being supervised, so you can't just put on headphones and tune out. It's not okay.

 

Brittni Shoup-Owens:

 Mine just started crawling. So..

Justin Blessinger:

You have some things that are keeping you pretty busy, but I know you'll return to it when time permits again, especially because I'm sure you're already reading to your little one, and pretty soon you'll be reading more sophisticated texts together, right? Oh, goodness, there have been so many that have spoken to me. But I think one of the first that I felt was really transformative was Sherwood Anderson's Winesburg, Ohio, which isn't read nearly as much today but in its time was hugely important. Sherwood Anderson became a sort of — kingmaker sounds dismissive in some way, but he helped many other authors find success. In particular, he was a great help to Ernest Hemingway; Hemingway was much younger than Sherwood Anderson. Anderson was writing these stories that take place in Winesburg, Ohio, a small town in Ohio, and the main character is a sort of newspaperman who's thinking about going off to college and to the big city, and so on. He's really encountering something that was significant for America at the turn of the last century, as so many people were moving away from rural places and toward urban centers. I was in FFA in high school — a lot of people don't know that about me — the Future Farmers of America, and I owe them so much. It was great, because I probably would not have traveled nearly as much if I hadn't been; I got to see Kansas City because I was in FFA, and so on. But in FFA, I remember some startling statistics about how demographically the United States had changed: something close to 90% of Americans were involved in agriculture in some way at the turn of the last century, around 1900 — involved broadly, right, selling grain counted, running a mill counted — and by the turn to the next century, by the year 2000, it was fewer than 8%, maybe fewer than 7%, and it's far fewer even today. So Anderson's story is about this young man who's got some great intellectual gifts, who's seeing his own little town through the lens of somebody who reads and thinks, and he's telling the stories of the people of Winesburg, Ohio. Anderson uses the word grotesque for all of them, which doesn't mean, in his usage, what we mean by it today. It means we're all transformed and sometimes harmed by the world around us — that we're all, maybe scarred is a good word for it. And so he uses that word again and again to describe how people are made grotesque by the pain in their life. And each story is so tender. There's a story of a teacher, called "Hands." The schoolteacher just loves his job. He loves the little ones, he loves encouraging them, he's a wonderful teacher, and his hands are sort of always flying about when he's talking. He loves to ruffle the hair of a boy while talking to him, he loves to pat a child on the back while encouraging them. So the hands are always in use, almost like birds flying. And then there's an accusation made against him because of his fondness for children and for teaching children, and a dad beats him up. From then on, the hands are held tight against his chest. He's wounded forever, no doubt because of the accusation alone — and how successful as a teacher can you ever be once an accusation like that has been made, right?
It's such a tender telling of the story, because you feel the ache: this person's passion for teaching, how good he really was, how expressive he was, and all of that is quashed by this one bad day where there's an accusation made. There's no proof to it, but he's harmed irreparably by it. And there's a moment in another story, called "Adventure," in which a woman runs out in the rain stark naked in the middle of this tiny little town, because it's just so beautiful — the rain and the dark and so on — and she can't stop herself; she just does it. And of course somebody sees her, yells out the window, and makes fun of her, and she's crushed. She's like, what am I doing? And she shrinks again, goes back into the house, locks all the doors, and is deeply ashamed of this expression. My goodness, King David danced naked, according to one of the stories in Samuel. Being inspired to dance in the rain is a metaphor we use even today for how moved we are by beauty in our lives. And yet to really do it — boy, that's not something we're all actually willing to do. She does, and she pays dearly for it.

So, I remember reading this catalogue of grotesques, as Sherwood Anderson calls it — all of these people who are scarred, usually by love, but love that's gone awry in some way. I think it was really transformative for me: here was somebody who seemed to understand thoughts that you've had maybe only in your secret self, and here's somebody telling your own truth back to you, which I think is what the greatest poetry and the greatest literature always does. It's not a truth that you can't fathom, that you'd never considered. It's something that you somehow, in your secret self, always knew to be true, and there it is, expressed to you in words that you didn't have for it.

Brittni Shoup-Owens:

I think that's one of the most beautiful things about writing: you can translate something to other people, and you don't know how it's going to affect them or how they'll resonate with it. And I think that's why I'm so drawn to writing and reading, because you can escape in a book. It might not be about what you're going through, but something in that first chapter, or wherever it might be, might hit you in a way where you're like, oh, maybe this is how it is for me right now, and I've just got to accept that — something like that.

Jen Burris

So, talking about the different use of the word grotesque — how has our language changed? Even in the technology sector, we've started using words in a different way, you know, you're going on your weekend Netflix binge, or items like that, that are just kind of…

Justin Blessinger:

Part of every English major I've ever known has been a sort of deep love for words themselves. And in part — I don't want to get too poetic here, but I love this time of year because of the lilacs. We're just getting the leaves on them right now, so we're not even that close yet, but once you see the leaves coming out, you know it's imminent: those gorgeous, fragrant blossoms that only last a week, maybe a week and a half. You go outside, and you haven't even seen the blossoms yet, but you know somebody in the neighborhood's lilacs are starting to open up — they're that powerful and that fragrant. And I think it's a little bit like that: we recognize that words change. Slang is, of course, a well-documented mechanism by which it happens, but technology has a lot to do with it as well. Words change a little more slowly than the blooms each spring, but because they do change, we hold them in a different, precious place — those of us who study English, right? We love words just for being words. The fact that the Oxford English Dictionary exists is such a testament to that. It's something like 20 volumes if you actually look at the printed form of it — and speaking of digitization, there's one everybody is grateful for the digitization of, because an actual copy of the OED took up a significant amount of shelf space. But somebody wanted to write a dictionary that traced the origins of every word English uses and tracked how each changed over time. So you could look up the word weird and find out that weird was actually a noun 1,200 years ago, a word that meant a force of the supernatural that would shape your life — like fate, kind of. And this is why Shakespeare has the Weird Sisters, right? They're the three fates; he's evoking the ancient Greek mythology of the fates. It's not because they're strange sisters, though they are — it's because of that word, the Weird Sisters. And that word is easily tracked for how it has changed. It's easy to understand why you might at first say, well, the weirds did it, the weirds changed the world. And you can say, look at that tree, it's all deformed — maybe the weirds did it. From there it's a short jump to saying, that's a weird tree, right? And so it becomes an adjective, still meaning, sometime later, something so odd that surely it's supernatural. And then eventually — the progress of almost all words is to move from a more sacred usage, a more profound usage, into the flippant and the casual. That's sort of the only way they move. We always need new words to describe our experiences of the profound, or of the supernatural perhaps, because those words always end up diminished. Think of the word awesome. Fifty years ago, it was still reserved for something that inspired terror in you because it was so great, so amazing — terrifyingly amazing. And now we use it to describe gelato, you know (laughter): that gelato is awesome. We do it so casually, but of course then the word loses a lot of its force when you can apply it to something as trivial as your visit to the shopping mall. And so I think that's one of the things that drives home how precious each word really is. It's a living thing, and it is active for a short time.
And of course there are words that simply disappear out of use, sometimes for really strange reasons. The cultural habit of using "meme" is a great example. We all now use it in a very specific way, to describe content that amuses or speaks to the historical moment in some way, but it was really a word meant to describe patterns — being able to recognize patterns in the culture. So you might say, well, this has become a pattern in our culture; it's a meme. And it's related to the word mimetic, something that imitates something else. Indeed, in our English classes we talk about how art usually leans heavily toward either a didactic purpose, a purpose to teach us and make us better, or a mimetic purpose, one that simply tries to hold up a mirror and say, this is how you are — it's not necessarily trying to teach, it's just saying, I want you to see it. So that's another example of how words are almost delicate, almost fragile, because we turn around and a word we used just a few years ago has changed, maybe even radically. I'm always putting my head in my hands when my kids refer to me as a boomer (laughter). Like, for goodness' sake, that's the baby boom, right after World War Two — do you know when I was born? But it doesn't matter, because now it simply means anybody older than a millennial, right? It's a dismissive way to describe someone, so Gen X gets lumped in with the boomers somehow. But it's a good example of the fragility of a word: even ten years ago, if you read the word boomer, it meant specifically that generation, and already it's being used to describe anybody who's kind of old and not hip anymore.

Jen Burris:

And going beyond that, would you say that our human desire for storytelling is something that will keep English evolving along with technology — that it's something that won't ever go out of fashion, so to speak?

Justin Blessinger:

Right? Yeah, you've said it very well. There's no possible way it goes away. I mean, it'll change how we do it, of course. But think about the very first forms of storytelling, maybe in our earliest human form, when we didn't have much for language — you still wanted to hear from Thag how he killed that mammoth. And so he's going to act it out, he's going to jump around the fire and pick up his spear again and put on a little drama for us.

Brittni Shoup-Owens:

Or etch it on the cave wall.

Justin Blessinger:

Which ultimately leads to writing — it's symbolic communication. But built into that storytelling is a whole host of things you wanted transferred. You wanted the little ones to thrill: wow, Uncle Thag killed a mammoth, and he did it with that tool. So you're transferring not only the skills necessary to do it again, but the cultural values that celebrate somebody who can do that — somebody who's been able to raid a neighboring village and come back with a whole lot of grain, so we don't die. My tribe is going to make it, by goodness, right? So there's a lot more going on inside what we call storytelling. There's a beating heart there of an entire culture and a value system, and beyond that. So how could it possibly ever go away, unless human culture itself goes away?

Brittni Shoup-Owens:

That's a great answer.

Jen Burris:

Okay, well, we'll wrap things up here. Brittni, do you have any last questions or comments?

Brittni Shoup-Owens:

I really enjoyed this. This is my first time ever on a podcast. So, it was very, very fun. I'm looking forward to doing more.

Justin Blessinger:

It was a delight speaking with both of you, thank you so much.

Brittni Shoup-Owens:

 Thank you for visiting, being a part of this.

Justin Blessinger:

 Anytime happy to do it.

Jen Burris:

Yes. Thank you so much, Justin. And thank you, Brittni, for coming in to cohost. Thanks to Spencer Wrap, our sound designer and engineer, and thank you to our executive producer and editor, Jake Hoffer. Thank you for listening to Cyberology. Be sure to subscribe.

Jen Burris:
Welcome back to Cyberology, Dakota State University's podcast for sharing and discussing all things cyber and technology. I'm Jen Burris, and I'll be your host. We welcome back again Dr. Gabe Mydland as cohost for our final episode on artificial intelligence.

Gabe Mydland:
Thank you for having me.

Jen Burris:
And our special guest today, an expert on the societal and economic impacts of artificial intelligence, is Dr. Jack Walters, Professor of Management and coordinator for the Master of Business Administration program in the College of Business.

Jack Walters:
Good afternoon.

Jen Burris:
Hi, Jack. Do you want to tell us a little bit about yourself?

Jack Walters:
Well, I've been at Dakota State for almost 16 years, and about four years ago I started to get interested in not just the rapid growth of artificial intelligence, but the economic impact of it, both positive and negative. I've been collecting resources — articles, videos — about it since then, and it's just an endlessly fascinating topic.

Jen Burris:
You mentioned the positives and negatives. Why don't we start with the benefits?

Jack Walters:
Okay, the benefits — it's probably not right to use the word unlimited, but I want to position it as near-unlimited. Just to give you an example, there are now, already in use, artificial intelligence systems that can outperform physicians in making diagnoses of patients.

Jen Burris:
Wow, that's fascinating.

Jack Walters:
So just think about what that would mean if we could make a quantum leap in the quality of diagnoses — what would that mean for people's long-term health? That's just one example. The potential is just amazing, to revolutionize maybe everything. So those are the positives.

Jen Burris:
Let’s look at some of the potential negative impacts.

Jack Walters:
When we've had huge leaps in technology in the past, they generally added to the economy and didn't take too much away. That helped boom the economy of the United States and of the world for over a century. When you look at the matching of artificial intelligence to robotics, it's almost inevitable that it won't just add some new jobs and some new industries — it's going to take employment away from a bunch of people. So my perspective is, I'm not against any of this technology; this is not one of those anti-technology arguments. But we need to be prepared for it. It's going to be dramatic.

Jen Burris:
Okay, where do you see the timeline on that? I know a lot of people predict around 2030…

Jack Walters:
I would say definitely by 2030. But maybe that decade between 2030 and 2040 is where it happens — we think we understand what the economy is like in 2030, and then we may have a completely different understanding of it by 2040.

Jen Burris:
Okay, how do you see that impacting jobs as it moves forward?

Jack Walters:
People who work in the field of artificial intelligence would have a more complex view than I do, but my rule of thumb is: if what you do is repetitive, it can be done by artificial intelligence. Everybody says, okay, that's factory jobs and certain other routinized jobs — true, but it has a broader impact. For example, take a knee surgeon who's doing a lot of arthroscopic knee surgeries; in a holistic sense that work becomes very repetitive. It's routine in many surgical establishments now to video record those surgeries. Well, if you take those videos and run them through existing artificial intelligence software and processes, you could begin to develop a machine-enabled arthroscopic surgeon. So it's not just the person who's getting an hourly wage, it's all the people whose work is repetitive. And that's an awful lot of people.

Jen Burris:
Yes, it is. Gabe, do you have anything now that you want to ask?

Gabe Mydland:
Yeah, you know, I'm kind of curious, as a student of history. We've gone through these changes and transitions before. I'm wondering, in your reading, have you seen where we've learned from any of these shifts, and how to better prepare ourselves for making the transition — from, you're used to doing this as your job, to, now this is going to be done more efficiently and you're going to have to move into something else? I think of the coal industry, for example: we talk about bringing those jobs back when we know we can provide the same kind of power from the sun and the wind, and we've got a whole Rust Belt of folks whose families for generations were working in the mines. Is anybody planning for this kind of transition?

Jack Walters:
In my opinion, not enough. There's a concern in the fields that surround business and organizations that, if you look across society, certain other major segments have had giant leaps forward — medicine, technology, even government has improved some; if you look at it historically, the world is getting to be a better place, democracy-wise and so forth. Organizations do not seem to be progressing at that rate of speed. We see the same mistakes — read the paper: people do the same unethical things, the same illegal things, they make the same judgment mistakes. It's worrisome, because we could charge forward using the same logic. To give an example, starting 30 to 40 years ago, businesses decided they would like to offshore their labor to lower-wage environments, and they did. That has had profound effects on the economies of the places where the jobs went and on the economies of the places they left. The same thing could happen with artificial intelligence and robotics: where there is a cost advantage somewhere, the business people say, hey, let's just go to that, it's faster, more reliable, lower cost, whatever their argument is. And then that could just sweep like a plague across the employment patterns of the country. That's my big concern — we need to prepare, exactly as Gabe was saying. We need to think: what would it mean if people who drive long-haul trucks don't have jobs because there's a machine doing it?

Jen Burris:
Does that kind of play into the basic income talks about potentially paying people a flat level every month?

Jack Walters:
It's a very interesting question. One of the things I try to do on this topic — I'm not an economist, so I try not to get too far down the path of saying, well, maybe the answer is a monthly basic income, or other things like Spain experimenting with a four-day workweek as a sort of standard model. All those ideas are out there, but I don't feel like it's my spot to say, here's what we should do. I'm just here to say there are going to be employment effects, and somebody's going to have to figure out what to do about them.

Jen Burris:
Do you think that raises concerns for pretty much every industry?

Jack Walters:
Yes. I've collected all these things, and we tend to think, well, there must be some industry or some profession that's not affected. But here's an example: there is already significant development of artificial intelligence in teaching. All the folks like me and Gabe who are teachers would love to hear, no, we'll never be replaced — what do you mean, this is a completely unique and creative job — but there is already an artificial intelligence college professor. And you also have to think it's not just the end result we're looking at; it's the walk between here and there. One university did a fascinating experiment with an online course: they replaced a TA with one of these expert systems where you just type your questions, it's a natural language system, and it figures out what to do. Students could not tell it from the human — and this is the early days of this. So how sophisticated might that be three years from now?
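The transcript doesn't say which system that university used, so the sketch below is only a generic illustration of the idea: match an incoming question against a bank of previously answered ones and hand back the closest answer. The FAQ entries, threshold, and function names are all invented.

```python
# A toy retrieval-style teaching assistant: compare a student's question to
# a bank of already-answered questions and return the closest answer.
# The FAQ content here is made up for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [
    ("When is homework 3 due?",       "Homework 3 is due Friday at 5 pm."),
    ("Is the final exam cumulative?", "Yes, the final covers the whole course."),
    ("Where do I submit my project?", "Upload projects through the course portal."),
]

questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer().fit(questions)
question_vecs = vectorizer.transform(questions)

def answer(student_question, min_score=0.2):
    """Return the stored answer whose question is most similar, or defer."""
    vec = vectorizer.transform([student_question])
    scores = cosine_similarity(vec, question_vecs)[0]
    best = scores.argmax()
    if scores[best] < min_score:
        return "I'm not sure -- forwarding this to a human TA."
    return faq[best][1]

print(answer("when is the homework due"))   # -> "Homework 3 is due Friday at 5 pm."
```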

Jen Burris:
And do you see that impacting different areas in society, not just employment?

Jack Walters:
I do. To go down a completely different path, one of the things that is of great concern to ethicists and others is the development of robots that could take the place of soldiers or other military personnel.

Jen Burris:
That sounds terrifying.

Jack Walters:
The part that's terrifying about it is how easy it makes it to go to war. So, you know, if you're a president of the United States or the leader of any other country, and you're considering a military conflict, you're thinking I am going to be responsible for the deaths of XYZ number of people, but if it's just machines, and they can be bought or replaced or repaired, it's a different war decision. But of course, there's always collateral damage in war, right. So, it's not just the combatants that are killed and injured in a war. And that's the part that is very frightening to some people. That's really a concern.

Jen Burris:
Do we kind of already see that with drone strikes?

Jack Walters:
To some extent we do. And one of the things that, for many obvious reasons, the military doesn't talk about — but drones can probably now largely be flown by AI. Everything that's done in that kind of work is recorded. There's a bunch of drone pilots in Rapid City, for example, and they record all those flights. Then you put that into a neural network or something like it and back-solve into what worked, what didn't, what was done, and what's associated with what worked. That creates an artificial intelligence, a machine learning algorithm, that can do the same kinds of things that human pilots do.
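Nothing public confirms how any particular drone autonomy is actually built, so take the following only as a generic sketch of the back-solving he describes: supervised learning from recorded state-and-control pairs, often called behavior cloning. The "flight log" here is synthetic, and the state and control variables are invented.

```python
# A generic sketch of "back-solving" from logs: fit a policy that reproduces
# the controls operators applied in recorded situations (behavior cloning).
# All data below is synthetic; nothing about real drone software is implied.
import numpy as np

rng = np.random.default_rng(0)

# Pretend log: each row is the state an operator saw
# [altitude_error, heading_error, airspeed_error], and the target is the
# control they applied [elevator, rudder, throttle].
states = rng.normal(size=(5000, 3))
true_policy = np.array([[-0.8,  0.0, 0.0],
                        [ 0.0, -0.5, 0.0],
                        [ 0.0,  0.0, 0.6]])
controls = states @ true_policy + 0.05 * rng.normal(size=(5000, 3))

# Fit a linear policy by least squares: what did operators do in each state?
learned_policy, *_ = np.linalg.lstsq(states, controls, rcond=None)

# The learned policy can now propose controls for a new, unseen state.
new_state = np.array([[1.2, -0.3, 0.4]])
print("suggested controls:", new_state @ learned_policy)
```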

Jen Burris:
Wow, that only will probably further develop in the next 10 years.

Jack Walters:
Yeah, I think it's going at an exponential rate, not just a linear one — not "we did this in June last year, so this year in June we'll do X." It's much faster than that. It's just about unlimited, as we've already said, in the positive things it can do — just wonderful things. And it also, to some extent, has very big downsides that we haven't thought through.

Jen Burris:
And do you think that the quickness of developments in AI can lead to some of those problems because it's just moving at a very accelerated pace, which might leave openings for these issues to crop up and get missed?

Jack Walters:
I do. One of the things that is of concern — when I mentioned that decade, 2030 to 2040 — is that a lot of very big brains on this earth have expressed concern about the development of the ability for machines to design machines. Up until now, people design machines, and then people design the software that machines run. But we are approaching a period of time in which some of that work could be done by machines. This is where the ethics questions really boil over, because there might be a higher-efficiency operational model or design that a machine would choose, but it's not ethical — it's not the right thing to do for people.

Jen Burris:
Because they're not the sentient beings that we are.

Jack Walters:
Right. For example, it might decide, in this recent virus situation, that we should not treat these people — they're too old, or they have too many preexisting conditions. It's not doing that to be cruel; it's doing it out of some optimization function. But that's often not how ethical decisions are made. So that's a big concern.

Gabe Mydland:
So the plotline of just about any really good science fiction novel is coming true, where machines take over and humans become subservient. If I may, I'm curious about your perspective on this: how do we as a society approach it? How do we make sure that the applications being designed are in the general interest, not a specific interest, and that we're still in command, if you will, of how this is used and what it's used for? Is that an individual responsibility? Is it a government responsibility? Is it both? I was at a speech Condoleezza Rice gave at a forum in Sioux Falls, and one of the things she talked about was this whole idea that we're becoming more and more efficient with machines, and it's becoming a better way of doing things. But she brought up the point that employers who displace employees with artificial intelligence-driven machines have a responsibility to their workforce to help them transition into something new, whether it be training or some guidance on what kinds of transitions they can make. Or is this something that we should do as a society as a whole?
I mean, I don't know where you come down on that.

Jack Walters:
Yeah, it's a fascinating set of questions. I really think there are going to end up being lots of people doctorally trained in all those topics you mentioned; there's really going to have to be deep knowledge in the creation of educational programs about that. So when you look at Dr. Rice, she was probably, I'm guessing, making the argument out of an ethical framework — that there is a sort of moral obligation that if you displace someone with a machine, you should help them. My personal view, based on my knowledge of business history, is that won't be enough. If you want that, it'll have to be done by regulation, and I think that regulation will be highly controversial. So those are the kinds of issues that surround it. In the larger context, I think higher education has a huge role to play, and maybe K-12 education as well. As we've been developing a proposal for a program about artificial intelligence in organizations, one of the things that has come up again and again, including in a meeting I was in today, is courses in ethics. How do you train people? Because it's not going to be one of those futures where robots run everything and we're just has-beens or bygones. We're still going to be running it. The question is, how will we run it, and what rules will we implement? For example, there will come a time within 10 years, as Jen was talking about, when human workers and AI workers are working side by side. Well, that's a whole new realm of human resources and human resource law. What happens if there's a dispute between the human and the artificial intelligence? What happens when one or the other makes a mistake? How are things held accountable? Just on and on and on. Those kinds of things are going to be happening soon, and so there has to be some kind of large-scale, three-dimensional understanding of where we're headed and how much things are going to change.

Jen Burris:
Do you think it's possible — with all the industries this will affect, basically everyone at some point — that we can find new jobs or new areas to get everyone reemployed somehow?

Jack Walters:
I'm very sorry to say I don't believe we'll be able to replace all the jobs that are lost. There will, without question, be whole new categories of jobs, whole new ways of doing things. But when I look at how many jobs will be lost, I don't see how it's possible to recover them all. And I'm very reluctant when I hear people say, "Well, people need to get different training, they need to be re-skilled" — the scale of that task is almost incomprehensible. We're really talking about gigantic numbers of changes.

Jen Burris:
Do you have any personal input or feelings that you would say to these people, as they're making these advancements and considering society as a whole?

Jack Walters:
The interesting thing is, my particular focus is not on technology development — in other words, fine with me if that continues apace. That's how it's been throughout history: new technology has always supplanted old technology. The audience I want to speak to are the people who make policy at the government level and the people who lead organizations; I really think that's where we have to talk about this. And one of the things that could be considered: this could rewrite international economic competition and cooperation. Right now, to pick an easy example, a bunch of iPhones are made in the East. Why is that? Well, the wages are lower there. If you shifted to an economy where they're mostly constructed by machines, then the machines can be here. People will say, that's not adding a job, the machine's doing the job — but the transportation, the logistics, the supply chain, all that stuff that goes with having a business entity is here. And so that could change how we place labor, broadly defined, in the world.

Jen Burris:
It’s a lot of stuff to take in.

Jack Walters:
It is. It's really, really big.

Jen Burris:
And do you have any positive stories that you've seen — potential impacts of AI in different areas?

Jack Walters:
I have. Here's one, and this one's probably tilted a little more over to the robotic side, but it touches on a very hot topic right now in the news. There is an existing, working prototype of a robotic traffic cop. When someone is seen violating — speeding, running a light, whatever — the current model is driven by a human, but the car is outfitted with a robotic traffic cop. So the driver is pulled over, and the car that carries the robotic cop puts a stop stick under the back of their car — that's a thing like a board with nails in it that can extend and go under the wheels, so if the car tries to speed away, it can't go fast because the tires will be punctured and deflated. Then this little robotic police officer moves up from the back on some kind of a bar, stops at the front window, and it has a camera, a microphone, a speaker, and a little printer in it. The robotic cop tells the person what they have done wrong and has a conversation with them, and if it issues a ticket, the ticket comes out of the little printer and they take it. Look at what's in the news: we have two horrifying kinds of cases going on, one where police improperly shoot civilians and the other where civilians shoot police, which is pretty much always improper. This is the kind of thing we're talking about — it doesn't matter if somebody shoots the robot, they can get a new one, and the robot is not armed, so it's not going to shoot anyone. So you can sort of solve a kind of hot-topic problem right now with that kind of device.

Jen Burris:
Eliminate some of those inherent risks…

Jack Walters:
There are just scores of those kinds of examples of positive improvement — better service, better quality, less danger — and then there is, you know, a whole other side of what that will do to employment in an array of occupations.

Gabe Mydland:
So, at the risk of repeating myself — and I don't mean to — in your collection of all this information about everything that deals with AI, did you come across different groups who are prepared to sit down with policymakers and leaders of organizations to talk about ethics? Are there trainings available and business programs being developed to address this new world that we're approaching?

Jack Walters:
Being developed? Yes. Existing? Not so much. We're definitely seeing now, rapidly, across the country and probably across the world as well, ideas for training — for understanding, seeing the limitations of the technology, seeing the benefits of the technology. But right now we're not to the point where there is, for example, a group of people who are experts in this serving as some kind of advisory board or NGO or something like that as regards these issues.

Gabe Mydland:
I wasn't aware of any. You know, I mean, like the President's Council of Economic Advisers, for example — they take a look at what's going on and try to draw attention to certain things. I didn't know if there was anything similar to that, maybe not at that level, but maybe even in the private sector. I haven't heard of that.

Jack Walters:
Yeah. There are troubling cases. For example, Google is one of the leaders in the development of artificial intelligence. Well, for reasons that are not immediately clear — it is their private information, but certainly important parts of it are in the media — they've dismissed a couple of people who were key in their AI ethics effort. That then led, last week, to the resignation from Google of one of the biggest names in all of AI in the world. And so it's troubling. Is this going to be our history, where there's this constant back and forth and contentiousness? Or will we lean the other way: we've got to do this ethically, or we're going to be sorry. That's the concern I have — could we go in the wrong direction?

Gabe Mydland:
Sure. Ideally, we'd probably like a balance where both sides are at the table. And obviously identifying the areas where they agree, and then identifying where they don't agree, but what they can work together on.

Jack Walters:
Yeah, I'm hopeful — and this is probably pretty quixotic — but I wish we could make a finer distinction about the transparency of things. I totally understand that Google's in a competitive business, and one that's likely to become more so, and that they want to be private. But a lot of these things get put under the bushel of competitive and private information when really we ought to understand them. Maybe those people got dismissed for a reason that has nothing to do with what we think. But how would we know? No one will say, and so it's troubling.

Jen Burris:
Do you see cyber ethics then, or AI ethics being a big part of new college programs like the degrees that are coming to DSU?

Jack Walters:
I really do. And there's a group of my colleagues here at Dakota State who are involved in this very deeply, developing programs, and I think they all see the importance of teaching ethics — of embedding ethics in almost everything we're doing. And it's got to happen; it can't go forward in this kind of ethics-agnostic context. That would not be the right way.

Jen Burris:
And do you think that that'll happen kind of across the board with these new degrees in the country and that that might maybe level some things out if all of these new up-and-coming workers in AI have some ethical training?

Jack Walters:
Yes, I think there's a possibility for it. What I would like to see is the ethics of the development and management of AI having the same role that professional accountancy has. A CPA, for example, is honor-bound and legally bound to certain principles of ethics, even if that's not what their client wants. And that's where we need to be with this: yes, there are going to be companies that develop things, and it's in their financial and economic interest to cut corners, so we have to have people trained and licensed and ready to say, nope, we can't do that, that's not the way to go. That's going to be a concern.

Gabe Mydland:
It's a step in the right direction. But I don't think it's the only answer. I think we all have to be alert. And we all have to be involved. And we all have to step up when something's not right.

Jen Burris:
Gabe are you kind of saying that by studying ethical stuff, they could circumvent it?

Gabe Mydland:
I'm just saying, no — I think it's important that we study this stuff, I think it's important that we test people on it and that they have a certification saying they understand it, but I don't think that's where it ends. My view of where we're at, not just with AI but in society in general, is that we don't have enough people involved in the process. For example, fewer than a majority of the people in South Dakota who could vote are registered to vote, and yet in the last election we're all celebrating that we had a 70% turnout. Well, if only around 45% of the people who could vote are registered, then a 70% turnout of that is roughly 30% — still a minority. And that's part of the problem: people don't have a voice, or they're not exercising their voice. Things like this that are going to disrupt families, lifestyles, communities — you'd think that would be enough of an incentive to be involved.

Jack Walters:
If you really want to put yourself in the tumble dryer and turn it on, consider this: the solution to some of those very serious societal issues? AI. (laughter)

Gabe Mydland:
It's a conundrum, isn't it?

Jack Walters:
The thing is, you look at this and think, we could do a thing. It would take a lot of work, but it could be done with artificial intelligence: who has been complained about who's a practicing licensed psychologist? Then collect all kinds of data and back-solve against that. What about that person? Is there a pattern? Is there anything that would explain it, something that could then be used as at least a warning signal, if not a predictor? That's right at the core of what AI is good at. So you could solve some of those kinds of problems with it. But then you're also advancing its place in society when you do that. It's just really…

Gabe Mydland:
It is a conundrum. I mean, it really is.

Jack Walters:
And then, just to throw one more out there — this really worries me. There are plenty of groups in the world, most of them not governmental, though some are; many of them are individual groups or terrorists or whatever. To them, none of what we've said about ethics means anything. It's in their interest to create something that has no ethical subroutines or guards or ability to be stopped. And that's really scary. There are already tools out there that would make those existing prototypes extremely dangerous if they were not controlled.

Jen Burris:
Do you think that that would spread quickly, kind of in a criminal world, so to speak? Would they be sharing their nefarious advances with others?

Jack Walters:
I think they would. Here's an example — this was not a terrorist thing, it was an artistic thing, but it shows you what we're talking about. There's a robotics company called Boston Dynamics; they're one of the world's leaders in the development of robotics. They make a dog, a mule, and a humanoid. An art teacher somehow got one of the dogs — the dog is really popular, and it's about the size of a medium-to-large dog. They strapped a paintball gun on the back of the dog, connected the dog to the internet, and let people log on, steer the dog around, and fire the paintball gun at walls to make art. That was the whole point of it. First of all, it really ticked off the people at Boston Dynamics, who already have military contracts. But it also raises the specter of, what if that was a real gun on the dog, and you just walked it down the street and fired it? There are just too many of those kinds of questions that have not even been addressed at all.

Gabe Mydland:
No, I was just gonna say I'm sleeping better tonight. I know that. (laughter).

Jack Walters:
Nothing to worry about at all.

Jen Burris:
Is there anything that you think the average person should be doing or focusing on as these advances are made?

Jack Walters:
Yes, I think we should be honest and forthcoming with people about the kinds of work that are either easier or more difficult to automate. That helps people choose careers, helps people start on a path, and so forth. The more variable your daily work is, the longer it will be before anybody tries to automate it — there's a whole class of those kinds of jobs. But if what you do each day is repetitive, there's at least a concern that sooner or later it will be in someone's financial interest to try to automate it. And that's where the pinch will come in.

Jen Burris:
Do you think that can also apply to creative areas? I know they've had AI create songs before…

Jack Walters:
Yes, there's fun stuff out there about creative work by AI, but we're nowhere near that level of sophistication. Too many people, when you say artificial intelligence, think of Mr. Data, you know, or these kinds of fictional characters — it may be midcentury before there's anything like that — but it's crazy of us to think, oh well, that's way out there. The stuff that will affect employment and jobs is very close by. So we should be thinking in terms of: what will we do? What other ways can those people have meaningful employment? And what structural changes might be needed in light of the large-scale changes that AI and robotics will bring?

Jen Burris:
Okay, anything else you can think of that we might have missed on this topic?

Jack Walters:
No, the only thing is, I'd look back and restate something I sort of said in passing. If you listen to this, especially if you just listened with part of your attention, you might think, oh, look at that guy, he's just against all this stuff. On the contrary — not only am I not against it, I don't think you can actually be against it. I know people who say, I don't like it that Walmart has all those self-checkouts. Well, too bad for you; those things work, and they're never going away. The idea that we should pull them all out of the stores so we can give the cashiers back their jobs — that's never going to happen. And that's just a tiny drop compared to all the other things that could be done this way. So what we need to do is lean into it, not fight it and get rejected, not deny it, but say, here is something that's coming, and we should adjust.

Gabe Mydland:
And that's the hard part, it is. Yeah, people find it very difficult to change. Hopefully that won't be the part that gets handed over to AI, because that's what I help people with.

Jack Walters:
Well, and Gabe is a professional, Ph.D.-trained psychologist — that's the kind of job I'm talking about that is far down the path, because almost every case is unique and there's a lot of unstructured decision-making that goes on. But my goodness, look around at the jobs that people do in large numbers across the economy; they're not like that.

Gabe Mydland:
Well, and I even can see how it could be applied to my profession, quite frankly, you have a finite set of variables, and you just plug it in. And I'm replaceable. So yeah, I think it's a challenge for all of us. I do think it brings great things, but nothing comes without a cost.

Jack Walters:
Yes. And the benefits are going to be astounding — in medicine, as we talked about, but also in many service-type fields where people are given advice or support by professionals. A lot of that is going to get much better, and that's great, that's wonderful. But like we said, the change in the employment structure, not only in the US but practically everywhere, is going to be profound.

Jen Burris:
So it's really about adaptability in society.

Jack Walters:
It is and that's a wonderful point, I'm glad you brought it up. The country that is most adaptable is going to be the leader in this and the ones that are least adaptable or most resistant, are going to be behind.

Jen Burris:
Well, I found this topic very interesting and would love to revisit it sometime with you. I'm sure you have a plethora of knowledge that we didn't cover here today.

Jack Walters:
It is such a great thing to be involved with, scholarly-wise and intellectually, because my goodness, next week there'll be some new blockbuster thing that we didn't know about this week — it's just amazing. MIT released something about three weeks ago. Most of the learning models in machine learning and artificial intelligence are trained models: you get a neural network — that's a software thing — you put data in, and it associates outcomes with inputs. But a human has to do that. What MIT released is the first AI that can do it on the fly: it takes the data and starts making generalizations from it on its own. That's the science fiction version of AI that we've had for almost a century, but that's where we're headed in reality.
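He's describing the difference between a model trained once on a fixed, human-curated dataset and one that keeps updating as data arrives. MIT's actual release isn't reproduced here; the sketch below is only a textbook stand-in for the "on the fly" part — an online perceptron that adjusts its weights after every new example in a synthetic stream.

```python
# A minimal contrast with "train once on a fixed dataset": an online
# learner that updates its weights example by example as data arrives.
# This is a textbook illustration, not the MIT model he mentions.
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)                         # weights for [x1, x2, bias]

def stream_of_examples(n):
    """Synthetic stream: label is +1 above the line x1 + x2 = 0, else -1."""
    for _ in range(n):
        x = rng.normal(size=2)
        yield np.append(x, 1.0), (1 if x.sum() > 0 else -1)

mistakes = 0
for features, label in stream_of_examples(2000):
    prediction = 1 if features @ w > 0 else -1
    if prediction != label:             # learn only when wrong: update on the fly
        w += label * features
        mistakes += 1

print("weights after streaming:", w, "| mistakes along the way:", mistakes)
```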

Jen Burris:
Scary and exciting.

Jack Walters:
Very exciting, and somewhat scary too.

Jen Burris:
Well, I'd like to thank Gabe for cohosting again, and Jack, thank you for being our guest. (Jack: "My pleasure.") And thanks to our sound designer, Spencer. Thank you all for listening to Cyberology. Be sure to subscribe.

Jen Burris:
Welcome back to Cyberology, Dakota State University's podcast for sharing and discussing all things cyber and technology. I'm Jen Burris, and I'll be your host. Today we welcome back Dr. Gabe Mydland as a cohost.

Gabe Mydland:
Thank you, Jen.

Jen Burris:
And this episode, we will continue our series on artificial intelligence. And so we have a special guest with us today. Darrin Dutcher. Gabe, would you like to do a little introduction for Darrin?

Gabe Mydland:
Darrin is one of the students in my honors section of EPSY 210, Lifespan Development. In the class, I invited students: they could take a test, or, if there was a topic we were covering that they were interested in, they could come to me with a proposal to do some sort of project. Darrin reached out and said, you know, what I'd like to do is put together a research poster. And I said, wow, that sounds great. We'll talk a little bit about how that involves artificial intelligence, along with what we're talking about in the class, and how Darrin put it together.

Jen Burris:
Awesome. Darrin, do you want to tell us a little bit about yourself?

Darrin Dutcher:
Oh, sure. Born and raised in California, now going to Dakota State University. I'm a Cyber Operations and Network Security Administration major. And yeah, I'm looking forward to this podcast.

Jen Burris:
Awesome. So can you start by telling us a little bit about what inspired your idea for this project?

Darrin Dutcher:
I've always kind of enjoyed psychology and philosophy, and I enjoyed playing with concepts like, can AI truly become a sentient being? So, I've always had a fascination with AI from the start, and in talking about how children develop, I thought it would be great to compare a child to AI and how they both develop, because I feel like there are a lot of similarities in how they develop.

Gabe Mydland:
And of course, we're talking about this in the cognitive sense. What Darrin chose to do, if I may, Darrin, is look at how children begin to understand how to categorize things. This is even before they begin to speak. They begin to recognize, for example, a family pet: it has four legs, it has a tail, it has whiskers, and their family pet happens to be a dog. But on a playdate with another child who has a pet that has four legs and a tail, he learns that this is not a dog; he's corrected, this is a cat. And he starts to understand and distinguish the differences, even though he can't articulate them. In talking about this way that children learn to assimilate and accommodate, Darrin came up with the idea of how that parallels how we train, if you will, a computer. Am I saying that right?

Darrin Dutcher:
Yeah, I mean, the simplest way we train a computer AI program is just giving it a bunch of pictures and having humans identify them. You know, like the little reCAPTCHAs: are you a robot, click on all the stop signs. That helps it process the data and say, okay, these are stop signs. It basically relates them: okay, there's an octagon in all of these pictures, it's all red with lettering there. And it relates them by pixels instead of just by the word stop. Through that, the AI starts to process and say, these similarities help to create a full picture, help to create a pattern.
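A minimal sketch of the labeling-and-training idea Darrin describes, assuming scikit-learn and stand-in pixel arrays rather than real labeled photos (all the data and names here are hypothetical):

# Humans label images ("stop sign" / "not stop sign") and a classifier learns
# to associate pixel patterns with those labels. Random arrays stand in for images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "image" is an 8x8 grayscale picture flattened to 64 pixel values.
images = rng.integers(0, 256, size=(200, 64)).astype(float) / 255.0
# Human-supplied labels, e.g. collected from a reCAPTCHA-style task.
labels = rng.integers(0, 2, size=200)  # 1 = stop sign, 0 = not

X_train, X_test, y_train, y_test = train_test_split(images, labels, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # "this is a stop sign, this is not..."
print("held-out accuracy:", model.score(X_test, y_test))

With real labeled images the accuracy would reflect how well the pixel patterns separate the two categories; with the random stand-ins here it will hover near chance, which is the point of the sketch rather than a result.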

Gabe Mydland:
And so assimilation is taking something and trying to categorize it with what we already know. And accommodation is when we recognize that it doesn't quite fit, so we've got to change the way that we think about this new thing. As I understand it, and I learned a lot about AI from Darrin's project, very much like a human being learns to accommodate to meet the needs of a new situation, that's how we train, if you will, a computer to distinguish and notice that there are differences. And now we have two things rather than just one thing. It's a process that repeats itself over and over again, and of course, our knowledge then accumulates.

Jen Burris:
So it's expanding the AI's definition of something. You show it a bunch of pictures of dogs, and then you throw a cat in there, and it says it's a dog again, and then you recalibrate it, kind of?

Darrin Dutcher:
You recalibrate it. You can kind of train it and show it some pictures of cats, because for the most part, what you do is you'll have a bunch of different pictures, and you'll have people select stuff. So, one of the experiences that I got to go to was at the University of Maryland, College Park. They were working with the classification of imaging. They had a setup with a camera, and whatever the camera was looking at, it would say, oh, that's a monitor, that's a computer, that's a light, and it would just put a box around that stuff. It's basically getting fed a lot of information: people put boxes around items, like this is what a light looks like, this is what a computer looks like, this is what this looks like. And through that, it's basically able to say, okay, all of these boxes have this similarity to them, and it tries to say, okay, that similarity is what this image is. So, it's sophisticated in a way, but it's not as sophisticated as humans at this current level.
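A small sketch of that "recalibration" step, assuming scikit-learn and toy 2-D feature vectors standing in for real image features (none of this is the actual Maryland system; the numbers are invented):

# The classifier first only knows "dog"; then human-labeled cat examples are
# added and it is retrained, so the same input now lands in the new category.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
dogs = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(30, 2))
cats = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(30, 2))

# Stage 1: trained only on dogs -- everything it sees will look like a dog.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(dogs, ["dog"] * len(dogs))
print(model.predict([[2.0, 2.0]]))   # -> ['dog'], the only category it knows

# Stage 2: a human corrects it by supplying labeled cat examples; retrain.
X = np.vstack([dogs, cats])
y = ["dog"] * len(dogs) + ["cat"] * len(cats)
model.fit(X, y)
print(model.predict([[2.0, 2.0]]))   # -> ['cat'] after recalibration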

Gabe Mydland:
Sure, there's a distinguishing of the different features. It's kind of a fascinating process when you think about it, but it takes time, and it takes a little bit of direction, sometimes from someone else. But it happens very rapidly: with children, and we're talking about toddlers starting at approximately nine months of age, the more exposure they have to new things, the more rapidly they're assimilating and accommodating. And with a computer program, it's much the same.

Jen Burris:
So, it's kind of like expanding categories?

Gabe Mydland:
Exactly.

Jen Burris:
How did you go about researching this project once you had the idea?

Darrin Dutcher:
I would say the first thing that I did was hop onto my trusty friend, Google, and look at some scholarly papers. I looked at machine learning algorithms, because there are a bunch of different machine learning algorithms for image classification. I looked at maybe 20 different ones individually and chose one that seemed very rudimentary, so it's somewhat easier to explain. Once I found the method, I looked into that method. Then, okay, let's see how a child takes an image and processes it. So, I looked up some scholarly articles on how a child processes images and read through them. Then I kept comparing: oh, these two are pretty similar, although it's not like they both look at the same image and know exactly what it is. They both get told what the image is: the child looks at the image and is told, oh, that's a dog, and a computer has to get told what a dog is, too. But then the computer goes to a lower level; it looks at the pixels, it looks at the very small details. It requires a larger sample size than humans, because it needs multiple examples, this is a dog, that's a dog, this is a dog, to create that category. It can create the category on the first go, but it wouldn't be very precise.

Jen Burris:
Were there changes in the children's learning as they got older, compared to AI?

Darrin Dutcher:
It's around the same if you boil everything down. Children get told what an object is, and then they keep learning, or they go look in their book and see, oh, this is what a butterfly looks like, this is what a fish looks like. Once you kind of learn what a fish is, then you maybe look at some other fish and you're like, this is a cod, this is a salmon. There are all these different types of fish. And you see that there are subcategories to the category that you create, which can be the same thing with AI.

Jen Burris:
Okay, so is the timespan kind of similar in the learning experiences? Or does AI move a little bit slower because it takes more information?

Darrin Dutcher:
I would say that it actually moves faster. Because you can feed a bunch of numbers to an AI, you can feed a lot of information to an AI very quickly.

Gabe Mydland:
The real advantage, of course, with technology is the processing speed. And I'm really speaking out of school here, so Darrin, correct me on this, but my understanding is that a lot of the ways we've designed computers are really a representation of how we understand our brains to work, particularly in the area of memory. And you addressed this in your assignment too. First of all, we have to attend to something; if we don't attend to it, it's lost. Then it moves into short-term memory, where it can stay for about 20-30 seconds at the most. If we practice and rehearse that information, it moves into something called long-term memory; if not, it's lost again. Up until recently, most psychologists agreed long-term memory was infinite, but now there have been some studies that say, well, it's pretty large, but it's not infinite. Again, if the information is processed, if it's related to other things that we understand and know well, there's a good chance that we can retrieve it later. The advantage of technology is that those things are stored and can be retrieved, depending on the user, if they remember, for example, a file name or something like that. But that processing speed that Darrin was referring to, being able to take in all kinds of information, organize it, and connect it to other things, is so far superior to what we can do as humans.

Jen Burris:
But does it lack the ability to make inferences that we humans can quickly make once we've learned things?

Gabe Mydland:
Right. And I think the parallel is that with humans, the more experiences we have, the more likely we're able to categorize and distinguish between different things. I think the same thing would be true with AI. The more it's exposed to and directed by humans, the more it's able to make those distinctions.

Darrin Dutcher:
Yeah, some of my friends did a really cool project last year on adversarial AI networks. That is basically having one AI fight against another AI. The way they implemented it was they had one AI that creates fake medical record codes, and then they had the other AI identify which ones were fake and which ones were real. In the beginning, the one making the fake medical codes would send them over and they would automatically get detected. But as they kept doing larger and larger numbers, and I mean hundreds of thousands of test runs, it started to get to where the second AI couldn't tell what a fake medical record was and what a real one was.
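A rough sketch of that adversarial setup, assuming made-up four-digit "codes" and a very simplified generator and discriminator; this is a toy stand-in for the GAN-style idea, not the student project itself:

# A "generator" proposes fake codes and a "discriminator" (logistic regression
# over the digits) learns to spot them; the generator then drifts toward the
# fakes that fooled the discriminator. Everything here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def real_codes(n):
    # Pretend real codes have structure: first digit high, last digit even.
    first = rng.integers(7, 10, size=n)
    middle = rng.integers(0, 10, size=(n, 2))
    last = rng.integers(0, 5, size=n) * 2
    return np.column_stack([first, middle, last]).astype(float)

gen_mean = np.array([1.0, 1.0, 1.0, 1.0])   # generator starts far from "real"

for round_ in range(20):
    real = real_codes(200)
    fake = np.clip(rng.normal(gen_mean, 1.0, size=(200, 4)), 0, 9)
    X = np.vstack([real, fake])
    y = np.array([1] * 200 + [0] * 200)      # 1 = real, 0 = fake

    disc = LogisticRegression(max_iter=1000).fit(X, y)

    # Generator update: move toward fakes the discriminator thought were real.
    fooled = fake[disc.predict(fake) == 1]
    if len(fooled):
        gen_mean = 0.8 * gen_mean + 0.2 * fooled.mean(axis=0)

    caught = 1 - disc.predict(fake).mean()   # fraction of fakes detected
    if round_ % 5 == 0:
        print(f"round {round_}: discriminator catches {caught:.0%} of fakes")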

Jen Burris:
When you're talking about pitting the two AI's against each other, does that help you kind of find flaws in things too? Maybe in a cyber-attack for example?

Darrin Dutcher:
I have been looking at implementing AI into cyber, which hasn't been done on a large scale. There are a lot of companies that are like, oh yeah, we have AI in our cyber technology. For the most part, it's not really AI; it's more like a framework or an API. To, let's say, implement AI into this, it gets pretty complicated because each attack can be different. Something it normally exploits might be extremely secure, so it can't exploit that and has to look for a different way in. You can probably tell the AI, hey, look at all these ways in, and then do that. But to my knowledge, I don't think there is a fully automated AI program that can find the vulnerabilities, exploit them, and then say, oh yeah, these are the vulnerabilities that your company has.

Jen Burris:
And you mentioned frameworks and APIs; can you explain for our listeners what those are?

Darrin Dutcher:
Frameworks and APIs tend to be mistaken for AI, but they're more hard-coded. They're not learning as they go; it's just, this is how it's programmed, this is what it will do.

Jen Burris:
So an API is more like completing a task over and over again, versus expanding upon that, as it learns?

Darrin Dutcher:
That's the TLDR (too long didn’t read) of that.
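A small code contrast of the two ideas just discussed, assuming a toy text-filtering task (all messages and labels are invented): the rule-based version behaves exactly as written and never changes, while the learned version picks up its behavior from whatever labeled examples it is given.

# Rule-based: fixed behavior, coded by hand -- it never changes on its own.
def rule_based_filter(message):
    banned = {"free", "winner", "prize"}
    return "spam" if any(word in message.lower() for word in banned) else "ok"

print(rule_based_filter("You are a WINNER, claim your prize"))  # spam, always

# Learned: the same task, but the behavior comes from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["free prize inside", "meeting at noon", "you are a winner",
            "lunch tomorrow?", "claim your free gift", "project update attached"]
labels = ["spam", "ok", "spam", "ok", "spam", "ok"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)
print(model.predict(["free lunch meeting"]))  # depends on what it has learned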

Jen Burris:
Gabe, what was this like, as his professor in learning about this new topic and comparing it to early childhood development?

Gabe Mydland:
Well, again, fascinating. You know, I use technology a lot, and I like using technology. I don't know how it works, and I love all the things that it can do for us in enhancing the quality of our lives. I'm interested in psychology, but I recognize that there are a lot of other subjects, other topics, other disciplines that have a lot of overlap with what I'm interested in. And what Darrin was able to do was really show the overlap between AI, or even computer science on a broader scale, and how we understand our brains to work, in a really meaningful way. Of course, psychology is the study of not only behavior but mental processing. And I understand that computers are our best representation, or maybe a representation, of how our brains work. But to see that it can do more than just what you were talking about earlier, carrying out a command that we've programmed it to do, to go beyond that... I guess I understood this before, but it became really tangible to me that you can actually have a computer begin to make those distinctions, begin to make decisions on its own, based on the information you provide to it. It's amazing. It's kind of scary. But it's also, I think, going to lead to some really exciting developments that, given the limitations we have as humans, are going to enhance the quality of our lives.

Jen Burris:
As you put together your research poster, how did you choose what to highlight?

Darrin Dutcher:
I looked for a lot of the similarities between AI and people and development. Basically, I wanted to make sure these really were similarities, instead of going for a more advanced AI that may not share as many similarities or may use a more advanced process. I also wanted to find a simpler process so I could put it in easier terms to understand, because, as I know, Gabe is still adapting to computers.

Gabe Mydland:
I’m simple. I'm a very simple person. It's alright Darrin.

Darrin Dutcher:
I mean, you do better than my grandma. (laughter)

Gabe Mydland:
I got Grandma beat. But yeah, I think he wanted to make the project accessible to a broader audience than, say, his peers and colleagues. And I think he did a really nice job of providing an overview of the parallels between the two processes. It's just a really fine piece of work. And I'm really proud of what he put together.

Jen Burris:
Did you get any feedback from anyone else besides Gabe?

Darrin Dutcher:
I ran it by my roommate. I'm like, hey, do you mind looking at this real quick, do you have a second? And I just turned on my computer and let him look at it. Tell me if you can understand it and follow it, even if you've had nothing to do with this concept, even if you've never worked with image processing, memory, or AI before. I just handed it to him, and he checked through it, checked the spelling and all that, because spelling's a different thing for me.

Gabe Mydland:
I'm really disappointed you didn't share it with your grandmother, but okay. (laughter)

Jen Burris:
Okay, Gabe, do you have any questions left?

Gabe Mydland:
No, I was just really pleased that Darrin saw this as an opportunity. And again, this is something new that I'm doing in my classes, but it's really proved to be very fruitful in helping students see beyond just the domain that they're interested in, and in helping them expand the things that they're passionate about by looking at things from a different perspective, from a different discipline's viewpoint.

Jen Burris:
That's nice. I bet it's also beneficial for the students to kind of get outside of their comfort zone a little bit.

Gabe Mydland:
Yeah, I think it encourages them to demonstrate their mastery of the content in a different way. And I think that students would like to show that they understand the information, but in a different format, like a poster. I've had students make videos, I've had them do a podcast; they've come up with these ideas. I've kind of told them what I'd like the parameters to be, and they just take off. I really do think, and I try to tell students this, it's your responsibility, it's your job, it's your task to find how what I have to offer is linked with what you're passionate about. They're adding to their expertise about something that they care about, and they're just so excited to see this connection that they didn't realize exists. And it does; everything overlaps. To think of these as separate disciplines or silos is incorrect. It's really more of a Venn diagram.

Jen Burris:
I'd like to thank Gabe for cohosting again. (Thank you for having me.) And Darrin, for being our guest. (Thank you for having me.) And of course, our sound designer, Spencer. And thank you for listening to Cyberology. Be sure to subscribe.

Jen Burris:

Welcome to Cyberology, Dakota State University's podcast for sharing and discussing all things cyber. I'm Jen Burris from the marketing and communications department at DSU, and I'll be your host. In this episode, we'll be talking about artificial intelligence with Austin O'Brien.

And today, I'm happy to introduce you to my co-host, Dr. Gabe Mydland.

Gabe Mydland:

Hi, Jen.

Jen Burris:

Hi, how are you today?

Gabe Mydland:

I'm doing great. And artificial intelligence seems to fit my personality very well.

Jen Burris:

So how so?

Gabe Mydland:

Mainly because I don't have real intelligence. How about that? No, the topic, I think mirrors our understanding of how the brain works. And of course, psychology. The courses I get to teach are about behavior, and of course, mental processing, and how the two influence each other. So, I suspect what we're going to learn here today, I'm going to learn here today too, is how our understanding of the way the brain works, informs how we use and create artificial intelligence. So, I'm really excited to be here. Thank you.

Jen Burris:

And we're happy to have you. Let me introduce our artificial intelligence expert, Dr. Austin O'Brien. Austin is an Assistant Professor of Computer Science in The Beacom College of Computer and Cyber Sciences. So why don't you tell us a little bit about yourself?

Austin O’Brien:

So, just like you say, I'm an assistant professor, and this is my fifth year here. My background is in computer science; my bachelor's and master's degrees come from there. But my Ph.D. was actually in computational science and statistics. That's kind of a departure, but it's really a way of looking at a lot of the machine learning algorithms that we know today. I didn't really know that at the time going into it, but it's worked out really well. And so, at DSU, we've been working on these new artificial intelligence courses and really trying to get folks interested, which has been easy to do; students have been jumping all over the courses that we've been starting to provide. So yeah, I'm really excited with the way that AI is starting to take off here at Dakota State University, and also at other universities in South Dakota. It's just something that's really taken off, and I'm really excited to be a part of it.

Jen Burris:

Awesome. So how about you start off by telling me a little bit or telling us the listeners a little bit about what AI is?

Austin O’Brien:

Sure. Yeah. So artificial intelligence, the whole idea, at least from a computer science perspective, is trying to get a computer to behave in a way that's intelligent, like you would expect a person to. That's really the end goal. We talk about different things in artificial intelligence that we have now, but I wouldn't say that we're anywhere near the end goal of where we'd like to go: the whole sci-fi idea of some self-contained entity that's able to walk around, behave, think, and react to its environment, things along those lines. Right now, we're at the stage where we're doing mini partitions of that, and the goal someday is to get it all working together. So right now, there are these different facets of artificial intelligence. Machine learning is one that's become very popular lately, and really, the reason for that is that computational power has really started taking off and data collection is a huge thing. There's obviously a lot of controversy one way or another about data collection and all of that.

So that's something that we have to think about going forward when we're working on artificial intelligence; the ethics of AI is something that is really starting to take a step forward in our line of thinking when we're working on these algorithms. There's the old-school, rule-based artificial intelligence where, if something occurs, then the bot should do that, that sort of thing. The problem with that is you can't predict everything that could ever happen, so it's hard to create rules for these bots, or software agents, to behave in a very natural environment. That's why we've been moving toward machine learning more recently, where we're able to feed the software agents lots of information, and through this information they, quote-unquote, learn how to behave better. We try to feed in all of the different situations we can so it can learn to behave intelligently, like we would like it to do.

So, that's kind of where we're at right now. Obviously, the folks who have all the computational power, Google, Amazon, all those folks, those are the ones you're hearing about. That's really why they're taking off: they have tons of data, they have tons of computational power. For researchers, we do the best we can with what we've got. That's why there's a little bit of concern, I think, in the field of AI right now, just because these large companies are kind of privatizing this sort of thing. Just by the nature of conducting this research, it costs a lot of money to collect data, to store data, and some of the algorithms need graphical processing units, or GPUs, which are just really expensive hardware to run. So, as far as that goes, researchers are doing our best to keep up with that, because we want open artificial intelligence. We want it to be for everybody to use and to understand, and understanding, I think, is one of the more important things about it, too. So that's what we'd like.

Jen Burris:

So the hard part is that you don't have Jeff Bezos's net worth of...

Austin O’Brien:

No, no, I do not. I don't think the state does, either. But, you know, at the same time, a lot of these companies, and I don't want to paint them in a dark light either, a lot of them actually supply processing power for folks to use. There's a free tier; Google has a service, I think it's called Colaboratory, or Colab, something along those lines. Basically, you can use their computational power if you upload your data, and there are a lot of free databases that you can use for certain projects and things like that. So, Google has theirs, Amazon has their web services, and Microsoft has their version. I think there is a push to open this up, not just for researchers but for anybody who has the interest; there is this free tier for folks to get into it. But to do the really, really interesting stuff, I mean, you need big, big budgets. So that's something that we're working towards; we'll do a few more fundraisers (laughter). But we'll get there.

Jen Burris:

Gabe, I can tell you have some questions percolating there.

Gabe Mydland:

Well, I am percolating a little bit (laughter). I'm more interested in this vast spectrum of abilities that AI can tackle. Where do you fall on the spectrum? Where's your interest? And what are you working on?

Austin O’Brien:

Sure. So my interest, because like you say, there are lots of these different facets of artificial intelligence: the one that's probably being applied the most right now is AI for business, which uses a lot of these machine learning algorithms to do numeric predictions, like trying to predict housing markets, or whether you should decline or accept somebody's credit application, something along those lines. What I'm interested in takes a bit of a turn, and it's more what most people think about when they think of AI: an autonomous agent, or a bot, as you might say, that's able to behave in some environment. An easy example is what's called reinforcement learning, which is what I'm interested in, where a computer basically learns to behave in its environment in an intelligent way.

The example I like to use is that you can train a computer to play an Atari game or something along those lines. Basically, we'll take the screenshot, which is already digital, and feed those numeric values into a deep learning model. Deep learning, some folks might be familiar with the terms neural network, neural net, artificial neural net, is kind of mapped after the way some scientists think the brain might work: we feed it some data, it goes to some neurons, it gets processed a bit with some sort of activation function, and it gives some output. That data percolates through the neural net and finally gives us an output. So in this Atari example, the model reads the screen, and each pixel has numeric values, the RGB color scheme, red, green, blue, each channel a value between zero and 255. It looks at what the pattern of pixels represents, and the output is basically the move it wants to make. For the easiest one, Pac-Man: up, down, left, right. You feed in the screen, and the output is one of those four options. Then, if it did something good, we try to reward it. With an Atari game, the easiest way to do that is just the score: if the score goes up, the agent did great. If it did something bad, we find a way to punish it, like in Pac-Man, getting hit by the ghost, that sort of thing. Or maybe we say that if it doesn't get any points after a long period of time, that's bad, so we punish it for that. This reward and punishment is just a numeric value: if it does something good, give it a positive number; if it does something bad, take some away. In the beginning, it'll start by just randomly making moves, and it doesn't really know what's correct. But mathematically, we can get it to look not just at instantaneous rewards but at the accumulation of rewards over time, and it tries to get the maximum value that it can.

And so it just learns: given what the screen looks like, the pattern of the pixels, it learns that if I go up when it looks like the ghost is below me, I get a reward, I live longer, I get more points, those sorts of things. Through that, this whole reinforcement learning is where a computer learns to do something well, whether that's playing a game, or a robot learning to walk, or something along those lines.
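A compact sketch of that reward-and-punishment loop, using tabular Q-learning on a tiny one-dimensional "game" instead of Atari pixels; the environment and all the numbers are invented for illustration, not Austin's actual setup:

# Tabular Q-learning on a toy 1-D track: the agent starts in the middle,
# gets +10 for reaching the right end, -10 for the left end, -0.1 per step.
# Over many episodes it learns that moving right maximizes long-term reward.
import numpy as np

rng = np.random.default_rng(3)
n_states, actions = 7, [0, 1]          # 0 = move left, 1 = move right
Q = np.zeros((n_states, len(actions))) # expected long-term reward per (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(2000):
    state = n_states // 2
    while True:
        # Explore sometimes, otherwise take the action with the best learned value.
        action = rng.choice(actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state = state + (1 if action == 1 else -1)
        if next_state == n_states - 1:
            reward, done = 10.0, True     # "score went up" -> reward
        elif next_state == 0:
            reward, done = -10.0, True    # "got hit by the ghost" -> punishment
        else:
            reward, done = -0.1, False    # small cost for wasting time
        target = reward + (0 if done else gamma * Q[next_state].max())
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state
        if done:
            break

print("learned best action per state (1 = right):", Q.argmax(axis=1))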

Gabe Mydland:

So, I'm struggling here, because I just watched this on Netflix, or excuse me, Sundance Now, one of the channels that I subscribe to. They were referring to the Norwegian chess master, Magnus... I can't think of his full name. To achieve that title of Grandmaster of chess, he competed against a gentleman from India, who used a computer program, I think one using AI, to think of all the different possibilities given the moves. One thing I was amazed about, and I'm not an avid chess player, is they said that after the first four moves in a chess game, there are something like over 4 billion different possible ways the game could go. And, of course, the advantage of having artificial intelligence, with the processing speed of the computer, is that it can go through all those variations. What you're talking about is not only anticipating the next best move, but how the sequence plays out after that move.

Austin O’Brien:

Right, yeah, absolutely. So as far as artificial intelligence in chess goes, I think back to Deep Blue; I can't remember the year that was running, at that time, I want to say it was the '90s. Basically, that was just a supercomputer working through a game tree: this is the state of the board, if I make this move, then this is what the board would look like, and then it would try to cycle through all possible moves. Like you say, the permutations are a huge number, so it was just a supercomputer trying to do as many as it could before its time limit was up and it had to make a move. With reinforcement learning, it's a little bit different, because basically we found that that's not really tenable; there are so many moves that even now, with supercomputers, that's probably not the best way to move forward. With reinforcement learning, what we're trying to do is have it learn the relationships between the pieces, and it gives this probability. It's not always just straight-up, this move is always best, or, talking about chess, moving the knight to this position is the best thing you can do. Rather, it gives a probability: if I move this chess piece here, I have an 80% chance of winning later on, and so on. And so it's not nearly as much computation once it's actually running. Training the agent does take an incredible amount of time and power, but that's done before the game. Once the neural net is trained, it'll look at the board and compute fairly quickly, out of the different options, which has the best probability of winning, not just for that move but, just like you said, looking forward to the end goal of winning the game.

Jen Burris:

So basically, it's strategizing?

Austin O’Brien:

Exactly, and that's such a good way to put it. That's why we try to set those values around not the immediate reward but the long-term reward. It's actually funny, we were talking about student projects, and a few years ago I had a student who wanted to use reinforcement learning to do tic-tac-toe, a very similar thing. Tic-tac-toe is technically a solved game: if you look at any board of X's and O's, there is technically a best move. But the student wanted to say, well, I want to use reinforcement learning to see if we can get it to learn. And the problem was that he was using immediate rewards and not this long-term goal. What would happen is it would always try to go for a win; it would never go for a block. It was kind of cool, kind of neat, you know.
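A short numeric illustration of the difference between the immediate reward that project used and the discounted long-term return reinforcement learning usually aims for; the reward sequences below are invented for the example:

# Compare valuing only the immediate reward vs. a discounted sum of future
# rewards. A "block" move may earn nothing now but keeps the big reward alive.
rewards_if_attack = [1, 0, -10]   # grab a quick point, then lose the game
rewards_if_block  = [0, 0, 10]    # no immediate point, but win later

def discounted_return(rewards, gamma=0.9):
    return sum(r * gamma**t for t, r in enumerate(rewards))

print("immediate view:", rewards_if_attack[0], "vs", rewards_if_block[0])
print("long-term view:", round(discounted_return(rewards_if_attack), 2),
      "vs", round(discounted_return(rewards_if_block), 2))
# The greedy agent prefers attacking; the discounted view prefers blocking.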

Gabe Mydland:

Just all offense?

Austin O’Brien:

Exactly, and that's what it was: all offense, because it wasn't strategizing. It was just thinking, what is the immediate reward, the closest thing I can do to winning, without actually thinking about the other player or anything like that. So it's kind of really cool.

Gabe Mydland:

So I'm kind of curious about this reinforcement learning with AI. Obviously it works with a game of chess or an Atari game, as you mentioned, but what are the applications in business, in the world of commerce?

Austin O’Brien:

Sure. The whole idea of reinforcement learning is to take in the environment. For Pac-Man, that's the screen; for chess, it's the board. So let's go into the world of the stock market. The environment there is the stocks, and not just the stocks moving, but also what is happening in the world, thinking long term. This is kind of above and beyond what's really out there now, but it's looking at the end goal. A lot of what people are working with right now is looking at the numbers going up and down, but we know that the real world affects these stocks in a dramatic way. So you add more AI: natural language processing is the idea of a computer being able to understand language, so maybe it can read articles from different internet sources and see where different companies have had great success with new announcements, or where something's gone wrong with scandals or various other things. It can use that, and also look at those stocks, and decide, buying and trading, what is going to yield the highest value over maybe a long period of time, maybe a shorter one. Essentially, you'll start training it, and at first it won't know what's right and what's wrong; it'll start making random trades. If it does things wrong, the way the algorithm works is that it tunes it: you saw this environment, you made this move, and that was bad, so try something else, maybe trade a different stock, a different amount, at different times. As far as commerce goes, being able to have that live data is what would allow it to function very well. But coming back to reinforcement learning, it's just being able to look at your environment. You've got to be able to feed it that data so it can make a decision, and it can only make good decisions if it has seen similar situations before. That's how it learns. It's not automatic; it has to train and learn over a long period of time.
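A very rough sketch of feeding both market numbers and a news-derived signal into one decision, assuming a hypothetical sentiment score already produced by some natural language processing step; no real market data or trading is involved, and a hand-written policy stands in for a trained agent:

# Toy decision rule combining a price trend with a news sentiment score.
# In a real reinforcement-learning setup these would form the "state" fed
# to the agent; here a simple hand-tuned policy stands in for the agent.
def observe(prices, sentiment):
    trend = (prices[-1] - prices[0]) / prices[0]   # recent price change
    return trend, sentiment                        # the agent's "state"

def toy_policy(state):
    trend, sentiment = state
    score = 0.5 * trend + 0.5 * sentiment          # weights are arbitrary
    if score > 0.05:
        return "buy"
    if score < -0.05:
        return "sell"
    return "hold"

recent_prices = [100.0, 101.5, 103.0]              # invented numbers
news_sentiment = -0.4                              # e.g. a scandal in the headlines
print(toy_policy(observe(recent_prices, news_sentiment)))  # -> "sell" for these numbers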

Jen Burris:

So they have to learn from their wins and their losses.

Austin O’Brien:

Exactly. Yeah. It's kind of neat because there are a lot of coding libraries available, and students will do that for fun. It's make-believe; they're not actually making trades or anything. But they'll try to train reinforcement agents to do well on stock markets. And it's really interesting, because with this AI, you don't just throw it out there and it works or behaves in a certain way. There's actually a lot of tuning that's still done by humans. So two different researchers trying to do the same thing might get two reinforcement agents behaving in entirely different ways, just because of how they train their agent, how they feed it data, how they treat the reward system, the value system, penalties, that sort of thing. So yeah, there's a lot going on.

Gabe Mydland:

I would assume that what data is made available for the processing, to determine what's rewardable and what's punishable, is key. I mean, the human element is really critical. If you're reading only the Wall Street Journal, you're certainly getting a very good source of information, but you're probably not getting enough information, right?

Jen Burris

And can that lead to bias in your AI?

Austin O’Brien:

Absolutely. And that's kind of the big thing, since we're talking about ethics and AI; bias can play a huge part in that. There's the strict sense of bias, not like in human terms. Let's say facial recognition: one way that we talk about bias, not in the way you might be thinking, is in how you would train an agent to recognize faces, maybe for a webcam that follows your face as you're moving around, or if you play around with Snapchat and all the different filters they can do. It has to recognize your face, essentially. So they have to train that agent with tons of faces, and whether they get those pictures from scraping the internet, stealing from Facebook, or whatever, they get all of these faces so it learns what a face looks like. Well, if you only use straight-on faces, looking right at the camera, then that's what it thinks a face is. As soon as anyone puts on sunglasses, it's going to get confused. And then there's also the other kind of bias that you might be thinking of: if you're only training with Caucasian people, there are going to be problems with folks of other races. That's something that we really do have to think about when working with artificial intelligence: does the data that you have create a bias? You really want it to encompass what it's going to be used for, and you don't want anything falling through the cracks that you didn't think of. Data selection is a huge part of that. We have the phrase garbage in, garbage out: if you don't have enough data, or it's not good going in, your model's not going to work for you.
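One concrete, if simplistic, way to start checking for the data-selection problem described here is just to audit how the training set is distributed across the conditions you care about; the dataset and fields below are hypothetical:

# Quick audit of a (hypothetical) face dataset's metadata: if one pose or one
# condition dominates, the model's "idea of a face" will be biased toward it.
from collections import Counter

training_metadata = [
    {"pose": "frontal", "glasses": False},
    {"pose": "frontal", "glasses": False},
    {"pose": "frontal", "glasses": True},
    {"pose": "profile", "glasses": False},
    # ... in a real audit this would be the whole dataset
]

for field in ("pose", "glasses"):
    counts = Counter(example[field] for example in training_metadata)
    total = sum(counts.values())
    print(field, {key: f"{count / total:.0%}" for key, count in counts.items()})
# Heavily skewed percentages are a warning sign before any training happens.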

Gabe Mydland:

You've discussed the ethical side of AI. Are there structures, are there governing bodies, or how does it work in the world of AI?

Austin O’Brien:

Right now, there's a lot of self-policing. If there is a central ethical agency, I'm honestly not aware of it, and if there is, maybe it's not that effective; not to be rude about it. One of the big, pretty well-known ones is OpenAI; you might be familiar with Elon Musk, who helped start that company. The idea is that artificial intelligence should be open to everybody, so everybody could see how it's working and we could see problems, whether that's bias or just unethical use of artificial intelligence. One of the things they came out with is a natural language processing agent, something able to read natural language, which is normally very difficult for a computer because of context and semantics; think of when people are being sarcastic, that's incredibly hard for a computer to understand. They came out with a program called GPT-3, their third iteration of this natural language processing agent, and they found that with just a little bit of a prompt, it was able to write entire news stories. You give it a prompt, like "this week in the news at the White House," feed that line into it, and it will write an entire news story that would seem plausible. And they became incredibly worried, obviously, about fake news, about artificial intelligence doing those sorts of things. So what happened is Microsoft ended up getting exclusive rights to it, and people can use it, but the source code is not up for grabs anymore. When people use it, it's very limited; it's not in the huge, open context it might have been made for in the first place. You kind of have to license it out, and usually it's bigger companies doing that, for things like chatbots for their customer service. So yeah, ethics and AI are just such a huge thing. Deep fakes, now, are something else. Are you familiar with the idea of deep fakes?
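For a feel of the prompt-to-text behavior described here, the openly available GPT-2 model (a much smaller relative of GPT-3) can be run through the Hugging Face transformers library; this is an illustration, not the GPT-3 system itself:

# Prompt-driven text generation with an open model (GPT-2), as a stand-in for
# the much larger GPT-3 described above. Requires the transformers package
# and downloads the model weights on first run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "This week in the news, the White House"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
# The continuation will read fluently but may be entirely made up -- which is
# exactly the ethical concern raised in the conversation.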

Jen Burris:

Yeah, I've heard a couple of recent news stories: the Tom Cruise videos that were on TikTok, and also a mother who allegedly used deep fake videos to threaten cheerleaders on her daughter's squad, to try to get them off the team, I think.

Austin O’Brien:

Yeah, so if you're not familiar, basically you can manipulate a video: if you feed it someone's face, you can have somebody else doing the action but put anybody's face on, or vice versa, and basically make a photorealistic video out of something that isn't real, a deep fake. That's just a huge ethical dilemma that we're looking at now, because there are even lots of websites where you can just upload a picture and it looks like you're singing a song. I've seen a couple of them, and it's crazy how realistic it is.

Gabe Mydland:

Well, to continue with that, a couple of years ago, after the 2016 election, they were talking about Adobe having some software that not only did the visual, but they were able to take President Obama, who has been recorded thousands of times, and type out a dialogue. It not only visually looked like he was saying it, it also sounded like he was saying it. And being someone who is very active politically, I was like, oh my gosh, that's a lot different than a newspaper report or a journalist writing the story. This is what appears to be a person standing up and taking this wild position, right? And who's to say he didn't?

Austin O’Brien:

There's a ton of research, a ton of grants, I'll say, where people are asking, can we detect deep fakes? The last paper I've seen was 96% effective at detecting a deep fake, and the way they did it was the reflection in the eyes: they could determine whether the reflection was realistic for the environment around the person, which is crazy. That's what I read.

Gabe Mydland:

Did it get to that level?

Austin O’Brien:

Yeah, the model that's able to determine if it's a deep fake, that's what it was able to pinpoint, and that's just one method of doing it, the last one I've read so far. There's just a ton of money going into trying to get ahead of that. Now, I think we're all being made aware of these types of situations, just like you say. And, not just to compound the scary stuff, but being on the cybersecurity side of AI, I've been trying to put those two together. One thing I know of is that you'll get a call, one of those spam calls, and there's really nothing on the other line. They might say hello, and then there's really nothing. They're not even trying to sell anything; it's just this weird nothingness. What they're trying to do is collect your voice: what does your timbre sound like, what does your tone sound like? And then they'll use that to call people in your contacts with your voice and say, Grandma, I need a check for a thousand bucks for school. It's crazy how they're able to do that sort of thing. So, when you're talking about ethics, we've got to be able to get on top of this. There's the ethics side of making people aware of what's going on, being able to teach students where the line is, and then trying to get people to defend against that sort of thing, plus the research to be able to detect when things are going wrong. So, yeah, AI and cybersecurity, or even just security in general, are starting to come together very strongly because of these sorts of things.

Gabe Mydland:

So we've talked about the dark side, and you were talking about end goals. It sounds like there's a lot of promise with AI; what do you see as something that AI will contribute to our existence?

Austin O’Brien:

Sure. So really, the idea is trying to solve problems we just would not have been able to solve ourselves. Coming back to those reinforcement learning agents, the fun thing with them is that they find strategies nobody else has ever really come up with. And if you think about that, you can apply it to any range of problems, in any environment you can think of. There are a lot of environmental problems: the oceans are running hot, we're running out of fish, global warming in general is picking up, and there's all this garbage we're collecting, what can we do with it? Well, there's artificial intelligence that can tell us how to create different chemical compounds. Here's an example from chemistry, with recycling: maybe we're trying to break down Styrofoam cups or something like that. What can do that safely, efficiently, without giving off nasty fumes? Typically, in a lab, you'd have to work with chemicals that can be expensive, time-consuming, and maybe dangerous. With artificial intelligence, we might be able to go down to the molecular level, since we know how these combinations normally occur between different elements, and maybe come up with a new compound to do that without all the expense of the lab work, something that comes up through simulations. Then we try that in real life and see if it can help us out. Maybe that's something we can do to take pollutants out of water, too. Then agriculture is a big one that I would like to work with. With pesticides, it's standard to just spray the whole field, and that can lead to drainage and cause issues. With artificial intelligence, we can get a bot to do what we might call strategic micro-spraying, where it can recognize a weed, just zap it or do whatever it needs to do, and then keep on moving. So we're not spraying mass chemicals, just little bits where we need to, or maybe just digging the weed up, whatever you want it to do. There are a lot of different AIs working together there: the reinforcement learning bot that has to drive around, then image recognition, which is its own thing, so it can recognize that a plant is actually a weed and not your soybean plant, and then dig it up and return before the battery dies. Lots of things work together to make it work. So, solving these huge problems that are coming up for us: we've got so many people on Earth, we've only got so much food, how can we handle this? These are big problems, and maybe AI can help us out, whether in a pure application or even just by coming up with new strategies we haven't thought of.

Gabe Mydland:

Amazing. Exciting.

Jen Burris

That brings up a lot of ideas for opportunity, and maybe a little hopefulness for some of the stuff going on.

Austin O’Brien:

Yeah. And that's the thing. I know we were talking about the bleakness of it, but when I look forward, I don't think of it nearly as bleak as what we were talking about earlier. With any tool that comes up in human history, there are people who are going to use it for nefarious reasons. But eventually there's going to be regulation, and there are enough good folks, I think, who are going to be willing to step in front of it and curb it where they have to. So when I think about AI, I don't have any of those apocalyptic worries. I don't really worry about it; I think we'll get way more good than bad.

Jen Burris:

So, thinking about AI way down the road: you train it to try and solve a problem, and its intelligence supersedes ours, so it starts disregarding how that would impact humans or something. Is that even plausible?

Austin O’Brien:

When we're talking about those bots, you give them those values; we're trying to reward them so they reach some end goal. Sometimes it doesn't necessarily know what the goal is; sometimes it just tries to keep accumulating value. But there are also punishments. And really, it comes down to, because I was talking about that student who was doing tic-tac-toe, at the end of the day there is still so much of a human element behind it, at least right now. I can't see a system that would cause any sort of hurt to a human, whether physical or mental or anything along those lines, or just remove humans from the picture. I really don't see that happening on its own; it would take somebody going out of their way to make it that way to begin with, and that would just be odd. And to have something that could actually affect people on a large scale would be incredibly expensive and time-consuming; like I say, all that computational power. So you're talking about governments and large companies being the ones you'd have to worry about trying to do something like that. That's why we have our great cybersecurity agents trying to stay on top of those sorts of things, making sure folks aren't doing gnarly things to hurt other people. But as far as the average Joe goes, even if you have the intelligence and the know-how to build something like that, the resources you would need to actually make it work just aren't feasible at this time. So I'm not too worried.

Gabe Mydland:

So I'm thinking, let's say I'm a student, and I'm listening to this podcast, and I'm really excited about what I'm hearing about AI, and I'm thinking about coming to DSU to explore this. What kinds of classes does a student who's interested in this field take? What's their program of study?

Austin O’Brien:

Sure. So you're working with computers, and there is a lot of programming going on, so I'd say computer science is at your core: programming, algorithms, things like that. Especially with machine learning, the backbone of a lot of it is statistics, so I would say learn as much statistics as you can and run with that, and also just math in general. Neural nets, in order to work properly, use a lot of linear algebra, which is matrices and vectors being multiplied and added together, all that fun stuff.

Gabe Mydland:

I'll take your word for it. (Laughter).

Austin O’Brien:

And multivariate calculus, right. So building those algorithms is fairly math-intensive. I kind of equate it to someone building a car versus being able to drive a car. We're at the point where these software libraries, these packages, let a programmer build these agents without having to know all of this intense math; the computer can do that behind the scenes. It's still really good to understand, so if you have to tweak it, you can do that fairly quickly. But the reason we're doing these courses now at the undergraduate level is that it's just starting to become possible for students to really take off and build these agents without that incredible mountain of statistics and math behind them. So, getting back to your question: as much math and stats as you can really understand, and then programming. But after that, we're working on building AI at Dakota State University as a full-fledged degree. And part of that is that we want people to actually apply it to all of these different problems, so really taking an expanse of different majors: applying it to music, to psychology, to agriculture, to teaching. All of these different things are so viable that we really want a diverse set of students who want to apply AI to other fields. What we really want to see is this "AI for all" idea, so we want all sorts of students at all levels.
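A bare-bones illustration of the "matrices and vectors being multiplied and added together" inside a neural net: a single forward pass through one hidden layer in plain NumPy. The weights are random, so the output is meaningless; it only shows the math the libraries normally hide.

# One forward pass of a tiny neural network: matrix multiply, add a bias
# vector, apply an activation function, repeat.
import numpy as np

rng = np.random.default_rng(4)
x = rng.random(8)                 # an input vector, e.g. 8 pixel values

W1, b1 = rng.random((5, 8)), rng.random(5)   # hidden layer: 8 inputs -> 5 neurons
W2, b2 = rng.random((2, 5)), rng.random(2)   # output layer: 5 -> 2 scores

def relu(v):
    return np.maximum(0, v)       # a common activation function

hidden = relu(W1 @ x + b1)        # matrix-vector product plus bias
scores = W2 @ hidden + b2
print("output scores:", scores)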

Jen Burris:

And you guys have an AI minor now, right?

Austin O’Brien:

Yeah, we started with a specialization. We were just kind of playing with the idea; we had students who were interested, so you'd do some courses and get this specialization. Then we said, well, let's take it another step further. So that's what we did: we built the minor, and those courses really shot off. And so we've been working to see if a Bachelor of Science in artificial intelligence is tenable. We've been working on building the curriculum, and as far as I understand, we anticipate offering that this next fall.

Gabe Mydland:

Wow, that's great.

Austin O’Brien:

Yeah, so a Bachelor's in AI, so we're really stoked for it. Yeah.

Jen Burris

And can you speak to any of the research that you might be working on in AI, or that's going on here at DSU?

Austin O’Brien:

Yeah, so going back to using AI, and just because Dakota State has a huge footprint in cybersecurity, I worked with a graduate student to build an agent that would do penetration testing. A penetration tester is hired by some company to find the faults in their security system, to try to actually hack into it, that sort of thing. So we were trying to build an agent to automate that process: going into computer terminals, automatically typing commands, finding, quote-unquote, sensitive files, that sort of thing. That's something I've been working on with students over the last few years. And then on my own time, eventually I would like to get more into agriculture. I really like the idea of these bots working in the field; self-driving tractors are already kind of a thing that's out there, my father-in-law has one of those, it's pretty fun. So, working with other universities in the state that have the agricultural resources to do that research; it's good to be collaborative whenever you can. That's really great here in South Dakota, where agriculture is the primary economic factor, as far as I understand. So those are the things I've been working on, that precision ag. But we've been hiring new faculty over the last few years who are super interested in AI besides me. Some folks are doing research on letting the computer look at X-rays and determine the probability of cancer, let's say in the lungs or something like that. And we have others working on what's called edge AI. I was talking about how much computational power it takes to run these things; well, edge AI is where the heavy computation is done on maybe a central server, and then it's beamed, either via the internet or a wireless network or something like that, to a mobile device or a device on a tractor.

Jen Burris:

So you don't need all that space.

Austin O’Brien:

Exactly. You don't need all that computational power; the device can still run the intelligence side of it without really burning the battery.

Jen Burris:

Those definitely sound like great things to be looking into, especially the ag in South Dakota.

Austin O’Brien:

So that's just what interests me. But the fun thing about this is that students come up with the best ideas. They'll walk up to me and just say, I want to do this, and I'm like, that's cool, let's go pursue it. Whether it's playing with games, or, like I say, a lot of students are interested in the stock market these days. And we've developed a new relationship with Sanford Health, so there's a lot that we can do there as well with artificial intelligence in the medical community. That's another exciting opportunity that's opening up for us. So yeah, a lot going on. We're stoked.

Gabe Mydland:

I'm excited about the idea you talked about: not just a background in mathematical computation and statistics and things like that, but looking and hoping for students across disciplines to jump into this. And I hope that at some point, when we develop the curriculum even further, those kinds of classes might be an elective or two that students can pursue, so they can not only follow their professional passions but add another dimension, using and understanding how AI might be something that can assist them in their futures.

Austin O’Brien:

Right. Absolutely. So maybe, like you say, if their bachelor's degree is in education or something along those lines, maybe they can do an AI minor and see how that can help them, and at least, like you say, understand what's going on. And then, I don't want to talk too much about the major because it just hasn't been solidified yet, but the conversations we've been having are that we would prefer these AI students actually pursue a minor outside of technology. They can stay in technology if they want to, but we really want students from all over, because it allows people to think of how to apply AI to problems that just haven't even been thought of yet.

Jen Burris:

It kind of offers a diversity of thought in the AI industry, then?

Austin O’Brien:

Absolutely, and that'd be great. That's where new ideas come up. Somebody uses AI for such-and-such a problem in education, and somebody else in some other program, maybe athletics or something along those lines, says, hey, I see what they're doing there, I can twist it a little bit to work with mine. So it just opens up this idea of applying AI in all these different ways that we haven't thought of yet.

Jen Burris:

Sounds like an exciting area for new students to look into.

Austin O’Brien:

Hope so, yeah. Yeah, we're excited.

Gabe Mydland:

If I were only 40 years younger.

Austin O’Brien:

Oh, you could do it now.

Gabe Mydland:

Well, I guess I could.

Jen Burris:

Yeah, there's no timeline.

Austin O’Brien:

Four years from now, you'll be four years older either way. Might as well know AI.

Gabe Mydland:

That's true. I can't wait to tell the wife (laughter). Honey…

Jen Burris:

Have her help you with the math homework.

Gabe Mydland:

Oh, yeah, definitely. Definitely, because you really freak me out on the math.

Austin O’Brien:

It's not so bad.

Jen Burris:

Well, anything else that you want to add while you’re here?

Austin O’Brien:

Let's see. I want to make sure I plug the program, but I think I did that pretty well. Every time I've heard an interview about AI, there's always that doom and gloom that comes up, and I guess I just want to say to folks: don't worry too much about that. It's way more sci-fi than you think. Even with the deepfakes and things that are real, there are folks working to rein it in. So I was going to say, don't panic, because a lot of people do. I think the future with AI is actually really exciting. I think it's going to be a lot of fun, and I think it's eventually going to help us solve a lot of the problems many of us are worried about in the grand scheme of things, with people working with AI to bring together a better future than we might have had without it.

Jen Burris:

Excellent. Well, I want to thank Austin and Gabe for being here and chatting.

Gabe Mydland:

This was fascinating. This was great.

Austin O’Brien:

I appreciate it.

Jen Burris:

It was a learning experience for us all, I think, and I want to thank Spencer, our sound designer.

Jen Burris:

Welcome to Cyberology, Dakota State University's new podcast, where we'll be sharing and discussing all things cyber. I’m Jen Burris from the marketing and communications department at DSU, and I'll be your host. Today we'll be talking about cybercrime, which, generally speaking, is considered criminal activity involving a computer network or network device.

I have a couple of experts here with me. I'm excited to welcome my illustrious cohost for the episode, Dr. Ashley Podhradsky. Ashley is a woman of many accomplishments and almost as many titles at DSU. She is an Associate Dean in the Beacom College of Computer and Cyber Sciences, where she is also an associate professor of digital forensics. She is the founding director of DigForCE, a digital forensics lab that is a regional resource for law enforcement agencies and businesses that have been victimized by cybercriminals. She is also the founder of CybHER, a program with the mission of empowering, motivating, educating, and changing the perceptions of girls and women in cybersecurity. But that's not all: she's currently serving as Interim Vice President of Research and Economic Development here. Ashley, why don't you tell us a little bit about yourself?

 

Ashley Podhradsky:

Well, that is a big list as you're reading it. I am excited to be here today and to talk about the work that Dr. Arica Kulm is doing in the DigForCE lab at Dakota State University. Connecting with students is one of my favorite things about being a professor: you get to learn their strengths and interests and watch them excel in their careers. I met Arica when she was coming to study for her master's degree, and then I asked her, would you consider a Ph.D.? I was thrilled when she said yes. Today Dr. Kulm is the first graduate of our Ph.D. in Cyber Defense program and is our lead digital forensic analyst in the DigForCE lab. Creating that lab is a big passion of mine, because our field of digital forensics and incident response helps people, organizations, and the government here in South Dakota and beyond address cybercrime.

 

Jen Burris:

Amazing. And with that, why don't we have Arica talk a little bit about herself?

 

Arica Kulm:

Yeah, so thank you, Ashley, for that generous introduction. Like Ashley said, I came to DSU to pursue a master's degree; it was kind of a career change for me. I found myself wanting to get into a field that was interesting and impactful. I think that's what I said: I was looking for a job that would be interesting and make an impact on people. When I was going through the master's program, people would often ask me, what are you going to do when you're done? And that always made me uncomfortable, because I really didn't know. I knew I liked forensics, I knew it was interesting, but in our area that so often leads to law enforcement, and obviously I don't have a background in law enforcement. So I was really fortunate to be here right at the perfect time, when DigForCE was being launched. And honestly, when I started the master's program, I had no intention of pursuing a Ph.D., but it was the same thing, perfect timing: the Cyber Defense Ph.D. happened to be offered right as I was finishing the master's program, and I thought, sure, why not. I had finished basically all the core classes and had the research classes and a dissertation left, and I thought, well, a dissertation doesn't seem so hard, which in hindsight was a little short-sighted. But I'm now finished and fortunate enough to be working in the lab, and I get to come to work every day and do a job that I love doing with people that I really enjoy being with. So I'm very fortunate.

 

Ashley Podhradsky:

We're at an intersection in our field there. Technology, IoT, our wearables, our phones have become such an integrated part of our lives, and at the same time, people do things they shouldn't do with those devices. Investigating what data resides on the device, where the device was, and what people did with it, that's what this field is all about. Having a person who's inquisitive and intelligent and can take those puzzle pieces, put them together, and tell us a story is what this field is, and Arica excels in that space. And fortunately, with her leadership and work, we've been able to help quite a few agencies investigate cybercrime that's occurred throughout our state.

 

Jen Burris:

Is that something that happens quite frequently?

 

Arica Kulm:

It’s very frequent, although it depends on what you consider a cybercrime, because what we do in the lab is more host-based device forensics versus, you know, a network intrusion or data breach or that type of thing. We can certainly do that, but what we've done up to this point is more the host-based device forensics.

 

Jen Burris:

Okay. And can you talk a little bit about what the process is like with the host forensics?

 

Arica Kulm:

Sure. So, as Ashley said, we work with different agencies here in the state and some federal agencies as well. When they have a criminal investigation where they've seized a device, they'll submit it to us, and it always has to come along with the proper paperwork, either a search warrant or a signed consent form. We read that form to see what we're authorized to look for, because it's not always blanket consent to look for everything there is on the device. So if it's a drug case, we're looking at chat information, images, communications, that type of thing. We extract the data in a forensically sound way and go through and look for whatever we're looking for. And then a big part of what we do is write very detailed reports: how we got the device, step by step, everything we did with it, documenting all of that, and then what our findings are. So a couple of things we really need to be conscious of are being very detail-oriented and having very good writing skills. One of the last reports I did was, I think, 50 pages. So yeah, it can be
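Arica mentions extracting data "in a forensically sound way" and documenting every step. One common ingredient of that kind of workflow, offered here only as a general illustration and not as a description of DigForCE's actual tooling, is hashing an evidence image at acquisition time and again later to show that it hasn't changed.

```python
# Illustrative only -- a generic way to show an evidence file hasn't changed,
# not a description of DigForCE's actual workflow or tools.
import hashlib

def sha256_of_file(path, chunk_size=1024 * 1024):
    """Hash a (possibly large) evidence image in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Record this value in the report at acquisition time...
baseline = sha256_of_file("evidence.img")   # hypothetical file name
# ...and recompute later; a match supports that the copy is unaltered.
assert sha256_of_file("evidence.img") == baseline
```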

 

Jen Burris:

Wow, time-consuming.

 

Arica Kulm:

It can be long and time-consuming. But you know, Ashley mentioned it being like a puzzle. Starting a new case is also a little bit like reading a new book: you're reading through the case report to see what it's all about, and then you're going through the evidence to see what matches up with what's in the case report. And kind of like a book, sometimes it's super interesting and really kind of sad when you get to the end, and sometimes about halfway through you're just like, ugh, I just want to be done with this. Even then you can't just shut it and be done; you have to finish and do a thorough job no matter what it is. But that's what I would equate it to, kind of like reading a book.

 

Ashley Podhradsky:

Okay, so the lab's working on a lot of drug cases, as Arica mentioned, but another example would be embezzlement: people who are in business together, and perhaps one starts selling inventory online and is cutting the partner out of the profits. So we take a look at the different sites they visited, the different transactions on their machine or system, and the communications they had. People document a tremendous amount of what they're doing, and being able to pull those pieces together to share what happened is all part of this space.

 

Jen Burris:

Do you think some people even document things that they might not realize are telling on themselves? Do you find that?

 

Ashley Podhradsky:

Oh, absolutely.

 

Arica Kulm:

Absolutely. I find a lot of screenshots. People may delete the text message, but they've screenshotted it, and it's saved as an image on their phone.

 

Jen Burris:

I have a lot of screenshots on my phone. Yeah.

 

Ashley Podhradsky:

You wouldn't want someone going through years of your screenshots, no.

 

Arica Kulm:

Even as an innocent person, I don't want somebody going through my phone.

 

Jen Burris:

Yeah. Okay. So, what is a standard day like for you when you're working on a case? Is it all research and reporting?

 

Arica Kulm:

It varies. I would say we average probably three cases being submitted a week. So it's coordinating with those law enforcement officials on whether they're going to drop a device off or send it to our office. Then it's taking in the device and the information: we photograph everything when we get it and make sure we have all the proper documentation. If there are any questions about what they're seeking or what they're looking for, we communicate back with them. Once we photograph it, we do the extraction, so we're working with the device itself, and once we're done with that, we process it and look through that information. So some days are routine and you're kind of doing the same thing, but other days, not so much. It's coordinating with those officials as well.

 

Jen Burris:

So, would you say there's a lot of collaboration in your department in DigForCE? Or is it kind of solo?

 

Arica Kulm:

It's pretty solo a lot of the time, yeah. If we're doing research, there's collaboration: we have a research student employee who works in our office and is a great help to us. He does a lot of our research for us if there's something we're not sure about. For example, we got a GPS device in last week that we haven't worked on before. He doesn't do the work on the device itself, because that's evidence, but he can do the research online and try to figure out what the first step would be and how we would handle it.

 

Ashley Podhradsky:

Yeah. And that's really helpful, having student employees who don't touch anything related to the actual case, but we can say, here's this new router, or here's this IoT wearable device; tell us what other people have found and what's been published about it, so we can take that in and use it to advance the work.

 

Arica Kulm:

And we're fortunate we have a student right now who's reliable, has great communication skills, and is just a great student.

 

Jen Burris:

It's a nice learning experience?

 

Arica Kulm:

Yeah, you know, that's not always easy to find, so I'm very thankful for that as well.

 

Ashley Podhradsky:

I'd like to bring up one of the specialties we have in the lab, and I'm going to tee this up for Arica because she's not going to say how awesome she is in this space. The dark web is such an emerging part of our work. Criminals are using it to obfuscate their location, and transactions for illicit goods are occurring on it. There are very few forensic investigators who understand how to find host-based dark web artifacts on a machine, whether that's Linux, a phone, or a Windows-based system. Dr. Kulm’s dissertation was focused on this; she spent a couple of years honing her skills and understanding what data resides on a system and how you can analyze it and use it in your case. She's gotten to the point where she is a national leader in this space. People have been using the dark web for a long time, but more investigators are starting to understand that it's being used and recognizing that they don't have the knowledge to properly investigate it. That means a lot of people are perhaps walking in situations where they could have been prosecuted. So I'd like Arica to talk to us about her work on host-based dark web artifacts, and perhaps share any anecdotes of cases where she has used it.

 

Arica Kulm:

Yeah, so my dissertation was on finding those host-based artifacts and creating a framework that investigators could use to assist in finding them, because they're not always obvious and easy to find. I used Justin Nordine’s OSINT Framework, and it's a clickable, yes/no, if-this-then-that type of framework. You go in, and the first question is: are you dealing with Windows? Are you dealing with macOS? Or do you have a Tails drive, which is a bootable operating system used to access the dark web? As you step through, it asks a series of questions to walk you through what you're trying to find on that system, and then it gives you the artifacts you can look for. Not every artifact listed in the framework will necessarily be found on a system, and there may be artifacts on a system that aren't in the framework; like any framework, it's a work in progress, but it is a good guideline to help investigators find those artifacts. One way I validated it was to have some members of our South Dakota DCI ICAC (Internet Crimes Against Children) task force go through it, use it, and actually apply it in a real case. And one of the things we're finding, and we've had a couple of cases recently, is that the people using the dark web to go out and buy drugs are high school students. So even though millennials and older may have heard of the dark web and think, ooh, I don't know what that is, the kids know what it is.
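Dr. Kulm's framework itself isn't reproduced here, but the "clickable, yes/no, if-this-then-that" idea can be sketched with a toy decision tree in Python. The questions and artifact lists below are invented for the example and are not the real framework's content.

```python
# A toy illustration of a clickable yes/no artifact guide -- NOT Dr. Kulm's
# actual framework; the questions and artifact lists are made up for the example.
DECISION_TREE = {
    "question": "Is the system Windows?",
    "yes": {"artifacts": ["Tor Browser install folder", "prefetch entries"]},
    "no": {
        "question": "Is it a bootable Tails drive?",
        "yes": {"artifacts": ["persistent volume", "USB metadata"]},
        "no": {"artifacts": ["macOS application support folders"]},
    },
}

def walk(node):
    """Step through the tree interactively until a list of artifacts is reached."""
    while "artifacts" not in node:
        answer = input(node["question"] + " (y/n) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    return node["artifacts"]

if __name__ == "__main__":
    print("Artifacts to look for:", walk(DECISION_TREE))
```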

 

Jen Burris:

I find it an interesting topic. But I know very little about the dark web itself.

 

Arica Kulm:

Right, it's this mysterious thing; we know it's there, but maybe we don't want to talk about it.

 

Jen Burris:

What do you think that is?

 

Arica Kulm:

I would equate it to ICAC a little bit: you know it's there, but it's maybe a little distasteful, so you don't necessarily want to address it.

 

Ashley Podhradsky:

ICAC is internet crimes against children. It's a lot of the child pornography casework that's done.

 

Arica Kulm:

You know it's there, you know it's happening, but maybe not to the magnitude it really is. And if we just don't talk about it, then it's not an issue, which we know is not the case.

Ashley Podhradsky:

So when you go to Google and put a search term in, you're going to get page results that have been indexed based on those keywords. When you go to the dark web, you traditionally use a utility like Tor, the onion router, and pages there aren't indexed. Addresses have the .onion ending, and they're alphanumeric, so you're not going to know what a site is. It's not cnn.com or foxnews.com. It's 1AA7W8, you know,

 

Arica Kulm:

They're 16 characters or longer and not easily remembered; you have to go look them up and cut and paste or type them in.

 

Ashley Podhradsky:

So, the point is, to get where you're going, you have to know what's there. People move their sites around often so that others don't find them, though there are some well-known marketplaces that have had a persistent presence. There are also some legitimate uses of the technology. The New York Times has a .onion page, so people across the world who might be in countries where that type of news is prohibited can actually read it. The technology was actually designed by our government so that our citizens across the world could communicate anonymously. With that anonymity, people started realizing, hey, I could do more than just send a message back to the States; I might be able to make a transaction that people can't necessarily trace. So when you log on here in Madison, South Dakota, it takes your web connection and pops it all the way around the world through multiple routers. That's the onion: onions have layers, and it goes from layer to layer, and then it might show that your exit node is in Russia, or it might be North Korea. It just depends. But the whole point is that it's obfuscated your location through enough hops that we can't really tell where you're coming from.

 

Jen Burris:

And that makes it harder, then, to find the person committing whatever acts they're doing, right?

 

Ashley Podhradsky:

Yes, absolutely.

 

Arica Kulm:

But it also encrypts that data at each step, so not only is it obfuscated, it's encrypted. You don't know what's inside that data until it gets to the very end. Both of those things together lend themselves to criminal activity.
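Ashley's "onions have layers" picture, together with Arica's point about encryption at every step, can be sketched conceptually: the sender wraps a message in one layer of encryption per relay, and each relay can peel off only its own layer. This is a toy illustration using the third-party cryptography package, not how Tor's real protocol works.

```python
# Conceptual toy of layered ("onion") encryption -- not Tor's real protocol.
# Requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

# Three hypothetical relays, each with its own key.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys):
    """Sender: encrypt for the exit relay first, then wrap outward."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(blob: bytes, keys):
    """Each relay in turn removes only the layer it holds the key for."""
    for key in keys:
        blob = Fernet(key).decrypt(blob)
    return blob

onion = wrap(b"hello from Madison", relay_keys)
print(peel(onion, relay_keys))  # original message appears only after every hop
```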

 

Ashley Podhradsky:

But it was created for data privacy, you know, we create a lot of things for good. And then people think, Hey, I could use that for the opposite reason. And so that's why we need people like Dr. Kulm who can do these types of investigations.

 

Jen Burris:

And what interested you in doing that dissertation on the dark web and getting further into that area?

 

Arica Kulm:

So, I'd never heard of the dark web until Ashley asked me to do some research on it. I'm like, how can this be?

 

Ashley Podhradsky:

That was your first time?

 

Arica Kulm:

That was the first time I'd gotten into it, yeah. I mean, I had probably heard of it.

 

Ashley Podhradsky:

I love that! Yeah, I was going on a long flight, so I asked her to put together a little book to read on some new technologies in the space. Ah, I love that.

 

Arica Kulm:

So as I got into it, I'm like, how can it be that this is not traceable? That doesn't seem possible, that you can leave no trace behind, knowing what we know about forensics. It turns out the traces are minimal, and like anything, what's left behind depends on the sophistication of the criminal.

So if they're sophisticated enough, and they use something like Tails, there may not be much trace left.

 

Ashley Podhradsky:

Dr. Josh Stroschein and I did a case down at the FBI in Omaha a few years ago about deanonymizing Tor traffic. If there are misconfigurations and certain settings within your browser, we can start to see the true IP; it's not masked like it was before. But those are always emerging situations, because you don't know how a configuration setting will change the output unless you really dig for it. So there are possibilities, but it's not something that is absolute.

 

Jen Burris:

Okay. And when you find these traces, do they lead to more information about what's going on? Are you pretty confident about being able to expand on it once you find a way in?

 

Arica Kulm:

You can, yeah. It can show you sites that were visited, and it can be images that were downloaded from those sites. You'll find the dark web is, or can be, full of malware, so you may find traces of malware on that system. And you can find that they installed Tor, which, like Ashley said, is used for privacy, so it doesn't in and of itself mean they were doing something criminal. But that, combined with some of the other things, can often be an indicator of what they were doing. Yeah.

 

Jen Burris:

Very cool. I think that's something a lot of people are interested in and don't know a lot about, so I'm glad we covered that topic. Can you tell us how the dark web might impact everyday people, or if it does?

 

Arica Kulm:

I don't know that it really does, if you don't go out and look for it. It would be something for parents to be aware of: if you're the parent of a teenager and packages start showing up in the mail that you don't recognize, because if you're ordering something on the dark web, that's genuinely how it shows up, through the US Post Office. So if your student starts acting funny, that would be something to be aware of.

 

Ashley Podhradsky:

Yeah, there have been a few different cases we've done that started with a tip from the post office about multiple packages being delivered. That tip goes to law enforcement, law enforcement does their investigation, confiscates the devices, and turns them over to us to analyze. Brookings had a situation where someone was using stolen credit cards to purchase goods and have them shipped to their residence. They might say, well, the packages just showed up, but once you look at the system, it says, well, they showed up because you bought them; you put those transactions on your systems. So there are tells and things like that as well.

 

Jen Burris:

And do you always know what you're looking for?

 

Arica Kulm:

Not always. Now, the last case that I had, it didn't start out as a drug case; it started out as...

 

Ashley Podhradsky:

Domestic terrorism.

 

Arica Kulm:

Yeah, domestic terrorism. Thank you, that was a good word. And as I was looking for evidence of that, I started seeing some of these other indicators of dark web activity, with weapons and drugs as it turned out. So it's not always what it seems.

 

Jen Burris:

I don't know how I would react if I were the one finding that information out. So how do you guys feel when you start to solve the case, so to speak?

 

Arica Kulm:

It just kind of depends on the nature of the case. Like I said, if we're in there with a search warrant, it's not always a blanket look-for-everything type of case. So if we're in there looking for drugs, for example, and we find child pornography, I have to stop right there. I can't investigate that, for two reasons. Number one, we're not a law enforcement facility, so we can't legally possess that material any more than anyone else can; at that point, we stop and turn it over to the DCI investigators. And number two, the search warrant doesn't specifically say that's what we're looking for, so it's something we can't do. That would be a case where we have to just stop, and then we don't always know what happens to that case. So that can be a little frustrating.

 

Jen Burris:

So, DCI would take over the case completely at that point?  

 

Arica Kulm and Ashley Podhradsky:

Correct. Correct.

 

Arica Kulm:

And that can be a little frustrating, not because we're turning it over to them, because they're more than capable and we know it's in good hands, but because we don't always know what happens to it at that point.

 

Ashley Podhradsky:

The digital forensics lab at Dakota State University has dual leadership. We know we need academic leadership on the DSU side, but we also need law enforcement leadership through the DCI side, the Division of Criminal Investigation, so I work jointly with Agent Toby Russell. He is a DCI agent, he's in our lab, and he works with Arica; he works with Arica more than I do on a daily basis, and he has a wealth of information and knowledge. Our funding through the Attorney General's Office supports this joint leadership between DSU and DCI so that the output and reports we produce are accepted in law enforcement. Arica goes and testifies in court, participates in expert witness testimony, and does those things. So, through our funding through the Attorney General's Office and consumer protection, and our partnership with the Division of Criminal Investigation, we're set up to succeed in this space and to assist law enforcement in South Dakota in solving cybercrime. In addition to the digital forensic casework that Dr. Kulm leads with our partners in DCI, we also do investigations for consumer protection in the Attorney General's Office. So individuals, organizations, and government entities in the State of South Dakota who have had a crime or cybercrime: perhaps it's a scam where a business lost $50,000 on payroll diversion, or a scam where someone was tricked into buying gift cards because the CEO supposedly asked them to, and they need to figure out who that person was. We've helped in multimillion-dollar scams that have arisen here in South Dakota for, as I mentioned, individual people, businesses, private entities, and the state government.

 

Jen Burris:

Do you think that a lot of people don't hear about these things? Because when I think about South Dakota, I don't think about multimillion-dollar scams taking place.

 

Ashley Podhradsky:

Yeah, people aren't excited to advertise that they were scammed out of money, because you might not have as much confidence in that business and their operations. And so people think, well, we have our data breach notification law, we should know this stuff now. Well, not really, because there are certain parameters that have to be met, and a loss of a million dollars doesn't meet that threshold if they didn't lose personally identifying information. So there are things that happen daily here in the state, unfortunately, but the biggest takeaway from all the cases we've looked at is that when you incorporate the human into it, you can usually stop it in its tracks. It's very common for HR departments all across our state and country to get an email or fax from someone that says, I updated my checking account, please use this new routing and account number to process my next payroll. Oftentimes, that payroll change form is found on the company's website, along with where to fax it, so the scammer might go to LinkedIn and say, hey, I know this executive works at this company, here's the form that changes payroll, and here's where I fax it. The number one thing people can do is just pick up the phone and say, hey, Mrs. CEO, did you really change your payroll? And if the answer is no, don't process it. Schools are hit hard in that area, and municipalities are hit hard in that area, and it's all about the scam exploiting the quickness we're set up to operate at. When you throw in the virtual world of a pandemic, it further exacerbates that. So whenever you hear something or get something in that context, just reach out and call the person. Don't email them back, "Hey, is this really you, Mrs. CEO?" because they're going to say yes, and it might not be them. It likely isn't. So just as a tip, always reach out.

 

Arica Kulm:

And I think CEOs can help themselves too, by informing their staff: I will never ask you to buy gift cards, that type of thing. It's so easy to handle everything by email, and I get that, but keep the personal connection and just pick up the phone. And one thing that Ashley didn't mention that we also do for the Attorney General's Office is consumer education.

 

Ashley Podhradsky:

Yes, yes.

 

Arica Kulm:

You know, we do a lot of that out of our lab, just helping consumers protect their private information, whether it's on social media or just educating them about what they're putting out there, and being aware of what data they're putting out there for people to then use to further these scams.

 

Ashley Podhradsky:

You know, that's the uniqueness we bring to the table as an educational entity: our core is education. Yes, we can do the applied work, we can do the casework, we can help them solve what they need to solve. But on the reverse side, we also incorporate the human factor along with education. We're seeing so many of these; can't we turn that around and help train people a little bit better? Can't we help educate our citizens in the state that you should not send money to someone you met on Facebook? Those kinds of things. The romance scams that we see in our state, all of those types of scams that impact our citizens, just kind of break your heart. And so, as Dr. Kulm mentioned, we are incorporating those common lessons learned into outreach for AARP and things like that.

 

Arica Kulm:

And I think people see those on the news and think, oh my gosh, how could you ever fall for that? But it happens all the time. I think people just get caught up in it, and sometimes they know it themselves and either can't or won't get out of it, for whatever reason, and it's hard. And the scammers are very good at it; they aren't making all this money scamming people because they're bad at it.

 

Jen Burris:

And do you think that scams are something that every citizen should be educating themselves about?

 

Arica Kulm and Ashley Podhradsky:

Oh, absolutely. Absolutely.

 

Arica Kulm:

Be aware of your own information that you're putting online. You know, we live in a world of social media, but you don't have to put everything out there.

 

Ashley Podhradsky:

You know, I have a seven-year-old daughter, and when I let her play games like Roblox, it's always when I'm right there with her. I tell her, you can play the games, but you can't chat with anyone. Well, she was in the kitchen while I was making dinner, and a chat popped up that said, 'I know where you live.' And Chloe, my daughter, she's like, 'Someone says they know where I live.' And I said, 'Tell them mommy has a VPN, and that's not true.'

Everyone:

Laughter.

 

Ashley Podhradsky:

But you know, it goes to show that even at that young an age, parents might think, hey, this is a game, it's harmless. Well, it's not, because these games aren't the rudimentary standalone Nintendo devices from when I grew up; they have heavily embedded communication systems. They might look like a game, but your kid can be chatting with someone you know nothing about; you're inviting them into your world, and they can say anything. So putting those safeguards on your kids' devices is important too.

 

Jen Burris:

Something that some parents might not be aware of?

 

Ashley Podhradsky:

Yeah, you can get Proton VPN for free. All it does is take your connection and obfuscate it, so it takes those hops along the way and people can't see who you are or where you're at. It takes away the geolocation ability from the IP and protects you with an extra step.

 

Arica Kulm:

And even the paid versions are, what, $2 a month or something?

 

Ashley Podhradsky:

I pay $10 for multiple devices.

 

Jen Burris:

So just another safety measure?

 

Ashley Podhradsky:

Absolutely.

 

Jen Burris:

Okay, well, anything else that you guys want to…

 

Ashley Podhradsky:

You know, when it comes to protecting ourselves, our businesses, and our organizations: if you go to the Consumer Protection website or call them at 1-800-300-1986, they can help anyone who has been scammed or has been part of a situation like this and try to get some resolution for you. They also have tips on their website that they change quite often. And if you go out to the DigForCE website on the DSU site, we have different tips for social media platforms. So if you want to lock down your Facebook or your LinkedIn or your Twitter, how do you do that? What steps do you take to make sure your account is private and people can't see information you don't want them to see?

Jen Burris:

That's definitely a good resource, and I'm glad you shared that with us. Well, I don't want to keep you busy ladies all day, but I want to thank you both, Ashley for cohosting and Arica for being our guest, and our sound designer, Spencer Raap. Thank you for listening, and make sure to subscribe to our podcast, Cyberology.

DSU spot:

Technology's changing everything fast, faster than you can say Bitcoin or 3D-printed meatloaf. But you know that. What you might not know is that a school right here at home is leading the future of cyber: Dakota State University. With cyber, we're solving real problems and transforming education. We're redefining possibilities for the entire state and beyond. Get to know Dakota State. The future is here. See how we're breaking through at DSU.Edu.

Cyberology is a monthly podcast about all things cyber and technology. Each episode will feature conversations with innovative, awe-inspiring, and expert members of the Trojan family.

As a cyber powerhouse on the prairie, Dakota State strives to provide learning that integrates technology and innovation to develop graduates ready to contribute to local, national, and global prosperity.

Cyberology will help showcase the amazing talent at DSU.

You can find our podcast on a variety of apps like Apple Podcasts and Spotify.