Karl Kosko
Associate Professor, Kent State University
https://www.kent.edu/ehhs/tlcs/profile/dr-karl-w-kosko

Richard Ferdig
Summit Professor of Learning Technologies, Kent State University
http://www.ferdig.com

Design and Implementation of Immersive Representations of Practice
https://xr.kent.edu/
Public Discussion

    Zenon Borys

    Higher Ed Faculty
    May 10, 2022 | 01:51 p.m.

    This is fantastic!  Really interesting and compelling.  I know it may be a ways off, but I can also see potential uses for coaching interactions.  Oftentimes when I'm coaching a teacher, we have rich discussions about what we noticed, because we noticed different details.  And this makes those details more accessible.  On a more logistical level, I find myself wondering about the audio.  Does moving the view angle also change/enhance certain sound directions?  I'm imagining instances where it would be great to be able to focus in on a small group discussion but have always had issues filtering out other background sounds.  Great project!

    Karl Kosko

    Lead Presenter
    Associate Professor
    May 10, 2022 | 02:02 p.m.

    Great thoughts and questions! 

    • For audio, we have begun using a separate microphone to record spatial, or ambisonic, audio (the Zoom H3-VR is the model we use). Using Adobe Premiere, we add in the audio, which allows it to be spatial. Our earlier 360 videos do not have this, but our newer recordings do. If you'd like to hear a comparison with and without the spatial audio, go to this page where we have examples side by side. You will need to wear earbuds or headphones (or you may not be able to hear the differences in sound as well). 
    • I know of a few folks who have used 360 cameras with preservice teachers in coaching-type situations. We did a special issue on Extended Reality in Teacher Ed (in JTATE), and Weston & Amador (2021) used 360 video in a similar way (https://www.learntechlib.org/primary/p/219535/p...). Buchbinder et al. (2021 - same issue) recorded preservice teachers working with small groups of students, and this may give some ideas as well. 
    Lorna Quandt

    Facilitator
    Asst. Professor, Educational Neuroscience
    May 11, 2022 | 08:38 a.m.

    Hi team! I really enjoyed this video and I'm intrigued by the idea of using 360 video to support teachers' observational behaviors. I would like to know more about what patterns of noticing you see in new preservice teachers compared to more experienced expert teachers. What are some of the differences you see in the data, and how do they connect to real classroom experiences and interactions? For the eye-tracking study, do you pre-define ROIs and then calculate time spent in each ROI, or is your approach more observational? 

    Karl Kosko

    Lead Presenter
    Associate Professor
    May 11, 2022 | 09:00 a.m.

    Hi Lorna,

    We've looked at preservice teachers at a few different points in their program and have observed a few patterns (some informal, some published or in review). Much of this supports the prior scholarship on teacher noticing and eye-tracking with video generally. First, the less experience someone has with actual students, the more they tend to look around (almost as if they are in awe of being in a classroom). Oddly, many of these individuals report a higher sense of presence than others with more experience (not that the latter report low presence, but the former report a sense of 'hyper-presence'). As folks gain more experience, they tend to slow down their head (or field of view) movements and begin focusing more and more on teachers and students, before transitioning to students in particular. One interesting observation we made (just by looking at the x- and y-coordinates of the gaze data) is that experienced teachers look lower with their eyes than novices: they look more at the children's work, whereas novices who have embraced being more student-centered look at the children's faces more than at the tables where they are working, writing, counting, etc.

    For the eye-tracking, we are working to combine observational analysis with a form of AOI (areas of interest, or regions of interest if you prefer) that uses machine learning. Right now we can distinguish between teachers and students, but we are fine-tuning the AI to differentiate between specific students. This is also important down the road as we hope to use this with multi-perspective 360 video (so we can see if viewers are looking at the same or different students from different positions in the classroom). Our plan is to use a few different statistics with the student gaze distributions (including the unalikeability coefficient - we find it to be slightly better than Gini, Hoover, or Theil). For using eye-tracking in an observational way, we are in the midst of qualitative analysis of those videos and hope to provide a better description of what the statistics tell us (based on the qual). 
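    For readers unfamiliar with the unalikeability coefficient mentioned above, here is a minimal sketch of one common formulation for categorical data (u = 1 - sum of squared category proportions, i.e., the chance two randomly drawn observations differ). The gaze labels below are hypothetical, not from the project's data, and the actual pipeline is not described here:

    ```python
    from collections import Counter

    def unalikeability(labels):
        """Coefficient of unalikeability for a categorical sequence.

        u = 1 - sum(p_i^2), where p_i is the proportion of category i.
        0 means every observation is the same category (all fixations on
        one student); values near 1 mean gaze is spread evenly across many.
        """
        n = len(labels)
        if n == 0:
            raise ValueError("empty sequence")
        counts = Counter(labels)
        return 1.0 - sum((c / n) ** 2 for c in counts.values())

    # Hypothetical per-fixation gaze targets (which student each fixation hit)
    gaze = ["student_A"] * 8 + ["student_B"] * 1 + ["student_C"] * 1
    print(unalikeability(gaze))  # concentrated gaze -> low unalikeability (~0.34)
    ```

    Because it depends only on the category proportions, this statistic needs no ordering of students, which is one reason it suits "which student was looked at" distributions.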

    I will note that we have used pre-defined AOIs when we analyze 360 viewings with the field of view only (no eye-tracking). For example, our platform gathers this data when people watch on their laptops at home. It's not as precise as eye-tracking, but it gives a very nice approximation (particularly considering that most of a person's eye gaze tends to fall within a region near the center of their field of view). 
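    The FOV-as-proxy idea above can be sketched as a simple angular test: count the viewing samples in which a pre-defined AOI direction falls within a central window of the viewer's yaw/pitch. Everything here (the 15-degree half-window, the sample data, the AOI coordinates) is a hypothetical illustration, not the platform's actual method:

    ```python
    def angular_diff(a, b):
        """Smallest signed difference between two angles in degrees (handles wrap-around)."""
        return (a - b + 180.0) % 360.0 - 180.0

    def in_central_fov(view_yaw, view_pitch, aoi_yaw, aoi_pitch, half_window_deg=15.0):
        """True if the AOI direction lies within a central window of the field of view."""
        return (abs(angular_diff(aoi_yaw, view_yaw)) <= half_window_deg
                and abs(aoi_pitch - view_pitch) <= half_window_deg)

    # Hypothetical FOV samples logged by a 360 player: (timestamp_s, yaw_deg, pitch_deg)
    samples = [(0.0, 10, -5), (0.5, 12, -4), (1.0, 95, 0), (1.5, 11, -6)]
    aoi = {"yaw": 8.0, "pitch": -5.0}  # e.g., a student seated to the viewer's left

    dwell = sum(1 for _, y, p in samples
                if in_central_fov(y, p, aoi["yaw"], aoi["pitch"]))
    print(dwell / len(samples))  # fraction of samples with the AOI near FOV center
    ```

    Dividing dwell counts by total samples gives a per-AOI dwell proportion, which is the kind of approximation that works because gaze usually stays near the FOV center.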

    Dan Roy

    Facilitator
    Research Scientist, Interest-based Learning Mentor, Learning Game Designer
    May 12, 2022 | 01:28 a.m.

    It sounds like you have both a headset and flat screen interface for viewing the 360 videos. Have you noticed any differences in looking behavior across those hardware differences? I wonder if you'd see more looking around while wearing a headset due to the movement being more fluid and natural.

    One issue I've seen with 360 videos in general is the IPD (interpupillary distance) of the camera recording the videos doesn't quite match that of the viewers, making the world feel distorted (too big, too small). Have you noticed that issue in your testing or found a way to manage it?

    I wonder if the increased presence you noticed with the novice teachers may be in part due to age or comfort with technology. I wrote a report that summarized some of the research on this called Learning Across Realities (https://education.mit.edu/publications/learning...).

    Have you considered approaches that are more interactive than 360 video, or would that introduce too many variables into your study?

    Karl Kosko

    Lead Presenter
    Associate Professor
    May 12, 2022 | 08:01 a.m.

    Great questions! 

    We tend to notice more movement with the flat screen than with the headsets (see Kosko et al., 2021 in the Journal of Teacher Education). We have one paper we are finishing that makes a similar comparison and confirms this a bit. Part of it is that there is a greater sense of immersion with the headsets than with a flat screen, and we do not tend to look around as haphazardly when we are more immersed. The big question for us has been how much the benefit of the headset matters (regarding cost, etc.). No clear answer yet, but technology advances may make the question moot.

    We haven't noticed anything with IPD, but none of our 360 videos are 3D. I wonder if that is an aspect to consider. 

    Regarding presence, we had wondered about this but there aren't any observable differences across all the data we have collected (we did gauge perceived tech savviness, prior use with VR, etc.). Tech savviness sometimes came out as a factor but not consistently across studies. 

    Regarding approaches other than 360 video, yes they are considered but some of the costs need to come down to make them a viable alternative for us. There are scholars doing great work with VR and digital agents, and we are dabbling with holograms, but nothing significant to report just yet. 

    Marcelo Worsley

    Facilitator
    Assistant Professor
    May 12, 2022 | 07:02 a.m.

    This project sounds really fascinating. The platform for teacher learning reminds me of the DIVER project from many years ago, but now with the option for much more immersive viewing experiences. I am curious to know more about the features of Praxi and the types of annotations and noticing practices that it supports.

    On a completely different strand of thinking, I wondered about also capturing teacher affect through electro-dermal activity (using Empatica E4) while teachers are viewing classroom video. This could give teachers a window into their own affective responses when they see different things in the classroom.

    Lastly, really enjoyed seeing the eye tracking data, and the skeletal tracking overlays. How is skeletal tracking used in this project?

    Karl Kosko

    Lead Presenter
    Associate Professor
    May 12, 2022 | 08:07 a.m.

    Lots of great thoughts here that overlap with some of our own wonderings.

    We are setting up a separate server for Praxi so we can offer access to more people and are going to submit a Large Scale version of our current grant to hopefully widen access further. 

    In terms of annotations, we began to steer towards machine learning as a main feature because of how we have used 360 in our courses, but eventually hope to incorporate annotations similar to what you may see with ThingLink and similar platforms. 

    We had thought about electro-dermal activity and some other physiological data. We are in the process of analyzing some physiological data, but nothing currently with skin (beyond electro-dermal, I know changes in skin temperature have been taken into account in some scholarship).

    Darryl Yong

    Higher Ed Faculty
    May 12, 2022 | 05:24 p.m.

    Thanks for your interesting work.  Have you observed differences between the ways that beginning and experienced teachers notice?  What about teachers who have demonstrated success with teaching marginalized populations of students?  Do they notice different things?

    Karl Kosko

    Lead Presenter
    Associate Professor
    May 13, 2022 | 11:39 a.m.

    We are working on data from an eye-tracking study now that shows differences between experienced and novice teachers. We also see differences between preservice teachers with and without certain field experiences (essentially upper elementary vs. not, as that relates to how they look at children's fraction reasoning).

    As far as differences between folks who have a background teaching marginalized populations - we have not looked at this, but it would be an excellent study. I could see how the technology (and Praxi as a research tool) could be used to examine differences in where people turn the perspective, how they frame students, etc. Although we did not conduct a study to look at such factors, we did find evidence of gendered noticing (unexpectedly, as it wasn't a focus of the study) in our Kosko et al. (2022) paper in Computers & Education.

  • Zohreh Shaghaghian

    May 15, 2022 | 02:22 a.m.

    That's very interesting! I enjoyed the video presentation.

    I was wondering if you have assessed or explored the correlation between students' eye-tracking and behavior data and their learning gains in the classroom. 

    Karl Kosko

    Lead Presenter
    Associate Professor
    May 16, 2022 | 08:55 a.m.

    Hi Zohreh,

    That's an interesting question. Currently we are focusing on teachers' professional noticing, which is a skillset involving being situationally aware to the point of attending to certain events and actions in-the-moment, interpreting what is occurring with professional knowledge, and deciding how to respond. This is also something we work on in our teacher ed program. We do see an association between how well someone engages in noticing and their physical actions when viewing 360 video. We do not have longitudinal data with eye-tracking though. 

    The key association we see across our work with either field of view (FOV) or eye-tracking is that teachers who discuss students' mathematical reasoning will focus on students more (for longer durations) than teachers who discuss students' procedures (but not their conceptual reasoning). Those who talk about more generic pedagogy are often the messiest group in terms of how they go about viewing a 360 video (at least in terms of FOV - we are still analyzing eye-tracking data). 

    With eye-tracking studies, there are typically very small samples because of the difficulty in collecting the data (and the expense of the equipment). We have been using the Pico Neo Eye headsets which, although expensive, are MUCH cheaper than standard eye-tracking equipment. Our platform, Gaze XR, allows for multiple headsets to be used at once (our data collection in April ran sessions of 4-7 headsets at a time). Our next challenge is getting this logistically implemented into a methods course. 

    Kelly Costner

    Higher Ed Faculty
    May 17, 2022 | 02:12 p.m.

    Karl--So great to see you involved in this truly innovative approach to learning about how teachers learn. And thanks for making your alma mater proud!

    I think I'm understanding through your team's video and the discussion here that Praxi was developed and is currently being used as a tool to investigate, essentially, how teachers learn to teach through what might be termed "metainteractions"--interacting with video in which they see themselves interacting with students/content/pedagogy.

    What I'm wondering now (and don't think you've mentioned) is whether this might eventually be a tool for teacher candidate or inservice teacher use. In other words, what promise does this tool have for direct use by teachers?

    Karl Kosko

    Lead Presenter
    Associate Professor
    May 17, 2022 | 04:12 p.m.

    Hi Kelly! Great question. Currently we are focusing on 360 video cases only (and not self-recorded videos). This is primarily because we are being conservative about server and storage capacity. Eventually we hope to support including recordings of one's own practice. With that said, I know of several folks who have used 360 video in this way. Kaltura, as an example, supports 360 video (single-perspective), and ThingLink does as well (the latter having better annotation tools for 360). Of course, neither of these supports multi-perspective 360, nor do they report data on "where" someone looked. 

    So to get at the heart of your last question - eventually we hope this tool can be used for both 360 video cases and 360 videos of one's own teaching. We would like the current lineup of tools to be combined with others to support such discussions. However, some of this will take time (primarily for capacity). We are definitely hoping to work on this in a large-scale version of this project though!
