It is always important when you create something to tell others not only what you have created, but also how you created it. My job in WA is all about sharing how I do things, so that I can upskill the VET sector of WA.
With the recent ETL523 assignment 1, not only did we create a group wiki, but each student was also required to create an individual digital artefact using a Web 2.0 technology. When I read this I knew that I wanted to create something that would not only work for my assignment but could also be used, in part or in whole, to support a training session that I would run later in the year.
I brainstormed my initial ideas in hard copy, which is fairly common practice for me when I commence a design project, as I am a visual person. This storyboard gave me a solid starting point for what I wanted to include in the group wiki and what I wanted to include in my digital artefact. After reviewing my ideas I knew that I wanted to create a video that would be embedded into a Nearpod activity. This meant combining two Web 2.0 tools and a huge amount of filming and editing work to make it all happen. The reason behind this choice was simple: my philosophy when creating something that I will later use for training is that whatever I produce must be kept simple, without too much high tech, so that a VET lecturer could do it too.
Once I knew what I wanted to do, I roughed out a very brief high-level storyboard that showed the shot list, the still images and screen grabs I needed for the video, rough ideas for the script, and the outline for the Nearpod content and how it would all look together. I then created individual storyboards for each of the different sections of the digital artefact so that I knew I would be able to work to a plan. This was critical for me, as my personal life at the time was taken up with a death in the family.
I had to be clear about which simulation software I wanted to showcase and how, as another team member was doing a digital artefact on a similar topic. I also had to create a filming schedule so that I could coordinate various people to 'appear' in the footage, as well as organise access to the various businesses and schools that were using the simulations. I filmed the video footage simply on my mobile phone, incorporating many different shots and angles to give me good editable footage I could cut together. I opted not to use an external mic to capture the sound, as I decided early on that I would use voice-over only, with the footage supporting the audio script. This decision meant that I would save time by not having to edit audio from the footage, and I could ensure good quality audio through the entire video.
For the voice audio track and the film editing I used Camtasia Studio. This is a low-end video editing tool, but many VET organisations have access to it rather than Adobe Premiere (which I could have used). Another alternative was Windows Movie Maker, which was installed on my laptop, but the edit would not have been quite as easy.
I recorded the audio script and saved out 25 audio tracks, which I would later import into my Camtasia Studio edit suite to bring the final video together. I did start recording the voice audio in Audacity; however, my work computer no longer had the correct codec to save in a cross-platform file, and I could not get this laptop back to our ICT department for them to load it for me, so Camtasia was my fall-back position.
Once the voice audio tracks and all the film footage were completed, and the various screen grabs and still shots were saved to my computer, I commenced the film edit. As I knew exactly which shots went with which voice-over, it was a fairly easy edit to complete, probably only taking roughly 23 hours to reach the final production rendering stage.
I uploaded the final version to YouTube, which I had to set to 'Public'; otherwise Nearpod would not be able to locate it when I went to link it. What I have not yet completed, and will go back and finish next week because it is so important, is uploading a transcript for accessibility. This means that I have to create an audio transcript document (usually I do this in Notepad).
It does mean that I have to sit with YouTube open and set accurate time codes, but it is very important. YouTube now has a feature where you can do some of this in the system, which I will play around with when I am doing the audio transcript. I do have a written script, so this should be a relatively painless, if time-consuming, process.
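For anyone who has not built a caption file with time codes before, a minimal sketch of what one looks like may help. The SubRip (.srt) format, one of the caption formats YouTube accepts for upload, pairs numbered cues with start and end time codes followed by the caption text. The lines below are an invented example, not taken from my actual script:

```
1
00:00:00,000 --> 00:00:04,500
Welcome to this short look at simulation software in VET training.

2
00:00:04,500 --> 00:00:09,000
In this video we visit businesses and schools already using simulations.
```

Each cue is separated by a blank line, and the time codes use hours, minutes, seconds and milliseconds, which is why setting them accurately against the video takes time.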
After the video was complete and uploaded, I could then set about constructing my Nearpod content and activities. Nearpod is brilliant if you have not used it before: so easy and quick. It allows you to upload videos, sounds, images and presentations. I created my content in Microsoft PowerPoint and uploaded it, which was simple and easy. It then meant that I could play about with the content and reorganise its order around the internal Nearpod features of quiz activities and the YouTube video.
If you are interested in screen grabs of any of this process, I have created a Sway that showcases them, which can be found here.
If you have any further questions about the digital artefact, please do not hesitate to ask!
Nearpod activity access https://s.nearpod.com/j/CVEJZ
Simulation isn’t futile YouTube link.