
Friday, July 27, 2012

You clicked on a link, so you read this, right? (Or, some thoughts on learning analytics.)

Recently, I sat through a series of webinars about learning analytics sponsored by EDUCAUSE, the Bill and Melinda Gates Foundation, and IBM. At my institution there is some interest in learning analytics, but administrators are still feeling this out and trying to determine what its purpose will be.

My initial observations concern how LA, which appears to have the purpose of modeling an individual's performance and then taking some action based on that model, handles complex statistical procedures. Not the most exciting topic for a blog entry, so I'll be brief.

1. What is the process for model selection in an LA model that encompasses a variety of variables? When researchers develop a prediction model, we use a procedure to select the variables that become part of the model. This can be an automated procedure in a software package (such as stepwise model selection) or a deliberate procedure of testing one variable at a time, by hand, to determine the first variable in the model, the second, and so on until we have the final model. In LA, how does this happen? Is one model applied to the data regardless of the quality of fit? Is the process transparent?
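To make the model-selection question concrete, here is a minimal sketch of forward stepwise selection in Python--my own illustration, not any LA vendor's procedure. At each step it adds the predictor that most reduces the residual sum of squares:

```python
import numpy as np

def forward_select(X, y, max_vars=None):
    """Greedy forward selection: at each step, add the predictor
    that most reduces the residual sum of squares (RSS)."""
    n, p = X.shape
    chosen, remaining = [], list(range(p))
    max_vars = max_vars or p
    while remaining and len(chosen) < max_vars:
        best_rss, best_j = None, None
        for j in remaining:
            cols = chosen + [j]
            # intercept column plus the candidate set of predictors
            A = np.column_stack([np.ones(n), X[:, cols]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef) ** 2)
            if best_rss is None or rss < best_rss:
                best_rss, best_j = rss, j
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen

# toy data: the outcome depends on columns 0 and 2 only
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = 2 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=0.1, size=100)
print(forward_select(X, y, max_vars=2))  # picks columns 0 and 2
```

Even this toy version makes the transparency question visible: the stopping rule (`max_vars` here) and the fit criterion are choices someone has to make and disclose.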

2. We must be careful to avoid saying that prediction models predict an individual's exact performance. There is always uncertainty. Statistics--especially inferential statistics--is a science that embraces error and uncertainty. A prediction model always has some degree of error, and even a good model predicts only the average response of some outcome variable. If I say that my model indicates a student should receive a GPA of 2.9, I'm assuming that every such student will have the average outcome; in fact, some will earn higher GPAs and some lower. If we do LA, we need to make a commitment to helping our colleagues understand the nature of this error.
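A quick simulation makes the point about the average response. Assuming, purely hypothetically, a predicted mean GPA of 2.9 with a residual standard deviation of 0.4, individual outcomes scatter widely around the single number the model reports:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical numbers for illustration: the model's point prediction
# is a conditional mean, and individuals vary around it.
predicted_mean = 2.9
residual_sd = 0.4
students = predicted_mean + rng.normal(scale=residual_sd, size=10_000)

print(students.mean())  # close to 2.9, the model's prediction
print(students.std())   # close to 0.4, the spread it does NOT report
# Roughly 95% of individuals fall within mean +/- 2 sd:
lo, hi = predicted_mean - 2 * residual_sd, predicted_mean + 2 * residual_sd
print(np.mean((students > lo) & (students < hi)))
```

The interval from about 2.1 to 3.7 is the honest statement; "2.9" alone is not.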

3. In LA based on data from automated systems like LMSs, can we be sure of the data we're analyzing? It is tempting to assume that a page view in a content system means that students read that page. Perhaps they did. As researchers, we may need to develop methods that let us learn about the quality of the interaction in a system like an LMS--for example, an instrument that asks students more about how they interacted with an electronic resource. Some systems are catching up by enabling more interactivity (such as e-textbooks that allow highlighting, note-taking, and sharing questions with instructors), but we should be skeptical of each variable we consider in an LA procedure.

Friday, June 29, 2012

E-text Unconference (Day 1)

I graduated from the University of Illinois, where this unconference is being held, about 17 years ago. I jogged through campus this morning and saw that most of the businesses that were here then are gone--except for Espresso Royale, the Lando hair salon, and the textbook stores. I even worked at one of those textbook stores, which would clear amazing amounts of money during textbook rush.

So it's interesting to be at a conference at which there are people who are trying to develop e-textbook platforms and others who want to know where the business of textbooks is headed.

During day 1 of the conference, there was a short presentation by a speaker (a consultant) who talked about the business model for publishers. He argued that the more personalized content e-textbooks contain, the more business value publishers will see--though it was not clear why. Would publishers want to control the kinds of annotations that students add to e-textbooks, or would they use that data to market products? (The speaker referred to this as "differentiated content.") He acknowledged research (from Daytona State College) showing that students were not enthusiastic about using e-textbooks again after using them for a semester. (One finding of this research was that students preferred renting paper textbooks.)

We voted on seven topics to discuss in small groups. The winners included student receptivity, technology platforms, and large e-text initiatives--I forgot the other four. Accessibility was not among them, and one person in the crowd spoke up to ask where it could be discussed.

At the student receptivity discussion, we talked about a variety of ways that schools have tried to measure students' acceptance (or avoidance) of e-textbooks. There are many librarians here--one said it is logical for them to work with e-textbooks because libraries have existing relationships with publishers.

One large school conducted a big pilot with an e-textbook provider and found that the majority of students did not want to use an e-textbook again. He also said that most of the students (and instructors) did not take advantage of the annotation tools in the e-reader, and that there was no relationship between e-text usage and students' grades. They are running the pilot again in Fall 2012 because they want to try the system now that it has been improved.

There was lots of discussion about how students like paper textbooks--sharing them with classmates, selling them back. One finding was that students who did not like the e-textbook felt it was too distracting to read on their laptop, where they were tempted to Facebook, surf the web, chat, and so on. There is concern that one e-reader provider would like institutions to use its own assessment instruments; some schools have run into trouble trying to ask their own questions.

So the people involved with companies that develop e-readers have had mixed results... Some very enthusiastic people are here working on their own e-textbook initiative at UIUC. Self-publishing of textbooks is big at UIUC: at the table where they talked about initiatives to publish e-textbooks locally, it was all UIUC people--and they did not all know each other. Some UIUC faculty who have written their own e-textbooks are here to discuss their processes. One chemistry professor has made an online app for chemical equations, which is integrated with his e-textbook and is working well for him. UIUC seems exceptional in this area of self-publishing.

The UIUC undergraduate writing program has developed an e-textbook that they sell to their students. It is an HTML5 e-textbook with annotation tools, and it is accessible. They wanted to build an accessible environment because the companies that make e-readers have often not paid enough attention to accessibility, just putting "pictures of textbooks" in their systems that do not meet accessibility standards. To purchase the writing e-textbook (used by all students in undergraduate rhetoric), students go to the textbook store and buy a scratch-off card with a code that gives them access. They must buy it because the entire course "happens" in the e-textbook--my understanding is that without it, students cannot complete assignments. They need their own copy. They can download an epub version to use offline, but it won't include their annotations. Essentially, there is one e-textbook, and students' annotations live in a database; each time they log on, their annotations are pulled down to the e-text they are viewing. (When the developers were building this, they asked a professor if she would want to receive an e-mail each time someone created a highlight. She said no way.)
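The architecture they described--one shared text, per-user annotations kept server-side and merged into whatever copy a student is viewing--can be sketched roughly like this (all names here are my own invention, not the UIUC system's API):

```python
from collections import defaultdict

class AnnotationStore:
    """Toy model of per-user annotations kept apart from the shared text."""

    def __init__(self):
        # user -> list of (paragraph_id, note); the e-text itself is shared
        self._by_user = defaultdict(list)

    def add(self, user, paragraph_id, note):
        self._by_user[user].append((paragraph_id, note))

    def annotations_for(self, user):
        """What gets 'pulled down' onto the copy this user is viewing."""
        return list(self._by_user[user])

store = AnnotationStore()
store.add("alice", "ch1-p3", "key definition")
store.add("bob", "ch1-p3", "confusing -- ask in class")
print(store.annotations_for("alice"))  # only alice's notes come back
```

The design choice worth noticing is the separation: because annotations never modify the shared text, the program can update the e-textbook for everyone without touching anyone's notes.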

There isn't much research yet about this initiative. The head of the undergraduate writing program won a national award for this innovation in instruction, and they have collected a set of surveys that they are still analyzing.

More thoughts to come...

Monday, April 09, 2012

Taking Time to Reflect on Learning in MITx

Last week, I decided to return to practice problems in week 1 and complete some of the problems that I did not have time to finish. (Each week I do as many practice problems as I can until working on the homework is all I have time for.) I was happy to see that I could answer these problems (which were about node analysis and voltage divider circuits, two relatively basic concepts) without too much difficulty. I am learning, I thought to myself. I need to spend this time going over what I have learned and making the abilities my own, so to speak. In MITx I am always pushing ahead as diligently as I can and not spending time reflecting on my learning.
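For readers who haven't met it, the voltage divider rule from those practice problems is simple enough to check in a few lines (a generic sketch, not MITx code): two resistors in series split the input voltage in proportion to their resistances.

```python
def voltage_divider(v_in, r1, r2):
    """Output voltage across r2 for two resistors in series:
    v_out = v_in * r2 / (r1 + r2)."""
    return v_in * r2 / (r1 + r2)

# 9 V across 1k and 2k in series: the 2k resistor gets two thirds
print(voltage_divider(9.0, 1000, 2000))  # 6.0
```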

The designers have been changing small things in the course. It's interesting to see the changes, such as bolding the current line of the transcript as it is spoken in the video. The word "SPEED" now appears next to the buttons for changing the speed of the videos. Other students make suggestions for changing the interface or adding one for mobile users. Not me--I'm usually coming from behind…

Which reminds me: three cheers for Wolfram Alpha! When I needed to compute derivatives for the incremental analysis, I only needed to type the equation into the computational engine. (My calculator is not good enough for this.) Students were suggesting a variety of tools--such as a math program you can download from Microsoft.
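When a symbolic engine isn't handy, a central-difference approximation is enough to check a derivative at a single operating point, which is all the incremental (small-signal) analysis needs. This is my own sketch with illustrative values, not anything from the course:

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# sanity check against a derivative we know: d/dx x^2 = 2x
print(derivative(lambda x: x * x, 3.0))  # ~6.0

# the kind of slope incremental analysis wants: an exponential
# i-v curve linearized at an operating point (made-up constants)
def i_of_v(v):
    return 1e-12 * (math.exp(v / 0.025) - 1.0)

g_small_signal = derivative(i_of_v, 0.6)  # conductance at v = 0.6
```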

Still, you know you're in deeper waters when the calculator that was sufficient for every other class you've taken can't solve the problem. I'm uneasy that I'm avoiding brushing up on my calculus--who am I kidding?--whole new coats of math are needed here! I got a decent grade on the homework but left the last problem untouched because I could only get through half of the lectures for the topic. I'm starting to feel the effects of not having the prerequisites. I wonder if MITx will have to enforce a certain level of prior knowledge before letting people take a course. Surely I'm not the only one with these problems...but for the moment this may not be a main concern.

The first exam was announced: it will be in the last week of April, and we have 24 hours to complete it once we begin. I've already marked the calendar and told my wife--who, by the way, did a bunch of my chores to give me time to complete MITx work.

Monday, April 02, 2012

Completing Labs in MITx: Thank Heaven for Trial and Error

Well, I solved Lab 3 with help from my wife, a hint from the discussion forum, and some late-night persistence. In this problem we needed to build a series of digital gates connected to three voltage sources. The professor says an Intel chip has a billion gates; our lab involved three, so we start small. Really small.
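The lab's idea of a gate network can be mimicked in a few lines of ordinary code--here a toy NAND-based network with three inputs, my own illustration and nothing to do with the MITx sandbox:

```python
def nand(a, b):
    """NAND of two bits."""
    return int(not (a and b))

def network(a, b, c):
    """Two NANDs feeding an AND with the third input: a AND b AND c."""
    ab = nand(nand(a, b), nand(a, b))  # NAND of a NAND recovers a AND b
    return int(ab and c)

# enumerate the truth table for the three inputs
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", network(a, b, c))
```

The payoff of writing it out is the truth table: only the 1,1,1 row produces a 1, which is the kind of specification the lab's checker tests a circuit against.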

In Circuits and Electronics, we must complete a weekly lab, an assignment in which we build a circuit that satisfies certain parameters. Part of the assignment involves finding the appropriate configuration of resistors, wires, and other elements; another part involves getting the right values for those elements. It’s a rather safe way to do a lab in basic electrical engineering; I know the only shock I’ll receive is how many times I get the answer wrong.

A lot of my problems with this lab were due to my own ignorance about how to use the circuit sandbox. I did not know that you could cross wires in a diagram, which, it turned out, was necessary for the solution. Crossing wires is acceptable, of course, because in an actual circuit a wire can go over or under another wire. In my crude diagram, you can see that the red voltage source (the circle) needs to be connected to the red gate (the diamond), so the wire must go over the other gate. I finally found a discussion posting that mentioned you can cross wires--but another person said this is covered in the directions for using the sandbox, so I’m to blame for not reading them all. One weakness I have is that I spend more time trying to understand equations than trying to understand the common-sense rules for putting these things together.

But a funny thing is that I was able to get the correct answer by using trial and error to determine the resistance values for these gates. This seems rather unscientific to me, but one hour before the deadline makes me happy for any advantage I can get. And yet, wouldn’t a situation in an actual lab be much different? At one point, I set the resistance value of a MOSFET so high that I shorted part of the circuit. In the sandbox, who cares, right? But surely this kind of guessing could be dangerous in a real circuit.

One thing I don’t like about the circuit sandbox is that “backspace” deletes elements after I’ve selected them. If I hit the button one time too many, the browser goes back a page and I lose my circuit.

Wednesday, March 28, 2012

Where's the show-your-work-for-partial-credit button in MITx?

I stayed up late on Sunday trying to complete the second lab, which--how do I explain it?--involved building a circuit with a set of resistors that produced a certain maximum and minimum output voltage. During the day, I had posted a question in the forum about a concept involved in the problem (superposition), and some nice people directed me to the textbook to read more about it.

But I also came across an answer that included a link to another posting that, I assumed, discussed the concept in more depth. Well, it included a possible answer to the lab. This is another difficulty I have with MITx: avoiding posts that include explicit answers while I am looking for guidance on concepts. I can't unsee those answers, unfortunately.

The Honor Code forbids the posting of answers to graded assignments, but people still post these things. Sometimes people post equations that include every step for solving a problem and only leave out the answer. Really, as a participant in this class, I must keep my own honor code and be judicious when I consult the discussion forum for help with concepts. I phrase questions to make it clear that I just want help understanding a concept.

So my dilemma was whether to use the information I saw--which I would not have understood on my own--or go back to trying the lab by myself. After my poor wife stayed up trying to help me, I decided to give up and take the 0. They drop the two lowest lab scores anyway.

Still, this got me thinking about how MITx does not allow partial credit. I had tried several different answers and was able to get the correct minimum output voltage. Shouldn't that count for something? To the system, the circuit is simply either correct or incorrect. Perhaps this is a feature of large, free online courses: there aren't teachers who can scan through thousands of answers to decide whether one deserves partial credit. But I can't help wondering why it is helpful for students to get a perfect score on a lab or no points at all. Could the labs be broken into several steps, with students receiving credit for completing part of the system correctly?

Today, I looked up the answer to the lab and then posted a question in the discussion forum about the circuit diagram. Within ten minutes, I had received two answers, and I think I understand my mistake now. And this is what I wanted all along.

Friday, March 23, 2012

Week 2 in MITx: Feedback in the System

So homework 2 has proved to be quite a bit more difficult. There is no easy question like there was in the first homework. Now I must apply the Thevenin method to solve a circuit with input and output voltages. Never did I think turning on a light could be so complicated (though it is probably actually a simple circuit). Today my iron broke, and I briefly thought about taking it apart to find the problem. But I'll probably just go to Target for a new one.

What is the best kind of feedback you can receive in a massive online course? Lately I get the red "x" a ton because, there's no denying it, I'm having a harder time in the class. (Reminder: your answers are assessed immediately by the system; a green check means yay!, a red x means grr.) The red X propels me to keep studying… I try to think of it as "x marks the spot you need to study more."

Feedback has been studied extensively--a significant meta-analysis of feedback published in 2007 indicated that cues had the highest effect size, while simple praise had a very small one. Feedback about a correct answer also had a greater effect size than feedback about an incorrect answer--and that is particularly difficult feedback to get in a large online course, in which I sometimes don't fully understand the answers I'm given.

The research says to seek out cues, and I could use more cues in my work in MITx. I seem to be able to find them by consulting the discussion forum at just the right time, with a posting that does not give away the entire answer.

More thoughts about peer tutoring in MITx. I posted my first question (about how to identify parallel resistors) and shortly received several responses. In the system, I was able to choose the correct answer--and I was gratified to see that I could choose more than one correct answer. This system of allowing people to accept answers helps to close discussion threads that might continue even after they are useful to people. Being able to accept more than one answer shows that MITx values multiple perspectives and different ways of explaining a concept.