Well, this is deflating. Here I have been extolling the teaching benefits of various widespread technologies, many of which are not particularly new in the grand scheme of things (blogs are about 10 years old; video games, at least 25). I have also been accused of not giving traditional universities or professors credit for being great innovators.

Many of them are. But these survey results would seem to indicate that the vast majority are not. They use their Learning Management Software (at least the gradebooks and syllabus features) but when it comes to anything else, they don’t bother.

Many folks on Twitter asked “How do we change that?”

Here are some thoughts.

“The first step may be teaching the teachers how to use those tools,” said @Hutchirish on Twitter.

I actually disagree.  I think that the beauty of the current generation of social media is that anyone who’s motivated can pick up the basics of these tools in a couple of minutes. Remember, we’re talking about a highly educated, intelligent group of people here. There are 70 million blogs out there and 120,000 are being created every day. There are 500 million members of Facebook and several million on Twitter, and almost 190 million households will soon have videogame players. How many of these millions of people had to be explicitly instructed in the use of these technologies?

I don’t think professors are avoiding these technologies because they don’t know how. The reality is a little more complicated.

@Chanders writes, “You’re prepping to teach a class you’ve never taught. You’ve a month. What should you do: learn social software or class content?”

I actually think this is a real question, not a rhetorical one; and it’s a much broader problem with university teaching in general. I think it’s far more important to know how to teach than to necessarily know all the content the course is supposed to put across, because teaching is not supposed to be about feeding forward information.

Whereas pre-K-12 teachers, in order to be certified, receive at least nominal instruction in the practice of instruction itself, from what I understand it’s possible to get a PhD or an MFA or even just be a graduate student on the path to a degree, and get stuck in front of classes full of undergraduates with zero instruction in how to teach. Nor, from everything I understand about how the professoriat works, are professors generally subject to any kind of organized professional development on teaching practice. I don’t remember my parents going to weekend workshops on how to conduct writing seminars at Louisiana State University.

Nor generally do they get rewarded directly for how well their students perform or master information (although they may catch flak via student evaluations if their students don’t LIKE them). In fact all of the work they do to prepare for their classes, if they are lucky enough to be tenure track, is considered to be taking away from their real work of publishing and scholarship. (If they are adjuncts, the work they do to prepare for their classes is likely to be done on the train or during downtime on their shifts at their second job!)

Consider that this system doesn’t prepare professors to teach, doesn’t particularly reward them for teaching well, and prefers that they do other things rather than get better at teaching on their own time–and then consider the arrogance of a system that nonetheless regards its teaching as the best there is. Once you get through 12 years of being taught by folks trained and certified in actual pedagogy, only then are you deemed worthy of studying with real scholars–i.e., people who know only the content of their own field. And it is held to be self-evident, by the professors themselves, that whatever they come up with to do in their classrooms–with no oversight or instruction in how to do it–is a far, far better learning experience than some stupid research-based learning software or some dumb award-winning video lecture by the most fascinating and brilliant expert in the world. Do you prefer wandering into a random regional theater and sitting through an original play by the local Corky St. Clair, or Netflixing a five-star film of your choice?

So, the reason most profs don’t use blogs, or wikis, or Google Docs, or videoconferencing or video games or even clickers in their lectures, is probably because they don’t see why they should, they’re not being supported to learn why they should, and they wouldn’t be rewarded for it even if they did. It’s an exceptional minority who will endeavor to innovate and be excellent under those circumstances.

If you want to change that you probably have to change the circumstances.

I came to Vancouver on Sunday to give a speech and wandered into a delightful bookstore called MacLeod’s. It’s a classic–crammed with piles and piles of used books, from fancy leatherbound volumes to mass-market 1970s paperbacks. I picked out three books inside of five minutes, and I knew the longer I stayed, the more I would end up buying. I seek out bookstores wherever I travel, because they just make me happy. I come from a family of writers, and my parents’ house has so many books in it that a contractor once told them the weight was causing a structural problem.

All of which makes it ironic, I guess, that I just wrote a piece for the New York Times arguing, in the words of Judy Baker, “The traditional printed textbook, homogenized, vanilla version, is basically the Hummer of higher education.” It would be preferable, both for teaching and for costs, for professors to draw on and contribute to free and open digital repositories of learning resources. In the more than 40 comments on my piece, the objections to this point of view can be summarized as follows:

1) Who is going to reward the creators of these learning resources?

To the extent that creating them is part of teaching practice, it should be covered by professors’ salaries. If there are more specialized curriculum experts, they can be paid for that work as part of the “unbundling” of teaching functions.

2) Printed books still have advantages over online versions.

2a) There’s still a digital divide and poorer students or those in foreign countries may not have access to high-speed internet or appliances.

The latest Kindle costs $189, which is as much as a single textbook can cost. High-speed Internet access is rapidly becoming a requisite for participation in modern life and a college education is no exception.

2b) I just don’t like reading on a screen.

This is probably the toughest one for me. I love paper books too. But I don’t think students should be forced to spend $1000 a year on them.

Lately the NYT has been alive with stories and commentary about college students cheating using amazing new technological techniques like CTRL-C and CTRL-V.

I came across Goodhart’s Law in my web wanderings several weeks ago and it’s been knocking about in my mind ever since. Basically it states that when you attempt to pick a few easily defined metrics as proxy measures for the success of any plan or policy, you immediately distract or bait people into pursuing the metrics, rather than pursuing the success of the policy itself.  The mythical example is Soviet factories:

“When given targets on the basis of numbers of nails produced many tiny useless nails, when given targets on basis of weight produced a few giant nails.”

This is hard stuff, because it’s human nature to want to distill big complicated goals down into a few easy-to-understand numbers, and it seems efficient from a change-making perspective as well. Yet we can all see the bad outcomes of an overreliance on the numbers: police districts (OK, on The Wire) manipulating murder cases to come out better on COMSTAT; school districts and states lowering standards and encouraging learning-disabled kids to stay home on test days so they look better on No Child Left Behind tests. I also see how it works in my own life: I have a log on my iPod nano of how many times I’ve used the stopwatch to time my regular 2.8-mile run over the bridge. But then I started to turn it on when I go to the gym, or on lazy days when I only run half as far, because it makes the stats (number of times I worked out this month) look better.

In the case of college cheaters, we methodically train students for years to define their worth and their tasks in school extrinsically by grades and test scores (see No Child Left Behind, above). Then we give them boring assignments–test questions that aren’t updated from year to year, and papers that don’t require introspection or individual response. Then we pretend to be shocked when they respond just like Stakhanovites in a Soviet factory, turning out more and more of shoddier and shoddier product.

The answer is simple: we’re measuring the wrong things.

Remember the Woody Allen joke? I cheated on my metaphysics midterm–I looked into the soul of the student sitting next to me.

If professors were looking into students’ souls, and truly asking students to look into their own souls, then cheating might be less of an issue. Would you still turn in a shoddy, cut-and-pasted paper if it wasn’t just between you and your professor–if your work was out there on the web for friends and family members and future employers to see? What if it was a collaborative project where you were responsible for other team members’ grades as well as your own? The interpersonal stakes are certainly raised then. Or what if the topic of the class was one that you chose to study, one that was close to your heart? What if there was real trust and a bond between you and your professor?

I really liked what Alfie Kohn had to say about this on the Room for Debate blog, and I plan to download one of his books.

Here are the links from my presentation:

Speak! The Miseducation of College Students

Tim O’Reilly on Education as an Open System (video)

Tim O’Reilly: Government as a Platform (e-book)

Open Courseware Consortium

iTunes U

Academic Earth

Khan Academy

Flat World Knowledge

Open Learning Initiative

“Online Programs Push for More Interaction”, Wall Street Journal, June 30, 2010

National Center for Academic Transformation

Excelsior College

I thank the folks who asked me for these resources and who told me they got something valuable out of the presentation. I also thank John Fontaine, Jonathon Lunardi and the rest of the team at Blackboard for inviting me and treating me very, very nicely. Still, I have to say that I was kinda disappointed by the first question that I got, the gist of which was “Why should we, the technologists, be charged with making change in higher education? Shouldn’t you be talking to the students & the faculty?”

Well, yes, I do talk to students and faculty (visiting at least six campuses this September/October), and parents and college counselors and anyone else who will listen, but technologists have special tools and capacities to make change, so it is both your responsibility and your opportunity to do so. That was the thrust of my entire presentation, so it was disappointing to note that it didn’t come across to everyone, as evinced by Tweets like this one and this one.

I’ll quote Josh Kim from Inside Higher Ed on this point:

“Technology will be one of the essential factors if we hope to bend the educational cost curve…The leadership within our institutions, the presidents and provosts and deans and chairs etc., should be asking the CIOs and the academic technology directors about how we can increase productivity. And people in educational technology leadership positions should be making this our number one priority. We all need to participate and succeed in bending the educational cost curve.”

I’ll be the first to admit that the speech I gave yesterday was not the best speech I’ve ever given, and it’s unfair to characterize an audience by a few naysayers. Still, I have to contrast this attitude with the glowing, excited reception that took place at Sakai a few weeks ago. I have to wonder whether the most important reason to advocate openness is the difference in outlook and culture between a community of developers and a group of clients of a product/service.

Update: to expand on my last sentence, per request: it’s the vending-machine analogy. If a university as a whole, and CIOs in particular, are labeled “clients” of a service like Blackboard, isn’t it more likely that they’ll conceptualize “technology” as a service to be consumed–a set of tools that either does or doesn’t do a predetermined group of things? It’s a very narrow, external-locus-of-control way of thinking about the role of tech in higher education. Two different audience members described this attitude among faculty: “If I put my syllabus up on the web, I’ve ‘done my job’ as far as technology goes.”

On the other hand, if the institutions and the CIOs are engaged in developing the set of tools, ideally they’ll be thinking actively about the return on that investment and the possible ways to use tech to reinforce all kinds of institutional goals, not just those that are predetermined.

Obviously, this is just an ideal, and the Sakai folks talked to me about resistance from faculty too, and I don’t want to be accused of idealizing openness, but there it is. That’s the difference I’m alluding to.

Tom Robischon writes:

“My enthusiasm is fired in large measure by my history of pursuing educational alternatives over 47 years of college teaching. College had been such a transformative experience for me–at a state college derisively called a cow college because of its agriculture specialization. But I had the unusual opportunity to choose about 65% of my courses, and I came to love learning–for its own sake. That’s what led me to go into academic life after I received my doctorate from Columbia. (It was in philosophy, and ‘philosophy bakes no bread,’ we were warned in grad school.) I wanted the institutions where I taught to produce opportunities for students to also go through self-transformations. Some of my students did, and I wanted every student in the school to go through it, but higher education, I was to discover, was more thought than action. I used to think a warning should be posted outside faculty meetings: ‘Abandon Hope All Ye Who Enter Here!’”

And a college counselor at a private high school writes:

“I concur that higher ed has been undergoing a sea change and will continue to do so for a number of years. As somebody who has worked inside and along the ivory tower since 1986, the never-ending transformation has been very interesting.

For the past couple of years, I’ve advised my students (I’m currently employed at a private high school) that it is not so much where you go but what skill set you develop during your time in college. Internships, research and study abroad are the experiences I push for each and every student. Of course, they look at me and say, “Nice, but I just want to get in and THEN I will figure it out.” So much for planning ahead. Sigh. . .”

I’ll be writing for Good Magazine online about once a month.

What is a meter? Most people would be satisfied with an answer of “about three feet.”
Official definitions, however, turn out to depend on some pretty fanciful, abstract, and far-flung benchmarks: one ten-millionth of the distance between the Equator and the North Pole; or, more strangely, the distance between two lines on a metal bar made of 90 percent platinum, located somewhere in Paris.

The more we look into any standard form of measurement, from the length of a second to the value of a dollar, the more we can see that it’s based on an arbitrary consensus where everything is defined in terms of something else. And yet that doesn’t mean we can do without them. In order to operate in the world we have to trust that we all mean more or less the same thing by reference to these standards.

In the world of higher education, the equivalent of the meter is the credit hour. John Meyer, a venerable sociologist of higher education at Stanford University, was the first to point out to me just how strange a convention this is: “The idea is that your value in the mind of a rational god is a standardized thing in this world.”

Because in fact, there’s nothing standard about the content of a credit hour, in terms of how it’s actually spent. As an undergrad I whiled away pleasant classroom hours discussing Emily Dickinson, while my friend, an engineering major, spent grueling sleepless nights grinding out problem sets, yet we both earned equivalent credits towards our degrees, give or take a few distribution requirements.

In the United States system, you can earn undergraduate credits by writing basic five-paragraph essays, practicing ballet, interning for a political campaign, or building a web site—and legitimately so: These can all be valuable learning experiences. Conservatives love to make fun of undergraduate seminars where the coursework involves watching Lady Gaga videos or porn or Mexican telenovelas, to say nothing of the lazy professors—we’ve all had them—who replace lecture time with movie time. And yet somehow, if you play your cards right, all those credit hours, however you spent them, add up to a degree that can be your most important passport to a better job and a better life.

This type of pleasant chaos is now coming under greater scrutiny. In June, the House Education and Labor Committee held a hearing to try to better define a “credit hour.” And while it may seem esoteric, this is a federal matter because federal financial aid, both grants and loans, is such an important factor in funding undergraduates. And members of Congress are especially concerned that certain for-profit colleges, which soak up more than their share of federal student aid, may be inflating the definition of a “credit hour” in order to keep customers—I mean, students—happy.

But the idea of creating a standard definition of a credit hour—the proposed definition is “one hour of classroom or direct faculty instruction and a minimum of two hours of out-of-class student work each week for approximately 15 weeks for one semester or trimester hour of credit”—quickly collapses into absurdity when you think about the diversity of experiences that are accommodated under the “credit hour” blanket, not to mention the possibilities of innovative uses of technology.

For example, the Open Learning Initiative at Carnegie Mellon created a statistics course that was a blend of online practice with a specially designed tutoring program and in-person instruction. The course met twice a week for eight weeks, versus four times a week for 15 weeks in a conventional course. The computer-assisted students learned more and retained the information just as well as the students in the conventional program, but they did it in one-quarter of the total classroom time. Should they get one-quarter of the credit hours?

These difficulties don’t mean, however, that we should throw up our hands at the possibility of ever coming to a better consensus on the definition of a credit hour. (Besides cracking down on credit inflation, another very good reason to do this is to improve transferability of credits, since half of students start out at community colleges and at least 60 percent transfer at some point in their career.)

One possible answer is to promote more visibility into exactly what goes on inside various classrooms–something that sites like Academic Earth, the Open Courseware Consortium, and Einztein, which show lecturers at top universities at work, can do. Another is to promote publishing students’ work to the open web, as is done at UMW Blogs, and greater discussion and collaboration amongst students at different universities, as can happen on study-network sites like StudyBlue.

It may not be possible to measure the content of a credit hour more accurately, but we can certainly be more precise about it.

Anya Kamenetz Interview, Sakai Conference 2010 from Michael Feldstein on Vimeo.

Michael Feldstein has put up my Sakai Conference keynote (starts at 23 minutes in) as well as a short video interview that’s embedded above.

Sakai, if you don’t know about it, is a major open-source learning management system (an LMS is like an enterprise software suite for universities, providing platforms for electronic gradebooks, web pages for courses, and that kind of thing; Blackboard, at whose conference I’m speaking next week, is the much-maligned Microsoft of LMS companies).

His post, as usual, is an excellent, generous and detailed response, especially to the educational-tech and instructional-design challenges posed by the university functions I identify as content, socialization, and accreditation. Consensus: much work remains to be done.

I was really excited to get the opportunity to address this audience of educational CTOs and CIOs, and I’m very excited to be speaking to a broadly similar audience at Blackboard’s developers’ conference for their API, which is called Building Blocks. My message to both of them is somewhat like Valerie Casey’s message to the crowd at South by Southwest Interactive: You tech people have real tools at your disposal to solve real problems in the real world, so take charge! She was speaking about interaction design and its impact on sustainability. I am interested in how educational technologists can address the issues of cost, quality and access in higher education.

I was talking about this to a good friend who does open-source web development. He told me about working with a team of quants for one of their specialized clients (not Dance4less.com, but picture a similar niche retailer), and how amazing it was to get detailed real-time feedback on every single design change.

(For a university, examples: Should we start registration in April or January? Should math study groups meet once or twice a week? Do students learn Spanish better in 3-week intensives or 12-week semesters? Do first-generation students do better in their own dorms?)

“It’s so weird that universities are the birthplace of all this amazing technology and yet they don’t eat their own dog food,” he said.

Yeah, it is weird.