Computers and Practice — Transcript

Bill Hart-Davidson
May 20, 2018

Slide 1:

Dear Computers and Writing colleagues,

Can we, in the words of Allen Iverson, and more recently Casey Boyle, talk about practice? I'm especially interested in the idea of using computers to guide practice. To help learners see and understand what their practice looks like and how to improve it.

Slide 2:

In my own life, as I'll show you, I often use technologies like my mobile phone, my laptop, various "smart" devices that can communicate with my computer or phone, and web services of various kinds to understand my practice sessions.

These are the kinds of questions I often have: Am I doing enough of the right things to get better? I have these same questions, by the way, as a teacher when I think about my students. Am I giving them enough of the right kind of practice for them to improve?

Slide 3:

Chris Gallagher has been asking the Rhetoric & Composition community to turn its attention, once again, to writers’ behavior. To focus more on what writers do.

Today, I’m asking the Computers and Writing community to do that too. As a community, I don’t think we’ve been so concerned with using computers to guide practice. We’re more often interested in using them as a medium for practice. Like in multimodal composing. And this is great.

I think we can do more. We could use computers to get students to understand their writing practice better, leading to improvements.

Slide 4:

Today we have at our disposal some remarkable technology for seeing practice. We can use technology to attend to frequency, intensity, and quality. And these can help us — and our students — get better. A lot of this technology is pretty new.

To show you what I mean, let me take you back to the not so distant past…2011.

Slide 5:

These are clips from three different demonstration projects completed in 2011 (L to R): 2011 Michigander Ride, 2011 Bride Run, 2011 Ride2CW fundraiser.

These projects marked a turning point for my colleagues and me at WIDE — the team that has also brought you Eli Review, the Hedge-o-Matic, and several other writing technologies. But it was another app — RideStream — that helped us focus on harnessing the streams of activity that digital services made available in order to see practice. Our goal, then and now, was to help folks use that data in interesting ways.

Slide 6:

Like all mad scientists, we experimented on ourselves. We went to Hell and back. Several times.

Slide 7:

What this work taught us is that we could fashion useful feedback from activity streams. Feedback that helps folks trying to improve in some way — to learn — and to better achieve their goals.

We saw, too, that the best way to do this was to help individuals understand how they were part of a larger social scene. How they compared with others, yes, but also how they could collaborate and share with others, working together to meet bigger, shared objectives.

Slide 8:

Our experiments with activity streaming coincided with others doing similar work. Here's a timeline with a few key moments. It is hard to imagine now, in 2018, a world without Facebook's individualized activity "timeline," isn't it? But when we were building the RideStream app in 2010, Facebook still showed everyone the same "News Feed."

Not sure if that one has worked out as well as we all hoped it would…

Slide 9:

We learned a lot from our work on activity streams.

Activity streams support ad-hoc coordination, calibration, and course-correction on the fly. Activity streams can be turned into feedback that affords a balance between managing a group's progress toward specific shared outcomes and allowing individuals to improvise in the face of emergent challenges.

The picture here was taken by #Ride2CW coordinator extraordinaire Suzanne Blum-Malley in 2011. It shows a dead end that was not represented on a map some folks were using to do route planning for the ride to Ann Arbor. When the photo appeared in the RideStream feed, others behind this group, myself included, were able to re-route and arrive at the shared destination. We learned from others' experience.
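
If you're curious what one item in such a stream looks like as data, here's a quick sketch in Python. The field names are loosely modeled on the W3C Activity Streams vocabulary and are illustrative only, not the exact schema RideStream used:

```python
# Illustrative only: a minimal activity-stream item, loosely modeled on
# the W3C Activity Streams vocabulary. Not RideStream's actual schema.
activity = {
    "actor": "rider:suzanne",
    "verb": "post",
    "object": {
        "type": "photo",
        "url": "https://example.com/dead-end.jpg",  # hypothetical URL
        "note": "Dead end ahead, not on the route map!",
    },
    "location": {"lat": 42.28, "lon": -83.74},      # near Ann Arbor
    "published": "2011-06-02T10:15:00Z",
}

def relevant_to_route(item, route_bounds):
    """Return True if an activity happened inside our route's bounding box."""
    loc = item.get("location")
    if loc is None:
        return False
    (lat_min, lat_max), (lon_min, lon_max) = route_bounds
    return lat_min <= loc["lat"] <= lat_max and lon_min <= loc["lon"] <= lon_max

# Riders behind the lead group can filter the stream and course-correct.
ann_arbor_leg = ((42.1, 42.4), (-84.0, -83.5))
print(relevant_to_route(activity, ann_arbor_leg))  # True
```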

Slide 10:

For me, using technology to improve my practice has been…well…let’s say I’ve probably gone where few have gone before…

In 2016 I participated in a NASA study connected to the Mars Mission. I did a structured workout every day for six months. And I was tested consistently and rigorously throughout that time. Every single day, in fact.

I had access to state-of-the-art technology for measuring fitness. And I learned a lot about what I can do to structure practice to improve performance over time.

Slide 11:

Since then, I’ve been using the same techniques to improve my practice in all sorts of ways. The latest is music. But the basic ideas are the same.

Slide 12:

We live in amazing times. In many different areas of our lives, we can use technology to track our activity, see our practice, and make changes that lead to improvement. There are four key things we can become attuned to in particular that work across many different areas: repetition, frequency, intensity, and quality of practice.

Slide 13:

My colleagues on the panel and I are all talking about how we use our bikes to write. Let's call what they produce practice logs: records of our performance that can be used in various ways.

Here are some examples of the way I use my bike, a "smart" trainer equipped with a Bluetooth transmitter and a power meter, my computer, and two web services: Zwift (a virtual world for riding and running) and Strava (a site that lets me record, upload, analyze, and share workout details).

With these I can see the repetition and frequency (lower left), the intensity (upper right), and the quality of my practice. In short, they let me know whether my workouts consist of enough of the right kind of practice to improve.
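
To make those measures concrete, here's a minimal sketch of how the same quantities could be computed from an exported workout log. The log format, the FTP value, and the "productive range" rule of thumb are all assumptions for illustration; Strava's and Zwift's actual analytics are more sophisticated.

```python
from datetime import date

# Hypothetical workout log: (date, minutes, average power in watts).
# A real log would come from a Strava/Zwift export; this shape is assumed.
workouts = [
    (date(2018, 5, 1), 60, 180),
    (date(2018, 5, 3), 45, 210),
    (date(2018, 5, 5), 90, 170),
    (date(2018, 5, 8), 60, 200),
]

FTP = 220  # functional threshold power; rider-specific, assumed here

repetition = len(workouts)                             # how many sessions
weeks = max((workouts[-1][0] - workouts[0][0]).days / 7, 1)
frequency = repetition / weeks                         # sessions per week
intensity = [power / FTP for _, _, power in workouts]  # effort relative to FTP

# "Quality": sessions inside a productive effort band (an assumed rule of thumb).
quality = sum(1 for i in intensity if 0.75 <= i <= 1.05)

print(f"{repetition} rides, {frequency:.1f} per week, "
      f"{quality} in the productive intensity range")
```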

Slide 14:

How well do they work? Last year, Strava's estimate of my baseline fitness, built up from tracking my workouts, matched the number a group of NASA scientists arrived at. Strava can do that because they have millions of riders to compare me to, which makes up for the lack of precision in any individual measurement remarkably well.

And here’s a key point: for guiding my own practice, it is accurate, fast, inexpensive, and effective. You don’t have to train like an astronaut to get NASA-quality data in 2018.

Slide 15:

All of this work is also informing how we have engineered Eli Review, our web service that helps teachers structure peer learning practice for writers.

So…today we can do things similar to what I’ve just shown you with cycling and bass playing…but with writing.

I’m going to walk you through a few examples. The goal is to get you intrigued to learn more about how we can attend to what writers do.

Slide 16:

Today I'm not going to go into all the work we've done to date to try to establish and validate our measures, particularly for "intensity" and "quality," which are not nearly as transparent as repetition and frequency of practice are.

My colleague Melissa Graham Meeks & I have talked about these elsewhere and are publishing this work. So if you are interested, please let me know.

For now, let me say that the graphs you are about to see show repetition, frequency, and intensity in one view. The intensity measure is not just one indicator. It is a combination of several weighted factors. These are what you see in the table here.
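
To give you the flavor of how a composite like that works, here's a toy version in Python. The factor names and weights below are invented for this example; they are not Eli Review's actual indicators or weights.

```python
# Hypothetical factors and weights, for illustration only.
WEIGHTS = {
    "comments_given": 0.35,
    "words_per_comment": 0.25,
    "revisions_made": 0.25,
    "feedback_rated_helpful": 0.15,
}

def intensity_score(factors):
    """Weighted sum of factor values, each already scaled to 0..1."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

print(intensity_score({
    "comments_given": 0.8,
    "words_per_comment": 0.6,
    "revisions_made": 0.5,
    "feedback_rated_helpful": 1.0,
}))  # 0.705
```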

The measures we take are normed, so the highlighted individual's performance has some context in broad terms (the population of writers they are part of) as well as within their immediate peer learning group, their classmates.
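
Norming can be as simple as a percentile rank against a reference group. Here's a sketch with invented scores, placing one student against both the broad population and their own class:

```python
from bisect import bisect_left

def percentile_rank(value, population):
    """Where a score falls relative to a reference group, on a 0-100 scale."""
    ordered = sorted(population)
    return 100 * bisect_left(ordered, value) / len(ordered)

# Invented scores, for illustration only.
population_scores = [0.31, 0.42, 0.47, 0.55, 0.58, 0.63, 0.70, 0.72, 0.81, 0.90]
class_scores = [0.40, 0.55, 0.63, 0.70, 0.81]

student = 0.72
print(percentile_rank(student, population_scores))  # 70.0, broad context
print(percentile_rank(student, class_scores))       # 80.0, peer-group context
```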

At this point, we’ve done projections like these in about 60 different classes of various sizes. Hundreds of students at various types of institutions from K-12 to Law School. And in different disciplines. We are growing more confident that what we see is reliable and accurate.

With the views I’m about to show you, we can see a few interesting things. But the most interesting is whether a student has practiced enough for us to expect improvement.

Slide 17:

Here’s an individual student report. All the data here is cleared for use by an IRB and has been appropriately de-identified.

On the x-axis we can see 13 tasks spread across a 14-week semester. The bars show our measures of intensity. Blue is weighted slightly more than orange, but both matter and are part of a single exercise. The red line shows the harmonic mean of the word count among the bottom 30% of classmates. The green line shows the top 30%. In this group, we can see a very clear and consistent difference in every exercise between the top and bottom 30%.
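
For the curious, here's roughly how those two reference lines could be computed. The word counts are invented; the 30% cutoff follows the description above. One reason a harmonic mean makes sense here is that it damps the influence of a few unusually long submissions:

```python
from statistics import harmonic_mean

def reference_line(word_counts, fraction=0.3, top=False):
    """Harmonic mean of the top or bottom fraction of classmates' word counts."""
    ordered = sorted(word_counts, reverse=top)
    k = max(1, round(len(ordered) * fraction))
    return harmonic_mean(ordered[:k])

# Invented word counts for one exercise.
counts = [45, 60, 82, 95, 110, 130, 150, 175, 210, 240]

print(reference_line(counts))            # red line: bottom 30%
print(reference_line(counts, top=True))  # green line: top 30%
```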

Note that with this chart, we can see performance and change over time. The student shown here started fairly strong and got stronger through consistent, frequent practice that increased in intensity.

We can expect this student to improve.

Slide 18:

What do we think about this student?

Like our strong performer, we see very consistent practice week-to-week. This student is in the top 30% for four of our indicators, and in the middle group for three others, as you can see from the table.

This is a pattern we’d expect from a student who started well, but cautiously, and then picked up the pace at the end. We can expect improvement here.

Slide 19:

How about now? Will this student get better?

This is a struggling student. Each week, they showed up for practice but did almost the minimum, hovering near that bottom 30% line the whole term. This student did not miss any practice, but did not work consistently at a level that gives us confidence they will improve.

Slide 20:

Here’s a feel good story. And it teaches us something about how our criteria of frequency, repetition, and intensity work together.

The student wobbles a bit at the start, sees the value of the work, and ramps up their contributions over time.

But notice that with each new challenge, we see that same pattern repeat. Progress is not, in other words, a smooth upward curve. It is recursive. This student has to have some confidence that the process works. Happily, it appears that is the case. The wobbly period gets shorter over time. The student is learning new habits. But we may need to be patient in order to see them translate to improvement in writing performance.

Slide 21:

By now, you can tell me what is happening here, right?

Here is a consistent pattern of behavior that indicates that a student is not improving. This student simply did too little to learn.

The grey bars that go below the baseline represent our effort to calculate practice missed. We borrow a technique from clinical trials called "intent-to-treat analysis" for this. The idea is to account for positive or negative effects in a group when an intervention is missed for some reason.
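
Here's the idea in miniature. The penalty value and the mapping below are assumptions for illustration, not the actual calculation:

```python
# Sketch of the intent-to-treat idea as applied here: every assigned task
# counts, whether or not the student engaged. Missed or zero-effort tasks
# become negative bars so the cost of skipped practice stays visible.
# The penalty value is an assumption, not Eli Review's actual formula.
def itt_series(scores, penalty=-0.25):
    """Map per-task intensity scores to chart values; None means missed."""
    return [penalty if s is None or s == 0 else s for s in scores]

scores = [0.1, None, 0.15, 0.0, None, 0.2]  # a term of mostly minimal effort
print(itt_series(scores))  # [0.1, -0.25, 0.15, -0.25, -0.25, 0.2]
```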

This graph shows that even when the student was present for practice, they missed out on the benefit because the effort was just not there.

Slide 22:

All of the information I’ve shown you is available today in Eli. You can get it all and make these same charts for your students. If you are interested in doing that, please do get in touch.

But here’s what I want to leave you all with…let’s do more of this. Pay attention to how our learners practice and use technology to help them practice more, more frequently, with appropriate intensity and quality, so they can improve.

Thank you!
