
Announcing announce@list.mathed.net

"Maybe someday you could continue Jerry Becker's email listserv." --David Webb, circa 2015

Dr. Jerry Becker died on April 16 at the age of 85, leaving behind his wife, three children, many grandchildren and great-grandchildren, and a very useful set of email listservs. By way of my Ph.D., Dr. Becker is an academic great-uncle of mine, although I never had the pleasure of meeting him. I can't pretend to fill his shoes, but there is something I can do to continue in his tradition by providing a service similar to what he offered for so many years.

Today I'm announcing announce@list.mathed.net. (Self-subscribe here.) It's an email distribution list to share the kinds of things people used to share through Dr. Becker: job openings, conference announcements, requests for articles for journal issues, and other items of interest to the mathematics education community. Instead of sending items to me, subscribers to the list can email the list address directly and I'll approve items that appear legitimate and useful. I'll tweak things along the way and, if there's demand for additional lists or services, I can consider offering them. The service is provided by an international GNU Mailman host. The software isn't flashy, but it works and isn't going anywhere. I may not live to 85 to keep hosting the list like Dr. Becker did, but I'll stick around as long as I can, and I won't be surprised if Mailman sticks around that long, too.
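A post to the list is just ordinary email, so for the curious, here's a minimal sketch in Python of composing and sending one. The sender address, subject, body, and SMTP host below are placeholders of my own; in practice you'd simply use your regular mail client.

```python
from email.message import EmailMessage
import smtplib

LIST_ADDRESS = "announce@list.mathed.net"

def build_announcement(sender, subject, body):
    """Compose a plain-text message addressed to the list."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = LIST_ADDRESS
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_announcement(msg, smtp_host="smtp.example.com"):
    # Delivery goes through your own outgoing mail server (hypothetical
    # host above); the list moderator approves or rejects the post
    # when it arrives.
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

# Hypothetical example message:
msg = build_announcement(
    "teacher@example.edu",
    "CFP: Special issue on classroom discourse",
    "The journal invites submissions for an upcoming special issue...",
)
```

Once a message like this arrives, Mailman holds it in the moderation queue until it's approved for distribution to subscribers.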

So who am I and why am I doing this? I'm the mathematics specialist at the Colorado Department of Education, and prior to that I was a high school math teacher and a Ph.D. student at CU Boulder. Regardless of whether I was working in practice, research, or policy, I've been interested in the organization of education communities and how they communicate ideas. This includes professional organizations, Twitter, the Global Math Department, and forums like MyNCTM. Under the mathed.net domain, I've blogged and maintained a wiki, and at one time spun up an experimental instance of a social network using free software. In my current role with CDE I operate the CoMath listserv, which has been in existence since 1995, and I help edit the Colorado Mathematics Teacher journal. My advisor, David Webb, made the comment above somewhat offhandedly partway through my graduate school experience and it's stuck in my head ever since. Dr. Becker didn't leave his listservs to a successor, and the subscription lists they contained are (as they should be) the private information of his university. But that doesn't mean we can't try starting anew to continue the old.

I've long believed that if mathematics teachers and educators are going to all be part of a professional community, it's going to take many different related sub-communities in many different forms, using different technologies, membership structures, languages, and different target audiences. We're too numerous to all huddle under one hashtag, and too smart to think that our ideas could—or should—fit in one place. Maybe others will start email list services of their own to meet a particular need, or find other ways to communicate. That would be excellent. The more the merrier. We all have a part in this, and my next part is to moderate a new email list. So if you have something to share, or need to have things shared with you, I'll be at announce@list.mathed.net waiting for you to subscribe and post your messages.

Maintaining an Online Scholarly Identity

tl;dr: I prefer an ORCID + figshare + Twitter/Google+ combination over services like ResearchGate and Academia.edu.

By 2009 I was taking social media pretty seriously, and with all those links to find, save, and share, I put a lot of thought into social bookmarking. I was trying to figure out how to manage a bunch of overlapping options; at one time or another, I was using Delicious, Diigo, Pinboard, Google Reader, Facebook, Twitter, Google+ and FriendFeed. Some of those products had integrations (like sending links from Reader to Twitter) while simultaneously having overlapping functions (like following someone on Reader for the links you were also seeing them post to Twitter). New services popped up all the time, and others faded away or were killed off completely. It was kind of a mess, and it taught me some things about what I valued in my tools: (1) multiple small tools (but not too many) focused on being good at specific tasks could return greater total value than all-in-one solutions, and (2) openness is important, for financial reasons (both mine and the service's), interoperability, and exportability.

The current landscape of tools and services for managing an online scholarly identity feels very reminiscent of those social bookmarking days. Part of that online identity includes general social media services, like Twitter and Facebook, but also a bunch of services for academics: Academia.edu, ResearchGate, ORCID, Google Scholar, figshare, SlideShare, ResearcherID, and Mendeley. I'm sure there are others.

My experience with social networks and bookmarking taught me I had three basic needs: establishing my identity, saving resources, and a place to follow and share with others. For my identity, I've used my own website as well as services like about.me. For saving resources (links, mostly), I eventually ditched both Delicious and Diigo and went with Pinboard. For following and sharing, I now stick mostly to Twitter and Google+, with less activity on Facebook and LinkedIn.

With a scholarly identity, I feel like I still have the same three basic needs: establishing my identity, saving resources, and a place to follow and share with others. In academia, your identity is often represented by your curriculum vitae and publication record, and there are ways of maintaining a CV online that go beyond just posting a PDF of the paper version. For saving resources, academics need a repository to save their slides, posters, handouts, pre-prints, and unpublished manuscripts. That leaves a place to follow and share with others. It could be an all-purpose social network, or it could be something more specialized. Here's how I see the services I mentioned playing out across these three needs:

My view of the online scholarly identity landscape

Right away, you see two services, Academia.edu and ResearchGate, making the all-in-one play. They also happen to be mostly closed, profit-seeking services, which has raised the eyebrows (and/or fists) of some academics. Mendeley is nearby but doesn't offer much as a social network. Google Scholar is a bit further from the center, as it lets you follow other academics and be notified about new publications, but offers no way to interact with other people. For the most part, these services — while certainly valuable in their own ways — go against my two criteria of simplicity and openness. Does that mean I don't use them? Actually, I have accounts and profiles on all four of these services, but I don't spend a lot of time on them and I'm very wary of the rights they want over my work. It's tricky to add a publication to your ResearchGate or Academia.edu profile without actually giving them the document, and if you manage to do it they'll hound you for the full-text version. So tricky is Academia.edu in this respect that I've decided to trust it with nothing. For more on the pros and cons of these services, I'll refer you to the post "A Social Networking Site is Not an Open Access Repository" from the University of California.

That leaves me seeking out tools in the non-overlapping parts of my diagram. Here's what I've come to use most:

Identity: This is probably the easiest choice of them all. My ORCID is like my CV, and its sole purpose is to give researchers a unique identity tied to their scholarly activities and outputs. (See this as a list of ten things, if you prefer.) My profile is filled with a lot of things I entered manually, but when I published an article recently with Springer I gave them my ORCID and got two things: (1) the article was added to my ORCID profile automatically via CrossRef, and (2) I got a little ORCID badge on the article that links to my ORCID profile. Over time, this system is designed to make sure that no person or system confuses me with another Raymond Johnson, and all my works are tied together. I do pay attention to my Google Scholar profile since it's such a widely used and useful search tool, but you can tell Google doesn't put a ton of resources behind it and I'm not sure how many people or services rely on the profile features. Impactstory is a really cool thing that belongs in this category (nearest the center), but it really operates on top of, not instead of, ORCID and other services.
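Part of what makes ORCID work as infrastructure is that records are machine-readable through a public, no-authentication API. Here's a rough sketch of reading the works on a record; the response structure is a trimmed, hypothetical sample in the shape the v3.0 API returns, so treat the field paths as an assumption to verify against the API docs:

```python
import json
import urllib.request

PUB_API = "https://pub.orcid.org/v3.0"

def works_url(orcid_id):
    """URL for the public 'works' section of an ORCID record."""
    return f"{PUB_API}/{orcid_id}/works"

def fetch_works(orcid_id):
    # Content negotiation: ask for JSON rather than the default XML.
    req = urllib.request.Request(
        works_url(orcid_id), headers={"Accept": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def work_titles(record):
    """Pull the title out of each work group in a works record."""
    titles = []
    for group in record.get("group", []):
        for summary in group.get("work-summary", []):
            titles.append(summary["title"]["title"]["value"])
    return titles

# Trimmed, hypothetical response in the shape the API returns:
sample = {
    "group": [
        {"work-summary": [
            {"title": {"title": {"value": "An article on mathematics education"}}}
        ]}
    ]
}
```

This machine-readability is exactly what lets a publisher like Springer push a new article onto a profile automatically via CrossRef.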

Repository: This is a tougher choice because in addition to having a repository that is stable and secure, there are also intellectual property rights to think about. I have tried hosting files on my own website, as many publishers allow academics to do. That puts a certain amount of responsibility on me, and I have to worry about registering domains, fixing broken links, having a stable URL structure, etc. I'm moving away from that and recently added all my slides and posters to figshare. While figshare does give me a profile page, identity services are really not its thing. Figshare is about sharing all kinds of open-licensed scholarly outputs, including datasets, figures, posters, presentations, and documents. They give you a stable DOI that (I assume) will always point to your file, even if figshare shuts down and someone else takes over the repository. figshare does have its rough edges, and through following their social media activity I can safely say that most of their attention is on behind-the-scenes integrations with services like ORCID, ImpactStory, and APIs that are more for librarians than individual users. figshare also takes some commitment, as you have to choose one of a number of open licenses to post your work publicly (like Creative Commons BY; I wish other CC licenses were available, but they're not), and once you post something there's no delete button. It's not that you've given your property to figshare and they won't give it back; rather, you've licensed your property for the world to see and use and figshare is making sure the world can exercise that right. There are alternatives, like SlideShare, but they're owned by LinkedIn, more business-oriented, and not as open or integrated into academic services.

I currently have two documents that I'm not sharing on figshare and have instead chosen to use our university repository. I could put more on scholar.colorado.edu and rely on the fact that it will probably operate so long as the university exists, but I can't continue to use it after I leave the university.
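Those stable DOIs are also machine-actionable: doi.org supports content negotiation, so a figshare (or publisher) DOI can be resolved directly to citation metadata rather than a landing page. A sketch, with a hypothetical CSL-JSON sample of my own standing in for a live response:

```python
import json
import urllib.request

def doi_metadata_request(doi):
    """Request citation metadata for a DOI via doi.org content negotiation."""
    return urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )

def fetch_csl(doi):
    with urllib.request.urlopen(doi_metadata_request(doi)) as resp:
        return json.load(resp)

def short_citation(csl):
    """Author family names + year + title from a CSL-JSON record."""
    names = ", ".join(a["family"] for a in csl.get("author", []))
    year = csl["issued"]["date-parts"][0][0]
    return f"{names} ({year}). {csl['title']}"

# Hypothetical CSL-JSON sample in the shape these services return:
sample = {
    "author": [{"family": "Johnson", "given": "R."}],
    "issued": {"date-parts": [[2015]]},
    "title": "A conference poster",
}
```

This is one reason a DOI-backed repository beats self-hosted files: the identifier, not the hosting, is what other services build on.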

Social Network: While I get notifications about publications through Google Scholar and ResearchGate, I hardly interact at all with other people there. For that, I stick with Twitter and Google+. And that's fine with me, really, as I only have so much mental bandwidth to work with anyway. Once in a while I'll look at the Q&A on ResearchGate, but rarely do I see a conversation I want to jump into the way I routinely do on my regular social media accounts.

By going with an ORCID + figshare + Twitter/Google+ combination I feel like I'm getting (and retaining) more value than I would with a single service like ResearchGate. It's a fair amount of work, though, and I'd recommend people not try to maintain too many identities at once. My Academia.edu profile is nearly empty because few education researchers I know are there, and it's just too much duplicate work to maintain it and ResearchGate, and my ResearchGate profile still doesn't include everything I list at ORCID. I think the rarer your name is, the less need there is for you to maintain a Google Scholar profile, and you can probably settle on just one repository for public posting of your work. I have a lot of confidence that ORCID will be around for the long haul, unlike some of these venture-funded services that will have to make a profit or likely be shut down. It's not easy to tell who might go the way of Google Reader or FriendFeed, but as we saw with those services, something new came along to replace them and it was easier if we weren't too invested in any one tool.

Oh, and if all else fails, an up-to-date, ready-to-print pdf of your CV is still not a bad thing to have handy.

Education, Neuroscience, and Tangled Webs We Weave

I'm far from the first to point this out, but some of us in the education game hold some ill-informed beliefs about the brain and what it should mean to us as teachers. These are known as "neuromyths" and there's even an organization, the International Mind, Brain and Education Society, working to improve how educators use knowledge from neuroscience. A study by Dekker, Lee, Howard-Jones, and Jolles (2012) in the Netherlands found that when given 32 statements about the brain, 15 of which were myths, on average teachers believed in about 50% of the myths. I doubt teachers in the United States would fare any better, given what I see about left brain vs. right brain, "learning styles," and "only use 10%" nonsense.

Even though there is more communication than ever on peer-reviewed brain research, a lot of that communication distorts the science and ends up spreading or creating new neuromyths (Howard-Jones, 2014). What does that distortion look like? I present to you two examples, where something I saw on social media referring to the brain ended up linking back to research with claims that looked quite different.

Example One: "Your Brain Grew"

Yesterday +Joshua Fisher pointed out this tweet:
Being sensitive to neuromyths, I admit I poked a little fun at this tweet-length, out-of-context claim. Rightly, +Paul Hartzer called me out and suggested I search for some context, such as this:

http://tvoparents.tvo.org/HH/making-mistakes

I immediately went for the "growing evidence" link, which took me to this:

https://www.psychologytoday.com/blog/the-science-willpower/201112/how-mistakes-can-make-you-smarter

As this was a review of two studies, I dove down to the reference section and tracked down the research. The first, by Moser et al. (2011), had this abstract:

Abstract:
How well people bounce back from mistakes depends on their beliefs about learning and intelligence. For individuals with a growth mind-set, who believe intelligence develops through effort, mistakes are seen as opportunities to learn and improve. For individuals with a fixed mind-set, who believe intelligence is a stable characteristic, mistakes indicate lack of ability. We examined performance-monitoring event-related potentials (ERPs) to probe the neural mechanisms underlying these different reactions to mistakes. Findings revealed that a growth mind-set was associated with enhancement of the error positivity component (Pe), which reflects awareness of and allocation of attention to mistakes. More growth-minded individuals also showed superior accuracy after mistakes compared with individuals endorsing a more fixed mind-set. It is critical to note that Pe amplitude mediated the relationship between mind-set and posterror accuracy. These results suggest that neural mechanisms indexing on-line awareness of and attention to mistakes are intimately involved in growth-minded individuals' ability to rebound from mistakes.
This sounds familiar to those who know things about growth vs. fixed mindsets, and shows that growth mindsets are associated with some brain activity that we don't see with fixed mindsets. So maybe brain "growth" doesn't happen to everyone. The second article, by Downar, Bhatt, and Montague (2011), is even more neuroscience-y:

Abstract:
Accurate associative learning is often hindered by confirmation bias and success-chasing, which together can conspire to produce or solidify false beliefs in the decision-maker. We performed functional magnetic resonance imaging in 35 experienced physicians, while they learned to choose between two treatments in a series of virtual patient encounters. We estimated a learning model for each subject based on their observed behavior and this model divided clearly into high performers and low performers. The high performers showed small, but equal learning rates for both successes (positive outcomes) and failures (no response to the drug). In contrast, low performers showed very large and asymmetric learning rates, learning significantly more from successes than failures; a tendency that led to sub-optimal treatment choices. Consistently with these behavioral findings, high performers showed larger, more sustained BOLD responses to failed vs. successful outcomes in the dorsolateral prefrontal cortex and inferior parietal lobule while low performers displayed the opposite response profile. Furthermore, participants' learning asymmetry correlated with anticipatory activation in the nucleus accumbens at trial onset, well before outcome presentation. Subjects with anticipatory activation in the nucleus accumbens showed more success-chasing during learning. These results suggest that high performers' brains achieve better outcomes by attending to informative failures during training, rather than chasing the reward value of successes. The differential brain activations between high and low performers could potentially be developed into biomarkers to identify efficient learners on novel decision tasks, in medical or other contexts.
Now we're talking about some brain activity, but the results aren't so simple. Take-away? A group of doctors who performed well on a task had brains that appeared to respond better to failure, while low-performing doctors didn't. Also, don't overlook the last bit: This study is less about finding better teaching than it is about identifying biomarkers that indicate who might be more easily taught. That's an important difference — teachers don't get to scan kids in fMRI machines and only teach the best of the lot.

Example Two: Common Core is Bad for Your Brain

Last year Lane Walker pointed me to this claim in a post on LinkedIn:

https://www.linkedin.com/groups/Did-anyone-get-any-interesting-4204066.S.5912659047466680321

Curious (and very skeptical), I followed the link to find this:

https://peter5427.wordpress.com/2014/08/28/stanford-study-common-core-is-bad-for-the-brain/

That post was referencing this article on Fox News:

http://www.foxnews.com/health/2014/08/18/kids-brains-reorganize-when-learning-math-skills/

A search for the actual research took me to an article by Qin et al. (2014) with this abstract:

Abstract:
The importance of the hippocampal system for rapid learning and memory is well recognized, but its contributions to a cardinal feature of children's cognitive development—the transition from procedure-based to memory-based problem-solving strategies—are unknown. Here we show that the hippocampal system is pivotal to this strategic transition. Longitudinal functional magnetic resonance imaging (fMRI) in 7–9-year-old children revealed that the transition from use of counting to memory-based retrieval parallels increased hippocampal and decreased prefrontal-parietal engagement during arithmetic problem solving. Longitudinal improvements in retrieval-strategy use were predicted by increased hippocampal-neocortical functional connectivity. Beyond childhood, retrieval-strategy use continued to improve through adolescence into adulthood and was associated with decreased activation but more stable interproblem representations in the hippocampus. Our findings provide insights into the dynamic role of the hippocampus in the maturation of memory-based problem solving and establish a critical link between hippocampal-neocortical reorganization and children's cognitive development.
As I suspected, the neuroscience really had nothing to do with Common Core or how to teach math. It simply identified which part of the brain becomes more active as children increase their ability to do things from memory. That should sound exciting if you're a neuroscientist, but it's pretty useless if you're a teacher.

Why We Have Theories of Learning

Like a predictable telephone game, you can see how research gets distorted as it morphs its way through news articles, blog posts, and social media posts. You could criticize me for not quite backtracking all the way to the source, as I'm only referring to abstracts and not digging deeply into the research described and cited in the articles themselves. To take that last step, frankly, requires more of a neuroscience background than I possess. I don't expect that of myself, and wouldn't expect a teacher to do that, either. Daniel Willingham wrote about this a few years ago, and acknowledged the role of institutions like schools of education to collectively make sense of such research and make it useful for teachers. There are people like Jo Boaler who are doing this work. I admire her for taking on the challenge of making complex ideas understandable and appealing to a wide audience of educators, and I'm sure every day she thinks hard about what messages she has to craft and how she has to craft them. It's tricky work.

My hope for teachers is this: When you hear claims about the brain and what they mean for your teaching, be skeptical. Avoid the possibility that you'll be fooled by the next big neuromyth. Realize that a lot of neuroscience relies on placing individuals in an fMRI machine and observing their brain activity while they perform a task. Is that cool science? You bet it is. Does this kind of research capture the context and complexity of your classroom? It does not.

Instead, understand and appreciate why education and related fields have theories of learning that don't rely on knowing what the brain does. In general, theories of constructivism don't go into detail about what's happening at the synapse level, nor do they need to. Cognitive theories use schema to theorize what's going on in the head, but no fMRI machines are necessary. Situated and sociocultural theories of learning gain their usefulness not by trying to look inside the learner's head, but rather outward to that learner's environment, the tools they use, the communities they participate in, and how culture and history shape their activity. So teachers, focus on that — focus on the culture of your classroom, how your students participate, and the learning community you support. Focus on how a carefully constructed curriculum, well-enacted, supports a trajectory of student learning. It will get you much further than neuromyths.

References

Dekker, S., Lee, N. C., Howard-Jones, P., & Jolles, J. (2012). Neuromyths in education: Prevalence and predictors of misconceptions among teachers. Frontiers in Psychology, 3, 429. doi:10.3389/fpsyg.2012.00429. Retrieved from http://journal.frontiersin.org/article/10.3389/fpsyg.2012.00429/full

Downar, J., Bhatt, M., & Montague, P. (2011). Neural correlates of effective learning in experienced medical decision-makers. PLoS ONE, 6(11), e27768. doi:10.1371/journal.pone.0027768. Retrieved from http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0027768

Howard-Jones, P. A. (2014). Neuroscience and education: Myths and messages. Nature Reviews Neuroscience, 15, 817–824. doi:10.1038/nrn3817. Retrieved from http://www.nature.com/nrn/journal/v15/n12/full/nrn3817.html

Moser, J., Schroder, H., Heeter, C., Moran, T., & Lee, Y. (2011). Mind your errors: Evidence for a neural mechanism linking growth mind-set to adaptive posterror adjustments. Psychological Science, 22(12), 1484–1489. doi:10.1177/0956797611419520. Retrieved from http://pss.sagepub.com/content/22/12/1484

Qin, S., Cho, S., Chen, T., Rosenberg-Lee, M., Geary, D., & Menon, V. (2014). Hippocampal-neocortical functional reorganization underlies children's cognitive development. Nature Neuroscience, 17, 1263–1269. doi:10.1038/nn.3788. Retrieved from http://www.nature.com/neuro/journal/v17/n9/abs/nn.3788.html

NCTM's Grand Challenge

This post first appeared at CLIME Connections. I thank Ihor Charischak (@climeguy) for reaching out and encouraging me to think more deeply about these issues, and for letting me repost this here.

The Old and the New

NCTM has a generation gap problem.

What Dan was noticing at the 2013 NCTM Annual Meeting may not have been just about age, but age is a big part of it. During a session at the 2014 NCTM Annual Meeting, Jon Wray reported that the median age of an NCTM member is 57.5 years. 57.5 years! I personally have a fondness for NCTM veterans and enjoy the history of mathematics education, but a median of 57.5 is strikingly high compared to the current distribution of teacher ages, where we see a median age closer to 40–42 and a modal age of about 30:

(Source: Ingersoll, Merrill, & Stuckey, 2014)

This age difference is noteworthy for NCTM because older generations, like those in the upper half of NCTM's membership, tend to be relatively loyal to their institutions. But that's not the case for younger generations that now comprise the bulk of new teachers. Millennials often fail to find relevance in institutions, or they share in Generation X's tendencies towards institutional mistrust. Claims like these are symptomatic of NCTM's challenge:

It's not that Millennials don't value the power of being organized — they just tend to use the internet and social media to organize rather than rely on help from established organizations. An increasing number of math teachers are using Twitter and other social networks to organize themselves in both less- and more-formal ways. There might be no better example of self-organization than "Twitter Math Camp," an institution-free math conference where attendees tend to be young, connected, and not members of NCTM. (Attendees also tended to be very white and male, even more so than for the profession as a whole. That's a challenge for TMC and our social networks.)

The degree to which NCTM understands the changing needs of its membership is not entirely clear. On the one hand, NCTM does have an organizational social media presence (Twitter, Facebook, and LinkedIn) as well as blogs and social media accounts for their teacher journals (TCM, MTMS, and MT). Yet, not so long ago, members of NCTM's Research Committee appeared unaware that such tools could be used for connecting with teachers. In a 2012 report, the committee's recommended strategies for reporting research to teachers focused on journal-based publications and conferences. There were zero mentions of the internet, the WWW, blogs, social media, virtual teacher communities, or anything that would have distinguished their recommendations from plans NCTM might have formulated in the 1980s or before. While the committee's recommendations for how research gets reported in their journals and at their conferences might be sound, an assumption that math teachers will be loyal journal-reading, conference-attending members is not. NCTM's grand challenge is not to refine how well it preaches to its choir.

Thankfully, NCTM is not monolithic and some clearly understand the challenge NCTM faces in being relevant to the various needs of young math teachers. Peg Cagle is one of the better-connected members of NCTM's Board of Directors (Jon Wray is another), and if you click through to see the replies to Peg's question, you'll see a lot about what teachers want and what they feel NCTM is currently providing.

Beyond Content

In 2010 Google's Eric Schmidt famously claimed that every two days we create as much information as we did from the rise of civilization through 2003. While the accuracy of such a statement is difficult to establish, there's no doubt that we are awash with content.

Included in all this content are materials for math teachers, such as curriculum materials, lesson plan sites, instructional videos, test generators, and other teachers' reflections on their practice. What's more, this content is cheaper, more abundant, and more accessible than ever before. When math teachers perceive NCTM mostly as a provider of journals and conferences, NCTM risks becoming just another (and more expensive) content source in a vast sea of content sources. The quality of NCTM's resources certainly helps their cause, but we shouldn't ignore the possibility that people sometimes settle for good enough when they can get something easily at low or no cost. For all its journals and all its conferences, NCTM's game can't be to out-content the rest of the internet.

The internet has spawned many disruptive innovations and NCTM is one of many institutions facing challenges in this content-rich era. Traditional news media is similarly challenged to attract younger subscribers/readers/viewers who are accustomed to using the internet as an abundant source of news coverage, much of which is localized, specialized, and free. We've seen traditional news organizations experiment with variations of familiar revenue strategies, such as targeted advertising and freemium subscription models, but some think it's time for a more fundamental shift in how news media serves the public.

One of my favorite thinkers on the future of news is Jeff Jarvis, a journalism professor, blogger, and podcaster. Recently, Jeff has been working to answer the question, "Now that the internet has ruined news, what now?" Jeff has partly given his answer to this question in a five-part series (1, 2, 3, 4, 5) on Medium as he writes his way towards a new book due out in November. At the core of Jeff's vision is a service-oriented journalism based on relationships, where content is just a means to that end, not the end itself. Journalists would position themselves to work closely with communities, privileging community knowledge instead of acting as the content authority and gatekeeper. Social media would be a key tool for building and maintaining these relationships, as Jeff describes in this selection from Part 2 of his essay:

Now we have more tools at hand that enable communities to communicate directly. So perhaps our first task in expanding journalism’s service should be to offer platforms that enable individuals and communities to seek, reveal, gather, share, organize, analyze, understand, and use their own information — or to use the platforms that already exist to enable that. The internet has proven to be good at helping communities inform themselves, sharing what’s happening via Twitter, what’s known via Wikipedia, and what matters to people through conversational tools — comments, blog posts, and tweets that, never mind their frequent banality and repetition and sometimes incivility, do nevertheless tap the cultural consciousness.

To be clear, Jeff isn't saying journalists should just be replaced by the public sharing of information. Journalists can add value to the community's knowledge by raising new questions, adding context, bringing experts into the conversation, fact-checking, and performing other duties long-associated with quality journalism. What's different, says Jeff, is that "simply distributing information is no longer our monopoly as gatekeepers and no longer a proper use of our scarce resources." Content doesn't go away, but it takes on a supporting role for journalists focused on maintaining personal relationships with their community and its members.

I may be overestimating the similarities between challenges faced by news organizations and by a professional teaching association. But where visions for the future are concerned, I think Jeff Jarvis's service-oriented, relationship-based model for journalism may also be a promising model for NCTM. When I re-read Jeff's essays and mentally substitute "NCTM" for "journalism" or "news," I start to imagine a different kind of NCTM focused on privileging and coordinating the knowledge and relationships of a community of math teachers, one in which journals and conferences are merely seen by members as means, not the ends.

What Now for NCTM

I may be guilty of armchair quarterbacking. I also may be guilty of underestimating how much NCTM members already feel part of a strong professional community built on relationships. During the same panel at which Jon Wray mentioned NCTM's median age was 57.5, he also proudly expressed that he thought of NCTM as a collection of members he could refer to as "we" or "us." That's great for Jon and like-minded members, but that's not where NCTM's grand challenge lies. The challenge is with those who see NCTM as an "it" or a "they," likely young teachers who only associate NCTM with conferences they might not attend and publications they might not read.

I do not profess to be an expert in relationship-building, nor do I believe there to be easy answers. That's part of what makes this a grand challenge. That said, here are a few ideas for moving forward:

  • Don't be faceless. NCTM's blogs and social media accounts are a good start, but to build strong relationships we need to associate with each other as individuals, not as product titles. For example, instead of a @MT_at_NCTM Twitter presence to represent the journal, NCTM needs the editors and authors of Mathematics Teacher to represent themselves online as individuals. The same goes for board members, NCTM staff, and anyone else who identifies with the organization. It's easier to build trust with a person than a brand, and in my two years of helping teachers develop criteria to identify quality resources, I still don't think any indicator of resource quality matters more to a teacher than a recommendation from an individual they trust.
  • Find teachers where they are. Perhaps a time existed when it would have made sense for NCTM to build its own social networking site, but that time has passed. We should leverage the networks that already exist and find the teachers there. Some math teachers already use social media for professional reasons and would be easily engaged by NCTM. Other teachers of mathematics, who may only use social media for personal reasons, number in the tens and potentially hundreds of thousands. They may or may not be NCTM members, or regularly interact with other teachers online, but they exist. NCTM needs to organize its membership so that we seek these teachers out, show them that we care, and offer our support.
  • Don't just push, listen. The most common behavior I currently see in NCTM's social media streams is pushing content. To again use @MT_at_NCTM as an example, instead of just pushing out a daily link to an article or calendar problem, show that you're listening to the community. Talk to teachers about what they need and want. Use the journal to respond to these needs and show the community that you're listening. When there's a new article to share, arrange for the authors to engage in discussions and Q&As around what they've written. Again, engage as individuals, and use the @MT_at_NCTM account (and likewise, the other journal social media accounts, blogs, etc.) to highlight and point people to these community interactions.
  • Build a thank you economy and know your members. NCTM should take a few pages from Gary Vaynerchuk's playbook and establish a "thank you economy" with its members. Gary's current business is helping brands with their marketing, focusing more on listening and thanking than on pushing and closing deals. The language Gary uses in his keynotes is NSFW and his message is bold. Here's a 10-minute version and an hour-long version of Gary's talks. (Note that these are 3-4 years old but still sound cutting edge. On Gary's clock, that means the next big thing is probably already here.) Gary is a big believer in knowing your customers and using that knowledge to show how much you care. Imagine an NCTM that used social media to know more about you as a teacher — the subjects you were teaching, the textbooks you have, the length of your class period, nuances in your state and local standards, etc., and used that information to help you in ways very specific to your needs. That kind of listening and caring about teachers as individuals builds loyalty.
  • Play matchmaker. At both the AERA and NCTM Annual Meetings this year I heard someone say something like, "We need a match.com for connecting teachers who want to work together" or "We need a website that connects teachers who want to work with researchers." Along with knowing teachers well enough to match them with relevant content and material resources, NCTM should know enough about its membership to connect members with each other.
  • Guide teachers towards mastery. In a 2001 article in Teachers College Record, Sharon Feiman-Nemser discusses what a continuum of teacher education might look like if it began with preservice teachers and continued through the early years of teaching. This continuum would need mentorship and induction programs better than what we have now and, most importantly, someone to coordinate teacher learning across university and school boundaries. For math teachers, NCTM might be the organization that could make this happen. If NCTM knew the strengths and weaknesses of teacher preparation programs, and of individual graduates, and knew more about those individual teachers' needs and experiences, they could position themselves as the facilitator/provider of high-quality, ongoing professional development for teachers. Examples: Maybe I'm a new teacher hired to teach 7th grade, but I student taught with 11th graders — NCTM could build my 1st-year PD around video cases with 7th graders. Maybe my teacher education program was strong in its approach to formative assessment — NCTM could provide support in furthering my practice instead of starting back at the basics. Maybe I switched states for my new teaching position — NCTM could help me better understand how teaching math is different in my new place, and what's worked well for other teachers making a similar move. Yes, this is that big data stuff that scares some people, but I'm not sure the size of the data matters much when it leads to something genuinely helpful.

These are just some ideas. Others will have different perspectives on NCTM's challenges and possible ways to meet them, but I hope this either starts or adds to conversations about math teaching as a profession and what we should value in our professional organizations. While I understand why some teachers aren't members of NCTM, I think math teaching is a stronger profession with a strong NCTM. It's a better "we" than a "they." This stronger NCTM lies in a new generation of math teachers, ones who I believe are willing to connect and collaborate as part of an organization committed to forming relationships with them and amongst them, not just providing content to them.

Nielsen's Reinventing Discovery (2011) in the Context of Education Research

As a Ph.D. student I've taken my share of methods courses, giving me skills in everything from ethnography to ANOVA. But as important as those things are, I've sensed that there are new research methods emerging thanks to technological advancements and online communities. Our lives are too data-rich and our means of communication are too plentiful to limit ourselves to the same methods for research -- and learning -- that we used just 10 years ago.

Even though I feel like I live in the thick of this revolution, engaging with teachers and researchers on Google+ and Twitter, I wanted a broader perspective on how researchers use networks to make new discoveries. For this I turned to Michael Nielsen's book Reinventing Discovery: The New Era of Networked Science. Although Nielsen is a pioneer in quantum computing, I hoped to find some ideas that I could apply to a social science like education research.

Nielsen uses a variety of examples and concepts to describe what works and what doesn't (or hasn't) in networked science. Instead of listing them here, watch this TEDx talk by Nielsen:


If that talk wasn't long enough for you, Nielsen gave a longer talk at Google that is worth checking out.

As much as I like Nielsen's example of Tim Gowers's Polymath Project, I can't imagine a direct translation to education research. One of the beautiful aspects of mathematics is that it usually doesn't require conducting an experiment, interviewing subjects, sampling a population, or agreeing on a conceptual framework -- the kinds of things that make social science untidy and difficult. Frankly, if solving problems in education were structured like proving mathematical theorems, I think we'd be solving more problems and finding better solutions than we are currently.

Nielsen's story about Qwiki hits home for me. For some time now I've imagined creating and maintaining a wiki that essentially translates the contents of the NCTM's Second Handbook of Research on Mathematics Teaching and Learning into knowledge that teachers could access and use. Just like Qwiki, it's easy to get math teachers and educators to agree that this would be a great resource to have. Unfortunately, I'm not sure how a math education wiki like the one I've imagined would avoid Qwiki's fate. Without incentives for experts to contribute and maintain the site, I'd probably spend more time fighting spam than helping teachers.

Neither the Polymath Project nor Qwiki offers a blueprint for a new kind of mathematics education research. Thankfully, Nielsen describes some general characteristics for successful networked science. First, in his chapter titled "Restructuring Expert Attention," Nielsen suggests networked science has these attributes:
  • Harnessing Latent Microexpertise -- The project must allow even the narrowest of expertise. A 3rd-year algebra teacher might not have the broad expertise of an experienced math education researcher, but that 3rd year teacher might have small elements of expertise that exceed that of the recognized experts.
  • Designed Serendipity -- The project needs to be easy to follow and encourage participation from a variety of experts. You want problems to be seen by many in the hopes that just a few will think they have a solution they wish to contribute.
  • Conversation Critical Mass -- One person's ideas need to be seen by others so they create more ideas, and the conversation around all the contributions keeps the project going.
  • Amplifying Collective Intelligence -- The project should showcase the fact that collectively we are smarter than any one individual.
Those are all great characteristics of any project. But what makes this any different from a traditional, offline project? Nielsen offers several suggestions. Unlike a large group project with clear divisions of labor, technology allows us to divide labor dynamically. Wikipedia certainly would not have grown the way it did if labor had been divided statically between a set of contributors. Also, networked science uses market forces to direct the most attention to the problems of greatest interest. Lastly, contributing to an online project rarely feels like committee work, and participants can more easily ignore poor contributions or disruptive members.

Projects like Wikipedia and Linux exhibit the above attributes, but Nielsen explains that such projects needed something extra in order to scale to thousands of participants. Nielsen describes these in a chapter called "Patterns of Online Collaboration," and they are: (1) being modular, (2) encouraging small contributions, (3) allowing easy reuse of earlier work, and (4) signaling what needs attention. When I look at this list and think of Wikipedia, I can see how well a wiki or open source software project fosters these patterns. But how do we build such a project in education? Given Nielsen's framework above, a project that would interest me needs three key aspects:
  • The content of the project has to be something that both teachers and researchers can contribute, such as a collection of math tasks, curriculum plans, or perhaps pedagogical techniques.
  • Teachers need to be able to easily use and modify each other's content.
  • (This one's the crux!) When teachers use content, there needs to be a way to collect and submit feedback about the use of that content, and that feedback becomes data that researchers can use not only to improve the content of the site, but to produce new and traditional reports of research.
It's that last bullet that's the hardest but most intriguing. There are so many places to get lesson ideas on the internet, but I don't know of any that collect data about the effectiveness of the lesson in a format suitable for research. Khan Academy claims to do this kind of data collection internally, but KA is a closed project that lacks nearly all of the attributes Nielsen has described in his book. The project I want needs to be an open one, with all of its moving parts exposed, and no more owned by or identified with a single participant than Wikipedia is with Jimmy Wales. If you have ideas for what such a project could/should look like, leave them in the comments!
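To make that last bullet a bit more concrete, here is a rough sketch of what a structured feedback record might look like in such a project. Everything here is hypothetical — the class name, the fields, and the rating scale are all just one way the data could be shaped so that a teacher's use of a shared task becomes a data point researchers could aggregate:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type: one teacher's use of one shared task.
# Field names are illustrative, not from any existing system.
@dataclass
class TaskFeedback:
    task_id: str             # which shared task or lesson was used
    grade_level: int         # the grade it was used in
    date_used: date
    minutes_spent: int       # instructional time the task actually took
    modifications: str       # how the teacher adapted the task
    student_engagement: int  # e.g., a teacher rating on a 1-5 scale
    notes: str = ""          # open-ended observations

# A single classroom use becomes one analyzable data point:
fb = TaskFeedback("linear-functions-04", 8, date(2012, 3, 14),
                  45, "added a graphing warm-up", 4)
```

Multiply records like this across hundreds of classrooms and the same content repository starts to double as a research dataset — which is exactly the open, exposed-moving-parts quality the closed systems lack.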

Settling slope and constructive Khan criticism

This was co-written with Frederick Peck, a fellow Ph.D. student in mathematics education at the University of Colorado at Boulder and the Freudenthal Institute US. We each have six years of experience teaching Algebra 1 and are engaged in research on how students understand slope and linear functions. Fred shares his research and curriculum at RMEintheClassroom.com.



Sal Khan (CC BY-NC-ND Elvin Wong)
The Answer Sheet has recently been the focus of a lively debate pitting teacher and guest blogger Karim Kai Ani against the Khan Academy's Salman Khan. While Karim's initial post focused mainly on Sal Khan's pedagogical approach, Karim also took issue with the accuracy of Khan Academy videos. As an example, he pointed to the video on slope. Specifically, Karim claimed Sal's definition of slope as "rise over run" was a way to calculate slope, but wasn't, itself, a definition of slope. Rather, Karim argued, slope should be defined as "a rate that describes how two variables change in relation to one another." Sal promptly responded, saying Karim was incorrect, and that "slope actually is defined as change in y over change in x (or rise over run)." To bolster his case Sal referenced Wolfram Mathworld, and he encouraged Valerie Strauss to "seek out an impartial math professor" to help settle the debate. We believe that a better way to settle this would be to consult the published work of experts on slope.

Working on her dissertation in the mid-1990s, Sheryl Stump (now the Department Chairperson and a Professor of Mathematical Sciences at Ball State University) did some of the best work to date about how we define and conceive of slope. Stump (1999) found seven ways to interpret slope, including: (1) Geometric ratio, such as "rise over run" on a graph; (2) Algebraic ratio, such as "change in y over change in x"; (3) Physical property, referring to steepness; (4) Functional property, referring to the rate of change between two variables; (5) Parametric coefficient, referring to the "m" in the common equation for a line y=mx+b; (6) Trigonometric, as in the tangent of the angle of inclination; and finally (7) a Calculus conception, as in a derivative.

(CC BY-NC-SA Raymond Johnson)
If you compare Karim and Sal's definitions to Stump's list, you'll likely judge that while both have been correct, neither has been complete. We could stop here and declare this duel a draw, but to do so would foolishly ignore that there is much more to teaching and learning mathematics than knowing what belongs in a textbook glossary. Indeed, research suggests that a robust understanding of slope requires (a) the versatility of knowing all seven interpretations (although only the first five would be appropriate for a beginning algebra student); (b) the flexibility that comes from understanding the logical connections between the interpretations; and (c) the adaptability of knowing which interpretation best applies to a particular problem.

All seven slope interpretations are closely related and together create a cohesive whole. The problem is, it's not immediately obvious why this should be so, especially to a student who is learning about slope. For example, if slope is steepness, then why would we multiply it by x and add the y-intercept to find a y-value (i.e., as in the equation y=mx+b)? And why does "rise over run" give us steepness anyway? Indeed, is "rise over run" even a number? Students with a robust understanding of slope can answer these questions. However, Stump and others have shown that many students -- even those who have memorized definitions and algorithms -- cannot.
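To see numerically why several of these interpretations coincide, here is a quick sketch for a hypothetical line y = 2x + 1 (the line, points, and variable names are mine, chosen only for illustration). The ratio between two points, the coefficient m, the tangent of the inclination angle, and a numerical derivative all land on the same value:

```python
import math

# Hypothetical line for illustration: y = 2x + 1, so m = 2
m, b = 2, 1
f = lambda x: m * x + b

# (1)/(2) Geometric/algebraic ratio: rise over run between two points
x1, x2 = 1, 4
slope_ratio = (f(x2) - f(x1)) / (x2 - x1)

# (6) Trigonometric: tangent of the angle of inclination
angle = math.atan2(f(x2) - f(x1), x2 - x1)
slope_trig = math.tan(angle)

# (7) Calculus: numerical derivative at an arbitrary point
h = 1e-6
slope_deriv = (f(2 + h) - f(2)) / h

# All three agree with the parametric coefficient m = 2
```

Of course, a computation showing that the numbers agree is not the same as a student understanding *why* they agree — which is precisely the robust understanding the research describes.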

(CC BY Amber Rae)
This returns us to Karim's original point: There exists better mathematics education than what we currently find in the Khan Academy. Such an education would teach slope through guided problem solving and be focused on the key concept of rate of change. These practices are recommended by researchers and organizations such as the NCTM, and lend credence to Karim's argument for conceptualizing slope primarily as a rate. However, even within this best practice, there is nuance. For instance, researchers have devoted considerable effort to understanding how students construct the concept of rate of change, and they have found, for example, that certain problem contexts elicit this understanding better than others.

Despite all we know from research, we should not be surprised that there's still no clear "right way" to teach slope. Mathematics is complicated. Teaching and learning are complicated. We should never think there will ever be a "one-size-fits-all" approach. Instead, educators should learn from research and adapt it to fit their own unique situations. When Karim described teachers on Twitter debating "whether slope should always have units," we see the kind of incremental learning and adapting that moves math education forward. These conversations become difficult when Sal declares in his rebuttal video that "it's actually ridiculous to say that slope always requires units*" and pronounces Karim's math "very, very, very wrong." We absolutely believe that being correct (when possible) is important, but we need to focus less on trying to win a mathematical debate and focus more on the kinds of thoughtful, challenging, and nuanced conversations that help educators understand a concept well enough to develop better curriculum and pedagogy for their students.

Khan Academy (CC BY-NC-ND Juan Tan Kwon)
This kind of hard work requires careful consideration and an open conversation, even for a seemingly simple concept like slope. We encourage Sal to foster this conversation and build upon what appears to be a growing effort to make Khan Academy better. Doing so will require more than rebuttal videos that re-focus on algorithms and definitions. It will require more than teachers' snarky critiques of such videos. Let's find and encourage more ways to include people with expertise in the practice and theories of teaching mathematics, including everyone from researchers who devote their lives to understanding the nuance in learning to the "Twitter teachers" from Karim's post who engage this research and put it into practice. This is how good curriculum and pedagogy is developed, and it's the sort of work that we hope to see Sal Khan embrace in the future.



*Sal's point is that if two quantities are both measured in the same units, then the units "cancel" when the quantities are divided to find slope. As an example, he uses the case of vertical and horizontal distance, both measured in meters. The slope then has units of meters/meters, which "cancel". However, the situation is not so cut and dried, and indeed has been considered by math educators before. For example, Judah Schwartz (1988) describes how units of lb/lb might still be a meaningful unit. Our point is not to say that one side is correct. Rather, we believe that the act of engaging in and understanding the debate is what is important, and that such a debate is cut short by declarative statements of "the right answer."

References

Schwartz, J. (1988). Intensive quantity and referent transforming arithmetic operations. In J. Hiebert & M. J. Behr (Eds.), Number concepts and operations in the middle grades (Vol. 2, pp. 41–52). Reston, VA: National Council of Teachers of Mathematics.

Stump, S. L. (1999). Secondary mathematics teachers' knowledge of slope. Mathematics Education Research Journal, 11(2), 124–144. Retrieved from http://www.springerlink.com/index/R422558466765681.pdf

Do Online Gradebooks Compromise Our Teaching?

Aaron Eyler recently raised the question of online gradebooks on his blog. While Aaron's concerns centered more on "what does a grade mean" and the easier-than-ever assumptions we can make by looking at a letter in a gradebook, I've been more concerned about the negative effects online gradebooks might be having on how we teach. I'm all for running an open classroom and I like knowing that parents and students are monitoring progress, but I believe online gradebooks have some negative consequences. For example:

1. Do online gradebooks discourage formative assessments? From my standpoint, once you assign a fixed grade to an assignment, the gradebook sees it as summative. (Even if the teacher doesn't.) Formative assessments are important tools in both assessment and instruction, and often can go on for days without deserving a grade in the gradebook. Unfortunately, we get external pressure to put everything in the gradebook, whether it be from parents or administrators who want to monitor progress or athletic directors needing grades to determine eligibility. Students can also lack motivation if they aren't seeing their grades change as they work.

2. Should online gradebooks show class assignment averages? Suppose you give an assignment to ten students and the scores are 90, 90, 90, 80, 80, 70, 70, 70, 0, and 0. (The use of zeros is another issue, but they were expected in my school if students didn't turn in assignments.) All the students who turned in the assignment passed, and half the class got a B or better (80% = B). But because of the zeros the class average is 64%, which was failing at my school. Sadly, more than once a parent or administrator would contact me and say I had failed to teach the students because "the class got an F on the assignment."
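The arithmetic above is easy to verify, and it shows how differently the same ten scores can be summarized. A quick sketch (the scores are from the example above; which summary a gradebook should display is exactly the question at issue):

```python
from statistics import mean, median

# Ten scores from the example: everyone who submitted passed,
# but two zeros for missing work drag down the class average.
scores = [90, 90, 90, 80, 80, 70, 70, 70, 0, 0]

class_mean = mean(scores)                             # 640/10 = 64, an "F"
mean_of_submitted = mean(s for s in scores if s > 0)  # 640/8 = 80, a "B"
class_median = median(scores)                         # 75, a "C"
```

Three defensible summaries of the same assignment span three letter grades, which is why a bare "class average" column invites exactly the misreading described above.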

3. Online gradebooks (at least the ones I've used) only accept numbers as input, significantly restricting options teachers have for giving feedback. Butler (1987) found that indicating the grade earned on an assignment had negative effects on performance. Given the choice between grades, feedback, and grades plus feedback, going with feedback only can lead to the most improvement, because students will focus on the feedback and use it to improve. With online gradebooks, the grade received on any assignment is a click away, potentially rendering the feedback less useful.

Poor grading practices can have negative effects on both assessment and instruction. I'd be surprised to find a school not using an online gradebook, whether it be popular systems like Infinite Campus and Powerschool, or systems from smaller players like Go.edustar, SME, MyGradeBook, Thinkwave, and Gradeconnect. (A Google search reveals many more!) Each gradebook has its own limitations, but my three concerns above likely will exist in all of them. What are your experiences with online gradebooks? Am I underestimating the positives? Are there negatives that I haven't listed? I'd love to hear your thoughts.

References

Butler, R. (1987). Task-involving and ego-involving properties of evaluation. Journal of Educational Psychology, 79(4), 474–482.

Managing the Flow of Information in Your PLN, and Why You Should Stop Buzzing Your Tweets

While I've blogged since 2001, I resisted social networking tools for a long time, probably the result of seeing my students use MySpace. To me it looked like Geocities, only more judgmental and vain. I finally joined Twitter in the fall of 2008, Facebook in spring of 2009, and last fall I took steps to separate personal and professional networks. Now I use all kinds of social services, and often find myself thinking more about how to use them than actually using them. Hopefully in this blog post all that thinking will pay off.

Just like there's no one way to establish friendships, there's no one way to build a PLN (professional learning network). Here are the primary tools I use, ranked in order of importance:

  1. Twitter (there's no easier way to connect and converse with others in real-time)
  2. Blog (everyone needs a flexible place to express original thoughts and receive feedback)
  3. RSS Reader (for finding and keeping up with other people's content, although some just use Twitter for this purpose)
  4. Social Bookmarking Service (for saving the interesting sites/pages you find)

Like I said, those are my primary tools. There are many more out there, and new ones show up all the time. As we add these new tools to our toolboxes we gain powerful new ways of connecting with our peers. Unfortunately, that power is diluted when the interaction between tools gets complex, as it has with Google Buzz. A number of people in my PLN have tried to integrate Buzz, with mixed success. Buzz has too much promise to ignore, so let's learn from our experience and think about how we're using it.

People will remember the privacy mistakes Google made when they launched Buzz, but I think they made another mistake that has gone unnoticed: instead of importing tweets into Buzz, Buzz should be using Twitter to notify people of new buzzes! Buzz is bigger and more capable than Twitter, and I think importing tweets will prove to be a backwards flow of information. If your blog had the option to import each of your tweets as a new blog post, would you do it? Of course not. You wouldn't use Diigo to bookmark your tweets to share them, either. Twitter asks you, "What's happening?" and while we've used it to do much more, tweets aren't proving to be at all useful in Buzz. We've all used Twitter to notify our PLN about our new blog posts; let's use Twitter the same way to alert our PLN about a new buzz.

Here's my attempt to diagram the flow of content in my PLN:


The model is far from perfect, for sure. You can save and share in both Diigo and Google Reader, but the services are different enough that I use each independently for each interesting site I want to share. (Thus the reason for the dotted line.) The "Share to Reader" bookmarklet gets pages into Buzz, and I would personally be happy if Google bought Diigo and really worked to nicely integrate its capabilities into Buzz and Reader. Also, Google really should sync blog post comments between Buzz and Blogger. We really need a unified commenting system in a lot of places, but there's no excuse for Google to not have this figured out for their own services. Lastly, Twitterfeed shouldn't be necessary, as I think Buzz should have the option to post to Twitter.

So, in summary, remember that 1) there's more than one way to build a PLN; 2) tweet a buzz, but don't buzz a tweet; and 3) expect some bumps along the way. (And try not to stress out about the tools as much as I do!)

It's Up to Educators to Keep the "Open Innovation Portal" Open

Thanks to open source software, such as Linux, Mozilla Firefox, and OpenOffice.org, as well as thousands of smaller projects, I've come to expect more of things when they're described as "open." When I hear the word "open," I expect a community-oriented process where contributions are welcome, application of the product is flexible, and in return for receiving something for free, you promise to keep your contributions free, too. This kind of open process is a beautiful thing and a great model for how well we can innovate.

The US Department of Education's recently launched "Open Innovation Portal" is designed to give students, teachers, administrators, and other educators a place to "contribute ideas, collaborate on solutions, and find partners and resources." I welcome this effort, and I hope to see this become a place where classroom-based educators have the power to shape the education reform conversation. If you follow many educators on Twitter or their blogs, you know good ideas are already out there, and innovation is happening because teachers like getting ideas from other teachers. The Open Innovation Portal can do the same thing on an even larger scale, with the key benefit of adding a policy-oriented audience.

As I write this, I rank #1 in Colorado on the Open Innovation Portal with 600 points. (I haven't submitted any ideas, but I did score points for filling out my profile.) While I hope to contribute the best of my ideas, I don't expect my #1 ranking to last. What's troubling, though, is the national leaderboard, where the #1 position belongs to a CEO of a company selling reading programs. I don't mean to judge the quality of their product - for all I know, it could be the ideal solution for a school looking to improve students' reading skills. But I'm troubled by the immediate influence of a for-profit company posting advertisements for their products as ideas, and I wonder whether the Open Innovation Portal will become little more than a national Craigslist for education.

The Open Innovation Portal will be as open as its members make it. If teachers want to freely share ideas (free as in speech, as well as $$$), then it's our responsibility to participate and take charge of the conversation. There's nothing stopping us. If we don't, I don't see how this will become much more than a glorified, government-sponsored classifieds section. So fellow educators, I challenge you to go back through your blog posts and dig up your very best ideas and start posting them on the Open Innovation Portal. When you do, encourage your PLN to participate in the portal so your ideas are heard, and let's all help keep "open" in the open.

Why Digital Books Won't Spread Like Digital Music

Bud Hunt posed a question tonight asking why schools should consider replacing their paper books with eBook readers. After reading and writing responses, I think I know why you (or your school) should hold on to those dead-tree versions a while longer.

People want, even expect, to access books digitally. Wasn't "all the world's libraries in your computer" one of the early promises of the internet? We will still get there someday, but our vision is being clouded somewhat by our memories and experiences with music. I, like you, used to get my music by going to a store and buying the CD (or tape or record). Now if I want music, I can download it instantly, put it on my mp3 player, and take it everywhere. Now that we have that model to follow, why should digital books be any different?

What we lack in the book world are all those college kids in their dorm rooms, ripping their CDs and posting mp3s to their FTP server or onto P2P networks. The music labels certainly didn't want a digital music market, but it quickly became so easy to get music for free that they had to compete on the digital level or risk their business. iTunes and Amazon's mp3 store exist, in large part, because studio executives knew that selling music for $0.99 a track was better than getting nothing at all. Now digital music is everywhere, both legit and otherwise, and the record labels know that raising their prices would only drive customers to illegal downloading.

I'm a college student. I have books. I have a scanner. But if you think I'm going to sit here and scan my books so I can share digital copies with the rest of the world, you're crazy. I'd love to have PDF versions of all my books, but it's not worth my time or effort to digitize them. I know plenty of people who rip CDs and share their mp3s, but I don't know a single one who "rips and shares" their books. The book publishers aren't working with the same market forces as the music labels, and thus can afford to be conservative and patient with their business.

Our eBook revolution is going to happen, to be sure, but it's not going to look like the mp3 revolution. It will be slower, DRM will be as restrictive as the publishers and device manufacturers can make it, and there's no real pressure driving content prices down. Don't think so? Look what happened when Amazon tried to hold their ground on Macmillan's price hikes. Not that I like to promote any sort of illegal activity, but you might want to wait until someone breaks Amazon's (or somebody else's) DRM and spreads universally-readable versions of those books all over the internet. Until then, I wouldn't worry about missing the eBook revolution.

Research 2.0

Having just finished a semester as a student for the first time since 2002, I can say that advancements in web-based research tools have made research more efficient and enjoyable. I use Google Scholar all the time and, being at a research university, I can find most articles I want online. In the past semester I never once had to go to the library to hunt through a dead-tree journal. Integrating Zotero (for bibliography management) and Diigo (for web highlighting) into Firefox has also made my life easier. (I wish there were a Diigo extension for my brain so I could highlight any information anywhere, regardless of the medium. I think that goes well beyond Web 2.0.)

Research tools have come a long way, but I have some ideas about how much farther they need to go. I feel like most web research tools are still stuck in "Web 1.0," and not taking advantage of current technology. I've been envisioning a system for research publishing, indexing, and connecting that I think is possible to build. If you happen to read this and can do something to make it a reality, by all means steal every idea you can. Academia will thank you.

Let's start by considering the publishers. Journals give value to academia by ensuring the quality of published research and delivering that research to an audience. In exchange, journals take copyright of articles and restrict access in order to sustain their business model. As an open-source guy, I don't particularly like the system, but I can understand it. A "research 2.0" model can't simply do away with publishers; journals must retain their identity and they must still coordinate peer review of articles.

So imagine a wiki-like site where each article is a page. Unlike a wiki, the responsibility for posting each article would lie with a partner journal, and the content would be static. The formatting of each article would be 100% consistent. Every citation in the article would link to the corresponding entry in the article's references, and each reference entry would in turn link to the page for the article it cites.
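To make the linking idea concrete, here's a minimal sketch of that article-as-page data model. All the names and fields below are hypothetical illustrations, not any real journal's system; the point is just that each citation creates a link kept in sync in both directions.

```python
# Hypothetical sketch: each article page tracks both what it cites
# and what cites it, so every reference link is bidirectional.

class Article:
    def __init__(self, title, journal, abstract):
        self.title = title
        self.journal = journal
        self.abstract = abstract
        self.references = []   # articles this one cites
        self.cited_by = []     # articles that cite this one

    def cite(self, other):
        """Record a citation, keeping both directions of the link in sync."""
        self.references.append(other)
        other.cited_by.append(self)

# When a journal posts a new article and registers its references,
# the cited articles automatically "know" who cites them.
old = Article("Foundations of X", "Journal A", "An older result.")
new = Article("Extending X", "Journal B", "Builds on the older result.")
new.cite(old)
assert old.cited_by == [new]
```

The design choice worth noting is that the citing side does all the registration work (the journal posting the article), while the cited side gains the backlink for free.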

To protect the journals, no full articles could be posted for a period of 3-5 years after they're published. The journals would make an entry for the article, including an abstract and the article's reference list (so the links to other articles would be established), but the content of the article would be protected by the publisher, much as it is now. Researchers can't afford to be 3-5 years behind the latest literature, so they would still have to pay for content and keep the journals in business. Journals can't afford to lose readers, so they'd want to be part of this "research 2.0" system.

I like how Google Scholar can figure out how often articles are cited by other articles, although I'm not sure how well it works. In "research 2.0," the consistency of the links would allow for careful and accurate analysis of the relationship between references. Now here's where the 2.0 part really comes in: users could create an account on the site that allows them to mark articles as read, and articles would be "rated" based on how often they've been cited by other highly-rated articles. The site could analyze the articles a person has read, what has been cited in those readings, and recommend other articles to read. Just like a recommendation system at Amazon or Netflix, the system would use the knowledge of what I and others have already read to recommend what I should read next. Heck, I'd even pay to subscribe to such a system, and the journals could all take a cut in exchange for their cooperation.
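The "rated based on citations from highly-rated articles" idea is essentially the PageRank recurrence applied to the citation graph. Here's a rough sketch under the assumption that the graph is just a dict mapping each article ID to the IDs it cites; the specific article names are made up for illustration.

```python
# Hypothetical sketch: score articles iteratively so that a citation
# from a highly-scored article counts more than one from an obscure
# article (a PageRank-style recurrence over the citation graph).

def rate_articles(cites, damping=0.85, iterations=50):
    """Score each article by how often highly-scored articles cite it."""
    articles = set(cites) | {a for refs in cites.values() for a in refs}
    score = {a: 1.0 / len(articles) for a in articles}
    for _ in range(iterations):
        new = {a: (1 - damping) / len(articles) for a in articles}
        for citer, refs in cites.items():
            if refs:
                # A citer splits its weight among everything it cites.
                share = damping * score[citer] / len(refs)
                for cited in refs:
                    new[cited] += share
            else:
                # Articles that cite nothing spread their weight evenly.
                for a in articles:
                    new[a] += damping * score[citer] / len(articles)
        score = new
    return score

# Example: C is cited by both A and B, so it should outrank them.
scores = rate_articles({"A": ["C"], "B": ["C"], "C": []})
assert scores["C"] > scores["A"]
```

Feeding each user's "read" list into the same graph is what would turn these scores into personalized recommendations, much like the Amazon and Netflix systems mentioned above.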

I can't think of a single technological barrier to making this happen, and wouldn't be surprised if engineers on the Google Scholar team have already thought of it. I'm thinking about building a personal wiki to keep track of what I've read and what those readings have linked to, but I'm not sure if entering all that information is worth all the time and trouble it's sure to take.

Beyond the Blacklist

Internet filtering is a hot topic, highlighted by many bloggers recently during Banned Books Week. At my school last year, our tech staff implemented their solution to filtering: a whitelist. (I should be hearing audible gasps from across the internet as you read this.) Instead of blocking potentially harmful sites (which included anything "distracting" at this school), they argued it would save everybody time and trouble to just allow kids to use a hand-picked version of the web, and make it easy for teachers to request sites they wanted added to the whitelist.

To their credit, our tech staff did not make this change lightly or without involving the staff in the discussion. They assured us it would be okay: all *.gov and *.org TLDs would be on the whitelist (I'm not sure they knew that anybody can register a *.org, no different from a *.com), they were going to promote use of the Librarians' Internet Index, and new sites could be added by teacher request. Also, teachers' computers had totally unfiltered access to the web, so we wouldn't be inconvenienced by rules meant for students (which reduces teacher complaints and awareness). For students there would be no Facebook, no YouTube, and no Google. (The "no Google" policy didn't last long: after many complaints, the school provided a custom Google search several weeks into the whitelist's life. It still only searches the whitelist, however.)
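For the curious, the rule as described boils down to a few lines of logic. This is just an illustrative sketch, not the school's actual filter, and the domain names in the allow list are hypothetical stand-ins for teacher requests.

```python
# Hypothetical sketch of the whitelist rule: whole .gov and .org TLDs
# pass, plus a hand-picked set of individual domains added by request.
from urllib.parse import urlparse

ALLOWED_TLDS = {"gov", "org"}
ALLOWED_DOMAINS = {"en.wikipedia.org", "example.com"}  # teacher requests

def is_allowed(url):
    host = urlparse(url).hostname or ""
    if host in ALLOWED_DOMAINS:
        return True
    # Fall back to checking the top-level domain.
    return host.rsplit(".", 1)[-1] in ALLOWED_TLDS

assert is_allowed("https://www.nasa.gov/")
assert not is_allowed("https://www.youtube.com/")
```

Note how blunt the TLD check is: since anyone can register a .org, the rule admits an unbounded slice of the open web while still blocking most of it.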

The whitelist survived the rest of the school year and remains in place. It doesn't help that few staff members are what I would consider web2.0-savvy. (Some of our teachers had to be taught a couple years ago that spreadsheets could be scrolled left and right, not just up and down. Asking some to teach their classes how to edit Wikipedia would be like asking them to take their class on a field trip to the moon.) With predominantly tech-novice teachers, the whitelist will remain, available resources will go underused, and the information gap will grow. Students aren't allowed to use their mobile phones, and the tech staff practically lives in fear that a student will bring a laptop into the building and want internet access. I'd like to believe this school is an exception to the norm, but it isn't. Restricting access to information and technology tools leaves students and teachers searching for work-arounds, as described by Dr. Alec Couros in his post, "Freedom Sticks For The Classroom."

We spend a lot of time trying to ensure our content is relevant to our students, but increasingly it's how we deliver content that is losing relevance. Students understand how technology increases their power and access socially, and they expect technology to increase their power and access educationally, too. Schools need to rethink their policies, as described by Will Richardson in his post, "Don't, Don't, Don't vs. Do, Do, Do." So with less restrictive access, how do we encourage effective and productive uses of technology? We teach. We teach students how to search, how to judge the quality of information, how to avoid distraction, and how to give back to that great body of knowledge that is the internet.

To quote Ira Socol, "let's follow up 'Banned Books Week' with 'Banned Sites Year' - a commitment to replacing filtering with education and intelligent conversation." Get your tech staffs involved, and hope they adopt stances like St. Vrain Valley here in Colorado, as described by Bud Hunt in his post, "Would You Please Block?":
What we’ve decided is that we will no longer use the web filter as a classroom management tool. Blocking one distraction doesn’t solve the problem of students off task – it just encourages them to find another site to distract them. Students off task is not a technology problem – it’s a behavior problem. It is our intention that we help students to learn the appropriate on-task behaviors instead of assuming that we can use filters to manage student use. Rather than blocking sites on an ad hoc basis, we will instead be working with folks to help them through computer and lab management issues in a way that promotes student responsibility. We know that the best filters in a classroom or lab are the people in that lab – both the educational staff monitoring student computer use as well as the students themselves.
The internet is the greatest information resource in the history of the world, by far. Access to that resource is more abundant than ever, and people (including students) expect access. (Most students carry a device with them that gives them access, and we tell them to put those devices away.) If we aren't willing to move beyond the blacklist, we need to seriously reconsider what we believe the purpose of education to be. That's a big debate for another day, but I don't think any of us would agree education should be about the restriction of access.