
doublemirror

attending to the shadow of living and learning on the web

Author

@mdvfunes

Mariana Funes is a published author, chartered psychologist, contemplation activist and learning technologist.

No! You should not do DS106

“I found a village of humans from many parts of our planet. How strange is that? I found a small village that seemed caring. Not all of them, to be sure. Some were more distant than others, some more polite, some more fearful. But I found them to be, in the rectangle before me on my desk in Swaffham, humans who shared some similar purpose. And I was part of it. The villagers would help me if I needed assistance. Not just the “professor” Jim Groom, but the students themselves offered assistance. Just like my town of Swaffham. There was caring and there was camaraderie. And chaos, but that is another story for someone else to tell.” – Dr Oblivion, who retired, played Sudoku and lived peacefully in Swaffham before his mysterious death.

or is it?

I have been trying to find a cogent critique of the University of Mary Washington’s digital storytelling course, DS106, and I have not managed to find anything that addresses my need to evaluate this course for the Masters in Online and Distance Education I am doing at the Open University (OU) in the United Kingdom.

I have to admit a bias upfront – I love DS106, and I am putting my Masters on hold in order to join the next run of DS106, charmingly (?) called ‘Headless 13’. I also believe in the ethos demonstrated by the yearly questions set by The Edge – particularly the 2008 one, ‘What have you changed your mind about recently and why?’ I set up this blog to challenge my own thinking beyond those things that seem to me to be self-evident and obvious.

So, I set myself the task of looking beyond the self-evident truth that DS106 is the best thing to have happened to higher education generally, and to open education specifically, since 1373, when ‘the people of Florence petitioned the Signoria of Florence to provide public lectures of Dante’s work, resulting in a year’s course where a lecturer, paid 100 gold florins, spoke every day except holy days’ (Peter and Deimann, 2013).

One thing we can say about DS106 uncontroversially is that it is nodal online learning, a hashtag classroom, and that a syndication engine plus the web are the bare bones of any DS106-like course. Once we start to talk about the kind of nodes and links, or the directionality of the links, we get into more difficult terrain. It is not my intention here to repeat analysis already available elsewhere. Instead, I want to offer an outsider’s view of DS106. I am utterly uninterested in whether DS106 is a cMOOC but not an xMOOC – although it may be. I daydream when reading arguments about whether it can be put in a box and whether this is desirable – although it may be. As an outsider coming in, I see that it succeeds at getting a level of participation and commitment many can only dream about. It is easy to join in without questioning its limits and boundaries. But then, I was not put on this earth to join anything unquestioningly, and I have an assignment to complete, damn it!

Mind-shifting on assessment

Assessment is the process of measuring a person’s knowledge or skills. It’s not a science; it doesn’t prove anything, but passing a test or completing a practical task implies a certain level of competency. A special type of assessment (called formative assessment) is used to aid the learning process (this is called ‘assessment for learning’).

Bobby Elliot, 2003.

Assessment 2.0

This table has changed my life! Well, maybe not quite, but it has changed my perspective on assessment. My environment is higher education – private higher education. This has its own issues when assessing students’ work. I teach on a Masters in People and Organisational Development at a business school in the United Kingdom. Sometimes students come from client organisations, sometimes they become clients after the course, and sometimes they come to work for us as consultants. Transparency in assessment is important here, as are layers of peer review and checking standards of assessment across the faculty. How do we know that standards are comparable across the faculty? We implement a rigorous assessment process that is defined each time we start working with a new cohort – self-managed learning works within a framework that Eley (1993) defines as collaborative assessment. Her focus is on how power is used in the process – ‘power together’ as compared to ‘power over’ the student or the educational establishment:

This model of power assumes that student, peers and staff work together to secure a common view of assessment and its outcomes, based on hearing and understanding different perspectives, and seeking to secure agreement which values all perspectives. This model is essentially collaborative, dependent on reaching consensus.

There are delightful and tangled issues that arise from this assessment model, not least the point of friction between traditional university assessment methods and how we assess students at my business school. Until today I had always seen collaborative assessment as a cross I had to bear for working at a non-traditional university. Now I see that I have been working at the leading edge of assessment for many years and have never stopped to critically reflect on the heutagogy that is implied in self managed learning as an approach to education.

[image: Elliot’s Assessment 2.0 table]

What Elliot (2008) crystallised for me in his table above is that in traditional education we shoehorn evidence for learning into a shape fixed by the educational establishment in order for it to award its accreditations. This is contrasted with what the literature defines as ‘assessment for learning’:

“the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there” (ARG, 2002).

This definition obscures the core question: who determines what counts as evidence of arriving ‘there’? In my world of work this is a collaborative process, revisited afresh at the start of each cohort. Most of the literature I have read on this focuses on schools and how assessment needs to be redefined to support pupils in learning rather than exercise power over them through assessment of learning. This is not my area of work, so I will say no more than that I understand there is a lot of prescriptive work to be done to help teachers and government education departments understand this distinction. Self-managed learning is being used in schools by some pioneers, but I know little about how successful these initiatives have been. I count myself lucky that my focus on assessment is in dealing with the downsides of being at the other extreme of the assessment for learning continuum: when I started to read about assessment for learning from my ivory tower, I just asked, is there any other kind?

I want to focus the rest of this post on the content of Elliot’s table above. A key insight is summed up by Al-Rousi (2013) and focuses on the type of evidence that supports learning:

Elliott [is] focused on the use of digital evidence […] naturally occurring, [i.e.] already existing […] not created solely for assessment purposes, [manifested] through multimedia, [and] distributed across different sources.

Elliot is indeed saying that instead of shoehorning evidence we might choose to purposefully build the use of naturally occurring evidence into our assessment process. He is further saying that we need to use web 2.0 tools to develop an e-assessment strategy, and this he calls Assessment 2.0. Self-managed learning works with naturally occurring evidence, but has no e-assessment strategy embedded in its approach. Collaborative assessment in self-managed learning ensures that the evidence to be sought is outlined in the learning contract upfront, and this can be any type of evidence that supports the learning outcomes being defined. We encourage students to use wide sources of evidence such as video files, audio files, essays, reports, flowcharts, lesson plans, storytelling, painting, spreadsheets and self-assessment statements.

What we are not doing enough of is looking at the use of digital evidence to support learning in an embedded way. If we define e-assessment as anything that involves digital media, then we have been doing it for years – Word documents are submitted, we add our formative feedback via Track Changes, use spreadsheets to tabulate data, create research reports, and so on. This is not, in my view, what Elliot intends to suggest when he talks about Assessment 2.0. He quotes Downes’s (2006) notion of a personal learning environment and posits the need for a Personal Assessment Environment (PAE): students use the type of web 2.0 tools exemplified in his table to critically reflect on what it means to provide evidence for learning, set it up before getting on with the business of learning, harvest it for insights regularly, and then order it in a meaningful way to demonstrate achievement of a given standard – which in my domain is Masters standard. This notion is game-changing for me. It implies that digital literacy would no longer be a choice for our faculty or our students, and that an e-assessment strategy would have to be agreed and implemented to support students in a way that goes beyond just allowing them to present their evidence as e-portfolios (which, by the way, we still do not allow for administrative reasons: students have to print two copies of their portfolio on paper and hand them in by a specific date…).

Whilst self-managed learning meets most of Elliot’s characteristics of assessment for learning in terms of its principles of operation (e.g. being collaborative, peer and self assessed), it falls down when assessed against his ‘tool supported’ characteristic. Some may argue that what matters most is that the principles are adhered to when developing effective assessment strategies in any educational domain, and that the tools used are a secondary consideration. I disagree with this assertion. Our thinking is shaped by the tools we use. Writing a blog is not the same as writing a letter or an essay. Assessment for learning in a self-managed learning Masters means supporting students in the creation of a PAE to gather tangible evidence through their 18-month learning journey. Our challenge in this is almost as hard as that of other educational sectors when shifting from one preposition to the other. We already assess students for learning, but how might we design Assessment 2.0 into our work?


A modest experiment I am carrying out is the use of Pinterest to support evidence gathering and dialogue for one of my students’ learning contracts. The board’s theme only makes sense in relation to the student’s learning contract; we agreed to keep comments general enough for the board to be public but specific enough that the student could track her learning journey when it came to writing up. We also agreed that the board would be part of the evidence used to support her self-assessment statement on achievement of learning goals for the Masters. However, whilst we are using it for formative feedback, I fear that, at best, screenshots of the board will be all that makes it to the final portfolio.

My understanding of what Elliot proposes with Assessment 2.0 is that we need to incorporate the distributed nature of digital evidence (amongst other characteristics he discusses) into the way we assess students rather than students having to shape their evidence into a fixed format limited by low digital literacy in certain sectors of the educational establishment. In embedding these tools into the formative stages of learning we would be enhancing the quality of their thinking and preparing them to develop a digital identity that can support them in their future career goals – given the ever increasing need to learn to function effectively online for most professionals.

Al-Rousi, S. (2013) ‘Does Web 2.0 = Assessment 2.0?’ http://learn.open.ac.uk/mod/oublog/view.php?user=1124720

Assessment Reform Group (2002) Assessment for Learning: 10 Principles. http://assessmentreformgroup.files.wordpress.com/2012/01/10principles_english.pdf

Eley, G. (1993) ‘Reviewing Self-Managed Learning Assessment.’ http://www.heacademy.ac.uk/assets/documents/resources/heca/heca_lm09.pdf

Elliot, B. (2008) ‘Assessment 2.0.’ http://www.scribd.com/doc/461041/Assessment-20

Can digital storytelling provide an effective educational tool?

Is this a story?

I thought so, and I tried to tell my own story using this medium. This was my first humble attempt at telling a story using kinetic typography; I created it to illustrate the shaky ground of designing technology for learning.

What about this? Is this a story?

[image]

or this?

[image]

The original six-word story, which inspired Twitter six-word stories.

In answering the question in the title, I started challenging our conventional wisdom about what a story is.

Continue reading “Can digital storytelling provide an effective educational tool?”

Why am I MOOCing?



Continue reading “Why am I MOOCing?”

Purposeful Tinkerings

‘[Web 2.0 tools offer] a new user-centric information infrastructure that emphasizes participation […] over presentation, that encourages focused conversation […] rather than traditional publication, and that facilitates innovative explorations, experimentations, and purposeful tinkerings that often form the basis of a situated understanding emerging from action, not passivity.’ – John Seely Brown

In this post I begin to unpack what innovation means to me in the context of learning; I need to develop a personal working definition of innovation for my Online Education course. I have spent a lifetime teaching people about the creative process and looking at innovation as the output of the psychology of creativity. From a traditional academic perspective I could write, and have written, a great deal about creativity and innovation. The theories, the practice and the application have accompanied me throughout my career. This latest assignment asks for a working definition that will enable me to evaluate whether a particular technological achievement can be called an innovation. It asks less for a theoretical endeavour and more for an exploration situated in my context of operation.

Continue reading “Purposeful Tinkerings”
