In 2020, MA: DTCE lecturer Drew Whitworth published “Mapping Information Landscapes”, a book that explores the history of mapping as a means by which we learn to make good judgments about information, and to communicate these understandings to others.
At this time, the world’s health literacy — our ability to make judgments about information concerning our health — is facing a severe test. We need to attend to the facts, not the fiction, about COVID-19.
I doubt any of us suspected that over the last two weeks, since my last #DMIL2020 post about COVID-19’s likely impact on not just our health, but our information, economy and society, things would escalate so quickly.
On Friday 5th October the DTCE students were taken on a ‘mystery field trip’ to The Forbidden Corner, a labyrinth in North Yorkshire. The aim of the day was to get the students working together in groups to gather the information necessary to complete a complex task — namely, mapping this intricate space.
From 5th-9th June 2017 the Information for All Programme (IFAP), part of UNESCO, organised a “Global Expert Meeting” on Multilingualism in Cyberspace in Khanty-Mansiysk, Russia, with the UK representative being Dr Drew Whitworth of the MA: Digital Technologies, Communication and Education team.
The city of Khanty-Mansiysk lies in the Ugra region of western Siberia, at the confluence of the rivers Ob and Irtysh. For its size and relative isolation, it might seem surprising that Khanty-Mansiysk can support a national league ice hockey team, a large conference centre, a brand-new concert hall and a world-renowned chess tournament centre, but the explanation is a simple one — oil revenue. Ugra is the centre of the Russian oil industry and in the marshes and forests of the region may lie the world’s largest remaining oil reserves.
However, at least some of the money is being put to good educational use. The region is predominantly Russian-speaking but there are two living minority languages there with about 50,000 speakers between them, Khanty and Mansi (hence the name of the city). From the name of the region, Ugra, linguists derived the name of the Finno-Ugric group of languages, to which these tongues belong; Khanty and Mansi are therefore related to Finnish, Estonian and Hungarian, all quite different from the Indo-European languages spoken in most of the rest of Europe. The semi-autonomous government of Ugra generously funds initiatives to preserve these languages, with the Ob-Ugric Institute engaged in academic research and practical educational programmes.
Not all minority languages are so well-backed by state funding, and many are disappearing. Many of the various projects reported on in the conference, organised by the UNESCO Information for All Programme (IFAP), were interested in using digital, Internet and mobile technologies to help preserve the linguistic diversity of regions and the world as a whole. Much evidence suggests this diversity is not innately protected in cyberspace.
For example, though there is technically a ‘free market’ in what content is created for the World Wide Web, the distribution of languages thereon does not reflect the diversity of mother tongues across the world. Several delegates did argue that W3C figures on the use of different languages online were based on dubious methods that over-estimated the amount of English content and under-estimated others, particularly Chinese; but whatever methods are used to survey the Web, languages such as English, French, German and Spanish take a greater slice of the total than the number of speakers worldwide would suggest. On the other hand, some languages with many speakers, notably Arabic, are significantly under-represented. The problem here is not just language, but script. The conference heard about the phenomenon of ‘Arabezi’, the transliteration of Arabic into the Roman alphabet; Arabic keypads for mobile phones are a fairly recent introduction.
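The transliteration phenomenon the conference described can be illustrated with a toy sketch. In ‘Arabezi’-style writing, Arabic letters with no close Latin equivalent are often replaced by digits whose shape resembles the letter. The mapping below is my own simplified, illustrative subset, not a standard; real usage varies widely by region and writer.

```python
# Toy illustration of 'Arabezi'-style transliteration (simplified,
# illustrative mapping only -- not a standard or complete scheme).
ARABEZI_MAP = {
    "ع": "3",   # 'ayn: commonly written as the digit 3
    "ح": "7",   # ha: commonly written as the digit 7
    "ء": "2",   # hamza: commonly written as the digit 2
    "ا": "a",
    "ب": "b",
    "ر": "r",
    "م": "m",
}

def to_arabezi(text: str) -> str:
    """Replace each Arabic character with its Latin/digit stand-in,
    leaving characters outside the mapping untouched."""
    return "".join(ARABEZI_MAP.get(ch, ch) for ch in text)

print(to_arabezi("عرب"))  # the consonants of 'Arab' -> "3rb"
```

The point the delegates were making is visible even in this toy: the workaround exists because input devices and systems long assumed the Roman script, not because writers preferred it.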
More recently, spearheaded (in terms of publicly available tools, anyway) by Google Translate, automatic translation has become more widely available. It is an open question whether computers will ever attain the skill of the best human translators, like the two simultaneous Russian/English interpreters who worked throughout the conference with impressive skill. But automated translation has certainly improved in recent years. Yet Google Translate remains English-centred, and does not recognise many minority languages. The conference was told how language preservation cannot succeed from the top down: by their very nature languages grow from local community roots, and the most successful initiatives, while they may use ICTs in various cutting-edge ways (e.g. crowdsourcing), have to be rooted in the local physical and informational landscapes that gave rise to the language itself.
The question of why the world needs this linguistic and cultural diversity in the first place was addressed by Drew Whitworth in his contribution to the final session of the conference. He introduced the concept of xenophilia, the “love of difference”, as not just a moral philosophy but a potential active approach to the design of information systems. Should a perfect information system deliver only what the user requests?
What if, for instance, a future visitor’s search for information about a region like Ugra brought up not only English- or Russian-language content (and thus perspectives), but locally-generated, Khanty-language information, translated into a language that the reader could comprehend while at the same time serving as an introduction to local views on the area and to the indigenous language itself? Difference and diversity are essential to knowledge formation, something central to social theories of learning such as Etienne Wenger’s communities of practice and George Siemens’s connectivism. Knowledge can be formed within communities, but it is only at the boundaries, the zones where one community or group must interact with another, that ‘translation’ takes place, and thus dialogue between the groups. (For a longer version of Drew’s presentation see this Slideshare page.)
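As a thought experiment, the ‘xenophilic’ design principle could be sketched as a re-ranking step that guarantees locally-generated content a place at the top of a result list. Everything here (the names, the scoring scheme, the reservation policy) is my own hypothetical illustration of the idea, not any real system or anything presented at the conference.

```python
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    language: str  # e.g. "en", "ru", or a code for Khanty
    score: float   # relevance score from the underlying search engine
    local: bool    # was the content locally generated?

def xenophilic_rerank(results, reserved_slots=1):
    """Rank results by relevance, but reserve the top `reserved_slots`
    positions for locally-generated items if any exist.
    A purely illustrative policy, not a real ranking algorithm."""
    by_score = sorted(results, key=lambda r: r.score, reverse=True)
    promoted = [r for r in by_score if r.local][:reserved_slots]
    remainder = [r for r in by_score if r not in promoted]
    return promoted + remainder
```

In a conventional ranking the low-scoring local item would never surface; here it is deliberately promoted, delivering not just what the user requested but an encounter with difference.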
In short, linguistic diversity is just as essential to sustainable development as is biodiversity to the health of our broader ecologies. We must learn how to preserve it as an integral part of our work with technologies in all ways, whether ‘educational’ or not.
[This post has been made as the first one on this year’s ‘Social Media Week’ for the Educational Technology and Communication course unit on the MA: DTCE. There will be a series of resources put out over the next few days via a mixture of Twitter, blogs and other social media.]
I have been a fairly active user of social media for some time now: up to 20 years, depending on your definition. I want in this blog post to recount my personal relationship with these different spaces, because I think it will put this week’s teaching into context, as well as providing some historical background.
This is an innately personal task, and almost self-reflective. I am not telling anyone how they ‘should have’ developed with or used social media. If you are reading this (and all my students should be), you are engaging right now with a social media application (SMA) — that is, the blogging tool used to post this (WordPress) — and you will have your own configuration of other SMAs that you have generated down the years. Whatever I have done in the past is irrelevant to the ongoing choices you make as you configure and reconfigure your learning context and shape your own information landscapes. But as I said in the final chapter of Radical Information Literacy, I believe in the value of narratives to teaching, so here is one.
When the first moves were made to construct what we now call the Internet, in the 1960s, the aim was to get computers communicating with other computers, rather than people communicating with other people. But of course even when the first message was sent across the nascent network, between two Californian computers in October 1969, it was still the human operators who had prompted the utterance (the intended word was ‘login’, though the connection crashed after only ‘lo’ had been transmitted). And it was not long before the emergence of the first Internet application specifically designed to carry, and more importantly facilitate and organise, human-to-human communication. This was email.
(Arguably, in fact, email predates the Internet, but it was in the 1970s that the standards and clients (applications) for carrying these messages across the network were developed, making email almost certainly the oldest software still in everyday use.)
As I have recounted in other parts of my teaching materials, I began my personal relationship with computing in the early 1980s through BASIC programming on machines like the Sinclair ZX Spectrum (pictured). I studied for a BTEC (technical qualification) in Computer Studies from 16 to 18 and then began work as a computer programmer, a job I held until 1991. But during all that time I barely used the emerging Internet. On my desk at the insurance company I worked for was a ‘dumb terminal’, able to communicate with the mainframe that sat in another building, but not with the wider world; it was not ‘online’, as we would put it now. I did not have a personal computer, nor felt the need for one. Certainly the idea that we might access a range of even purely text-based information through our telephone system would have seemed outlandish, although at the time there was also Teletext (Ceefax), run through the TV; but this was still one-way information transmission, a broadcast, and of text alone. (The French Minitel system in use at this time was much more interactive, but was confined to that country.)
I did, temporarily, encounter email in 1990 or so. The notion that I could send what was, in effect, an electronic postcard or short letter to someone via my computer was not particularly revolutionary, but it did seem useful, because I was at the time working on projects which required me now and again to communicate with people located on the European mainland. But as I said, it was only a temporary solution to a communication need. When I stopped working with people abroad, and the furthest I had to interact with anyone for work purposes was half a mile down the big hill in Tunbridge Wells, Kent, email stopped being a relevant resource for me. Almost all workplace and social interactions remained face-to-face or by phone, with the occasional printed memo or letter.
Thus, email went away again and didn’t reemerge into my life until several years later by which time I was at the University of Leeds (pictured). In the meantime, Tim Berners-Lee at CERN had invented the World Wide Web, the key platform on which social media would later rest.
Yet the WWW was not the basis for Usenet, the first tool I used that really could be said to be the basis of a virtual community. The term ‘social media’ was not used about Usenet, but nevertheless it was an SMA, and was, in its way, my ‘killer app’, the application which really cemented my use of the Internet more broadly, gave me a reason to keep coming back to it. For those of you who’ve never heard of it, Usenet, more-or-less, was the platform for thousands of ‘bulletin boards’, discussion groups devoted to a whole range of topics including sport, music, crafts, pets and obscure indie bands. There were several groups I subscribed to, but the main ones were a group focused around the region where I lived at the time and another that was local to Leeds Uni.
This all might sound rather unpromising but at that time, namely from about 1997 to 2000, Usenet was experiencing a kind of ‘golden age’. These groups were not just fora for announcements; they were the basis for genuine virtual communities, of the sort eulogised by writers such as Howard Rheingold (in his book The Virtual Community) and Sherry Turkle (Life on the Screen). This was the time in which excited academics wrote about the developing ‘cyberculture’, going so far (Turkle at least) as to claim that it was a genuinely new form of interaction: that because of the anonymity offered by physical separation, people were playing with gender, racial and sexual identities online, and that these were positive developments.
Rheingold drew attention to the use of these virtual spaces as learning tools. He recounted the tale of how a participant in the WELL, the San Francisco-based online community at the heart of his book (pictured), had a family medical problem and sought medical advice, not a free commodity in the USA, from members of the group. Here the requisite knowledge existed within the community and was brought to bear to help the member with his problem. When the illness went into remission the community as a whole celebrated. This kind of episode shows the value of social networks for acquiring information and support.
For a few years, the Usenet groups I was part of did feel like a real community. Offline friendships formed through shared membership of the group, and there were at least three marriages that I know of (all of the couples, I would add, remain together nearly two decades later).
But then the trolls came. Perhaps you have not heard it before, but the term Internet ‘troll’ is applied to those who abuse the communicative freedom offered online through disruptive and anti-social behaviour. This is not a story that I feel like recounting in detail, but suffice it to say that over a period of some years, starting with one disgruntled individual who did not like to see a group devoted to ‘his’ local area being ‘taken over’ by types of people he considered undesirable (students! gay people! the horror…), the space we had grown to like and enjoy became subject to systematic attempts to make it unusable — or at least a far less pleasant space in which to socialise. The posting of abusive and usually anonymous messages and the flooding of the board with spam and junk became regular occurrences.
Usenet was always a self-governing medium, uncommercial and essentially anarchic, in the objective sense of that word: ‘without overarching authority’. In the end this was its downfall. Despite sterling efforts on the part of some members of the community, there seemed no way to stem the tide of trolling. Complaints could be made to internet providers and accounts suspended, but as it was easy enough to post anonymously, most of the really unpleasant material could not be traced back to specific individuals. That no commercial organisation was making money from Usenet helped it flourish in one way, but it also meant there was no one with enough of a vested interest to address the damage caused by trolling. In the end, worn out by years of this, I and others moved on. By the mid-2000s there was a new game in town anyway.
I stand by my claim that I was the second academic at the UoM to get a Facebook account. No, I don’t remember the name of the first one and yes, with hindsight I am aggrieved it wasn’t me. This was in May 2006, while I was visiting the University of Illinois. Facebook was just mentioned to me almost in passing, I was asked whether I had seen the tool. I got an account and then almost never used it until 2008 when friends in the UK started to get on board.
There was then an explosion of use on my part. Particularly from around 2010 until late 2013 I would say I was actively using FB all the time, meaning it was almost constantly open in the background and I would make frequent status updates, several times a day. I have certainly reduced my usage since, both posting and reading it a lot less. I admit to being increasingly wary of its negative side, something becoming more and more apparent, particularly in the last 12 months or so. Further articles to be posted over the next few days will explore this point, and see also the Marcus Gilroy-Ware book Filling the Void, a chapter from which can be found on Blackboard as subsidiary reading. Facebook remains a valuable resource for acquiring information about one of my particular interests (attending obscure football matches… I’ll spare you those details, too) but that’s my main use of it these days.
I have also never used Facebook directly for work. It always seemed worth keeping my personal and work personae separate. I have a rule of not ‘friending’ current students — if you sent me a friend request and I didn’t reply, that’s why; it’s not that I don’t like you. Another SMA I use is Twitter, and that does get used for work-related and academic purposes. My use of it is more sporadic, with bursts of activity linked to specific work events (conferences, this week’s teaching), plus links to interesting and relevant news stories and posts by the couple of hundred people that I follow. I could follow more, but I think this ‘feed-load’ is manageable, and it keeps my feed fairly focused on work-related matters (as opposed to politics, sport and all that cultural stuff). (I assume you’re following me already, since Twitter is the core SMA for this week of social media teaching, but if not, find me at @DrewWhitworth1.)
I know this list risks getting boring, but one final SMA to mention is my use of blogging which, again, is personal rather than professional. I am a compulsive chronicler. I know a lot of people who have this urge, often expressed in highly personal and idiosyncratic ways. I met someone once — still a FB friend and, I know, also a regular blogger — who kept a log book in her car’s glove compartment into which she wrote an entry, by hand, every time she filled the tank with fuel: the date, how much she had put in, and how much it had cost. I could see no reason for this other than the sake of the record-keeping itself. I have my daily photo blog (Being 42) — originally it was going to run for a year, but I am, at the time of posting this, on day 2,613 (over seven years). Then there is the blog about my walks in the Lake District, 153 of them by now, all meticulously chronicled and photographed (like the one depicted above — this is Bassenthwaite Lake, Cumbria).
These are media of artistic self-expression, in that I do try to take the occasional decent photograph, and the fact that I am presenting this work publicly is something that encourages me to keep doing it. I like the fact that I have followers, that I get likes and comments. I follow photography blogs in return, too. In a nicely informal sense this is all very much a learning environment. While I’ll never be a professional-standard photographer, and don’t want to be (investments of too much time and money would be required), I think my images have improved in quality over the years, and the affordances of the blogging environment provide feedback for me as to how I can improve further. This is teaching of such a subtle hue that many would not consider it teaching at all, but I believe it is — we are dealing here with ‘More Able Partners’, leading me through my own personal zone of proximal development (ZPD). (This stuff will come in week 7 of ETC.)
And here, at the end, is the point of this rambling. In my opinion it is pointless to argue about whether social media “should” form part of a learning environment. It clearly already does, not just in my case but for billions of other people. When I climbed Kilimanjaro in 2015 (and faithfully blogged about it) many of the Tanzanian guides who got me and my walking colleagues up there were regular users of Facebook. And I have not even mentioned LinkedIn, Instagram, WhatsApp and many other applications that you are doubtless more familiar with than me, plus workplace management tools like Yammer. But to go on about all of these really would be boring — you get the point by now, I am sure.
I accept the critiques of those who say that we find in social media a kind of ersatz, highly commercialised alternative to ‘real’ community relations and genuine friendships. But it would be insulting to my intelligence, and to that of the many friends and acquaintances who I know are also clever, agreeable people, to say that every time we log on and use an SMA we are somehow fooling ourselves, that we are dupes of systems set up to extract capital from our interactions. Of course these things are happening. But new literacies are developing around SMAs — and have been for twenty or more years now — and these literacies can and must incorporate critical attention to what is happening to the information we consume and produce, where it has come from and who is reading it.
But so be it. SMAs are here to stay in some form or another and we as educators can make conscious, critically informed choices about when to incorporate them in our learning environments and when not to. There will be more to come on that in the remaining posts this week.