Making universities obsolete

By Matt Welsh, an engineer at Google

This post looks at traditional higher education alongside recent commercial launches such as Udacity and Khan Academy (both Google-linked offerings), and asks how technology can increase access to education and whether ‘online’ education is a substitute for a real university as perceived by employers under the current certification system. Welsh takes video lectures as an example, noting the advantage of being able to replay them as many times as you need, but questions whether this amounts to deep learning.


A personal cyberinfrastructure

The text of this article is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License (http://creativecommons.org/licenses/by-nc-nd/3.0/).

EDUCAUSE Review, vol. 44, no. 5 (September/October 2009): 58–59


Gardner Campbell

Gardner Campbell (Gardner_Campbell@Baylor.edu) is Director of the Academy for Teaching and Learning and an Associate Professor of Literature and Media at Baylor University.


Cyberinfrastructure is something more specific than the network itself, but it is something more general than a tool or a resource developed for a particular project, a range of projects, or, even more broadly, for a particular discipline.

— American Council of Learned Societies, Our Cultural Commonwealth, 2006

Sometimes progress is linear. Sometimes progress is exponential: according to the durable Moore’s Law, for example, computing power doubles about every two years. Sometimes, however, progress means looping back to earlier ideas whose vitality and importance were unrecognized or underexplored at the time, and bringing those ideas back into play in a new context. This is the type of progress needed in higher education today, as students, faculty, and staff inhabit and co-create their online lives.
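
To make the exponential case concrete, here is a worked one-liner (my illustration, not the essay's): doubling every two years compounds to a factor of about thirty per decade.

$$P(t) = P(0)\,2^{t/2} \qquad\Rightarrow\qquad \frac{P(10)}{P(0)} = 2^{10/2} = 2^{5} = 32$$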

The early days of the web in higher education involved workshops on basic HTML, presentations on course web pages, and seed money in the form of grants and equipment to help faculty, staff, and occasionally even students to generate and manage content in those strange “public.html” folders that suddenly appeared on newly connected desktops. These days were exciting, but they were also difficult. Only a few faculty had the curiosity or stamina to brave this new world. Staff time was largely occupied by keeping the system up and running. And few people understood how to bring students into this world, aside from assigning them e-mail addresses during orientation.

Then an answer seemed to appear: template-driven, plug-and-play, turnkey web applications that would empower all faculty, even the most mulish Luddites, to “put their courses online.” Staff could manage everything centrally, with great economies of scale and a lot more uptime. Students would have the convenience of one-stop, single-sign-on activities, from registering for classes to participating in online discussion to seeing grades mere seconds after they were posted. This answer seemed to be the way forward into a world of easy-to-use affordances that would empower faculty, staff, and students without their having to learn the dreaded alphabet soup of HTML, FTP, and CSS. As far as faculty were concerned, the only letters they needed to know were L-M-S. Best of all, faculty could bring students into these environments without fear that they would be embarrassed by their lack of skill or challenged by students’ unfamiliar innovations.

But that wasn’t progress. It was a mere “digital facelift” — Clay Shirky’s phrase for the strategies that newspapers pursued in the 1990s when they couldn’t “think the unthinkable” and see that their entire world was about to change.1 Higher education, which should be in the business of thinking the unthinkable, stood in line and bought its own version of the digital facelift. At the turn of the century, higher education looked in the mirror and, seeing its portals, its easy-to-use LMSs, and its “digital campuses,” admired itself as sleek, youthful, attractive. But the mirror lied.

Then the web changed again: Google, Blogger, Wikipedia, YouTube, Facebook, Twitter. The medium is the message. Higher education almost completely ignored Marshall McLuhan’s central insight: new modes of communication change what can be imagined and expressed. “Any technology gradually creates a totally new human environment. Environments are not passive wrappings but active processes. . . . The ‘message’ of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.”2 Print is not advanced calligraphy. The web is not a more sophisticated telegraph. Yet higher education largely failed to empower the strong and effective imaginations that students need for creative citizenship in this new medium. The “progress” that higher education achieved with massive turnkey online systems, especially with the LMS, actually moved in the opposite direction. The “digital facelift” helped higher education deny both the needs and the opportunities emerging with this new medium.

So, how might colleges and universities shape curricula to support and inspire the imaginations that students need? Here’s one idea. Suppose that when students matriculate, they are assigned their own web servers — not 1GB folders in the institution’s web space but honest-to-goodness virtualized web servers of the kind available for $7.99 a month from a variety of hosting services, with built-in affordances ranging from database maintenance to web analytics. As part of the first-year orientation, each student would pick a domain name. Over the course of the first year, in a set of lab seminars facilitated by instructional technologists, librarians, and faculty advisors from across the curriculum, students would build out their digital presences in an environment made of the medium of the web itself. They would experiment with server management tools via graphical user interfaces such as cPanel or other commodity equivalents. They would install scripts with one-click installers such as SimpleScripts. They would play with wikis and blogs; they would tinker and begin to assemble a platform to support their publishing, their archiving, their importing and exporting, their internal and external information connections. They would become, in myriad small but important ways, system administrators for their own digital lives.3 In short, students would build a personal cyberinfrastructure, one they would continue to modify and extend throughout their college career — and beyond.
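
A brief aside to make the proposal concrete: the sketch below (in Python) imagines what an automated first-year provisioning step might look like. The hosting API here (`HostingClient`, `create_server`, `install_app`) is entirely hypothetical, an assumption for illustration rather than any real provider's interface.

```python
# Hypothetical sketch: provisioning a student's personal cyberinfrastructure.
# The HostingClient API is invented for illustration; a real commodity host
# (the kind rentable for a few dollars a month) would expose its own interface.

from dataclasses import dataclass, field

@dataclass
class StudentServer:
    owner: str
    domain: str
    apps: list = field(default_factory=list)

class HostingClient:
    """Stand-in for a commodity hosting provider's control-panel API."""

    def create_server(self, owner: str, domain: str) -> StudentServer:
        # A real call would allocate a virtualized web server and register DNS.
        return StudentServer(owner=owner, domain=domain)

    def install_app(self, server: StudentServer, app: str) -> None:
        # A real call would run a one-click installer (blog, wiki, analytics).
        server.apps.append(app)

def provision_first_year(client: HostingClient, owner: str, domain: str) -> StudentServer:
    """Give a matriculating student a server, a domain, and starter tools."""
    server = client.create_server(owner, domain)
    for app in ("blog", "wiki", "web-analytics"):
        client.install_app(server, app)
    return server

if __name__ == "__main__":
    client = HostingClient()
    server = provision_first_year(client, "jane.doe", "janedoe.example.org")
    print(f"{server.owner} now administers {server.domain} running {server.apps}")
```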

In building that personal cyberinfrastructure, students not only would acquire crucial technical skills for their digital lives but also would engage in work that provides richly teachable moments ranging from multimodal writing to information science, knowledge management, bibliographic instruction, and social networking. Fascinating and important innovations would emerge as students are able to shape their own cognition, learning, expression, and reflection in a digital age, in a digital medium. Students would frame, curate, share, and direct their own “engagement streams” throughout the learning environment.4 Like Doug Engelbart’s bootstrappers in the Augmentation Research Center, these students would study the design and function of their digital environments, share their findings, and develop the tools for even richer and more effective metacognition, all within a medium that provides the most flexible and extensible environment for creativity and expression that human beings have ever built.

Just as the real computing revolution didn’t happen until the computer became truly personal, the real IT revolution in teaching and learning won’t happen until each student builds a personal cyberinfrastructure that is as thoughtfully, rigorously, and expressively composed as an excellent essay or an ingenious experiment. This vision goes beyond the “personal learning environment”5 in that it asks students to think about the web at the level of the server, with the tools and affordances that such an environment prompts and provides.

Pointing students to data buckets and conduits we’ve already made for them won’t do. Templates and training wheels may be necessary for a while, but by the time students get to college, those aids all too regularly turn into hindrances. For students who have relied on these aids, the freedom to explore and create is the last thing on their minds, so deeply has it been discouraged. Many students simply want to know what their professors want and how to give that to them. But if what the professor truly wants is for students to discover and craft their own desires and dreams, a personal cyberinfrastructure provides the opportunity. To get there, students must be effective architects, narrators, curators, and inhabitants of their own digital lives.6 Students with this kind of digital fluency will be well-prepared for creative and responsible leadership in the post-Gutenberg age. Without such fluency, students cannot compete economically or intellectually, and the astonishing promise of the digital medium will never be fully realized.

To provide students the guidance they need to reach these goals, faculty and staff must be willing to lead by example — to demonstrate and discuss, as fellow learners, how they have created and connected their own personal cyberinfrastructures. Like the students, faculty and staff must awaken their own self-efficacy within the myriad creative possibilities that emerge from the new web. These personal cyberinfrastructures will be visible, fractal-like, in the institutional cyberinfrastructures, and the network effects that arise recursively within that relationship will allow new learning and new connections to emerge as a natural part of individual and collaborative efforts.

To build a cyberinfrastructure that scales without stifling innovation, that is self-supporting without being isolated or fatally idiosyncratic, we must start with the individual learners. Those of us who work with students must guide them to build their own personal cyberinfrastructures, to embark on their own web odysseys. And yes, we must be ready to receive their guidance as well.

The author’s reading of “A Personal Cyberinfrastructure” is available as a podcast on his blog, Gardner Writes (http://www.gardnercampbell.net/blog1).

Notes

My heartfelt thanks go to the University of Mary Washington’s Division of Teaching and Learning Technologies and its many friends for their help in dreaming, articulating, and sustaining these dreams.

  1. Clay Shirky, “Newspapers and Thinking the Unthinkable,” March 13, 2009, <http://www.shirky.com/weblog/2009/03/newspapers-and-thinking-the-unthinkable/>.
  2. Marshall McLuhan, Understanding Media: The Extensions of Man, 5th printing (New York: McGraw Hill, 1964), pp. vi, 8.
  3. Jim Groom has outlined several key parts of this vision: “A Domain of One’s Own,” bavatuesdays, November 29, 2008, <http://bavatuesdays.com/a-domain-of-ones-own/>.
  4. W. Gardner Campbell and Robert F. German Jr., “The Map Is the Territory: Course ‘Engagement Streams’ as Catalysts for Deep Learning,” EDUCAUSE Learning Initiative (ELI) Annual Meeting, January 21, 2009, podcast at <http://www.gardnercampbell.net/blog1/?p=746>.
  5. EDUCAUSE Learning Initiative (ELI), 7 Things You Should Know about Personal Learning Environments, May 12, 2009, <http://www.educause.edu/Resources/7ThingsYouShouldKnowAboutPerso/171521>.
  6. Recent research suggests that overly templated approaches to e-portfolios and other online learning environments may actually decrease integrative learning and metacognitive capacities. See Kathleen Blake Yancey, “Electronic Portfolios a Decade into the Twenty-first Century: What We Know, What We Need to Know,” Peer Review, vol. 11, no. 1 (Winter 2009), pp. 28–32.

Getting to know you: Introducing Jonas Bäckelin

Introducing Jonas Bäckelin, contributed by Liz Renshaw:

1. Can you tell us a bit about yourself, Jonas?

My name is Jonas Bäckelin and I live in Balchik, on the Black Sea coast of Bulgaria. My professional career started with qualifications in environmental chemistry and marine biology, followed by work as a teacher specializing in didactics and ‘Information and Communication Technology’ (ICT). I’m now focusing on my thesis for my Master of Arts and Social Science in ‘Adult Learning and Global Change’ (ALGC), with the working title “Navigating Distributed Knowledge with the Use of Web Tools”. My commitment to a new level of teacher-training curriculum has involved me in developing coherent strategies to fully integrate computers as pedagogical tools in the classroom.

In 2012 I started eduToolkit, a grassroots organization promoting ‘Teachers Open Online Learning’ (TOOL) for professional development. We investigate the concept of ‘The Networked Teacher’ and explore ‘Networked Literacy & Fluency’ in education. I’m developing our first course, “Certified Networked Teacher – The Use of Web Tools”, with the help of WikiEducator, and we will use assessment badges through Peer-2-Peer University (P2PU).

2. Why did you decide to participate in Change11?

A: My fellow ALGC students from Canada introduced me to the Massive Open Online Course (MOOC) “CCK08 – Connectivism and Connective Knowledge”, but it took until the third offering, CCK11, facilitated by Stephen Downes, George Siemens and Dave Cormier, before I participated as a non-credit student. I got bitten by the MOOC bug, completed the eduMOOC and enrolled as a ‘Network Mentor’ in Alec Couros’s course “EC&I831 – Social Media and Open Education”. Continuing with the MOOC ‘Change – Education, Learning and Technology’ in September was only natural for an ‘early adopter’.

3. What have been a couple of highlights so far in the MOOC?

A: We are moving several frontiers simultaneously, and I’m starting to realize that a single teacher can’t cope with the scope of change in education. Some of the highlights have been Mobile Learning (Zoraini Wati Abas), Collective Learning (Allison Littlejohn), Rhizomatic Learning (Dave Cormier), Slow Learning (Clark Quinn) and Authentic Learning (Jan Herrington). The general trend is that fragmented and distributed knowledge can be managed through teaching, but we need online resources and tools.

4. How do you deal with the abundance of information in the MOOC?

A: I try to pay attention to outlines or key distinctions in order to create my own learning outcomes. When listening to recordings or reading blog posts and articles, I use the traditional tools, pen and paper, to create a concept map. During CCK11 I created a workflow where I summarized my progress weekly as Insights, Thoughts and Questions. This model has proven useful for monthly updates in the Change MOOC. With the help of examples and blog posts from other participants, I like to make comparisons and find relationships – remix and mash-up.

5. How do you go about building and sustaining your Personal Learning Network?

A: My struggle involves finding the balance between practice and reflection (i.e. blogging) and modelling and demonstrating (i.e. facilitating learning), and my main focus is on how I will become a node that creates learning resources for teachers’ open online learning. The connections with experts in the ‘knowledge domain’ have grown into my ‘Personal Learning Network’, but the self-generating and sustainable networks come from expectations and feedback among peers and friends.

Lurking or Legitimate Peripheral Participation

By Christy Tucker, licensed under Creative Commons Attribution 3.0

During the July 7 early #lrnchat about social media and social learning, there was a lot of discussion about lurking.

In response to the question “What are some ways you learn through social media that aren’t collaborative, with other people per se?”

I replied:

I do a fair amount of lurking (ie “legitimate peripheral participation”)

I also retweeted this message by Colby Fordham:

We all like sharers, but there is a value in lurking. [You] have to [learn] the rules and important topics.

and Jane Bozarth replied

…and then stop lurking

Often, lurking is just a temporary phase, and you do jump in afterwards. But is that always necessary? I have lots of online communities where I sit on the periphery and lurk, long past the initial phase of learning how the community works.

A few examples:

  • YouTube: Most of the time on YouTube, I’m just watching. I’m not creating my own videos, commenting, sharing, or bookmarking. I have a few videos, but I’m lurking at least 90% of the time.
  • Kongregate: Technically, I am not a lurker on this gaming site by the strictest definition, since I do rate games. I sometimes read through the forums and chat, but rarely jump into the conversation.
  • News: I don’t get a newspaper in “dead tree” format; I get most of my news online. I read several newspapers and blogs, all of which have commenting or community features. Most of the time I don’t even read the user discussions, and I never add my own comments.
  • Slashdot: I skim the RSS feed, but I don’t have an account and have never commented.
  • Wikipedia: At one point, I contributed quite a bit (2500+ edits), but it’s been over a year since I’ve been active.

I learn on all those sites. (Yes, even Kongregate: I learn game strategies on the forums. What I learn is of limited use in the rest of my life, but it’s useful for my goals when I’m on that site.) I’ll be honest; I’m not really interested in getting sucked into the high drama conversations on most of those sites. Wikipedia, for example, can be pretty intense and nasty. It’s the only place online I’ve actually been directly threatened (although there was no actual danger, it was still disconcerting). If I’m going to be part of conversations, I’d rather they be part of the learning community, or at least more productive than many of the conversations at the sites above.

Would I be a better gamer if I were active in the Kongregate forums? Most likely. But I’m not looking for a high level of expertise in gaming. So why should I expend my energy there, when peripheral participation gets me enough expertise to meet my personal goals?

In the #lrnchat conversation, Jane called this behavior “taking,” and she’s right—I’m reading and taking advantage of the resources without giving back. I give back here, but I don’t give back in every community that I use. My giving is very uneven, and sometimes I just lurk.

Is it wrong to lurk, or is it appropriate to have different levels of participation in different online communities? Should we exclude anyone from reading the RSS feeds of our blogs if they aren’t commenting, bookmarking, +1-ing, etc.?

In Digital Habitats, Etienne Wenger, Nancy White, and John D. Smith call lurking “legitimate peripheral participation”:

From a community of practice perspective, lurking is interpreted as “legitimate peripheral participation,” a crucial process by which communities offer learning opportunities to those on the periphery. Rather than a simple distinction between active and passive members, this perspective draws attention to the richness of the periphery and the learning enabled (or not) by it. (p. 9)

Do the people active in a community learn more than those on the edges? Yes, I do believe that. But if your goal isn’t to be an expert, peripheral participation may give you enough learning to meet your needs. You can learn via social media without it actually being social learning.

What do you think? Are there communities where you are in the center of the action, but others where you’re on the periphery? Is there a place for lurking in learning communities, or should everyone be an active participant? If we’re designing learning with social media, can we focus just on social learning, or can we also support use of social media for peripheral participation?

Image credit:

Can I play? by jaxxon

It doesn’t matter what we cover, it matters what you discover, Noam Chomsky [#lwf12]

By Oliver Quinlan


Learner choice, or indoctrination? A system where learners choose what they want to learn, or a system to induct them into society’s existing structures. What is the purpose of education?

Chomsky contrasted two possible answers to this question: the creative enlightenment ideal of education, and education as a means of controlling society. There are powerful structures in society that would prefer people to conform and not try to shake systems of power and authority; he said we need to take a stand between these two.

The growth of new technologies has caused a major change in the nature of culture and society, but then it has done so for centuries. The shift from telephone to email is significant, but it doesn’t begin to compare with the difference between a sailing vessel and a telegraph. What about the impact of widespread plumbing? Chomsky said we should recognise that more dramatic changes have occurred before.

“Technology is basically neutral; it is kind of like a hammer. The hammer doesn’t care whether you use it to build a house or to crush someone’s skull.”

On the impact of the internet, he said it was a valuable tool, but we need frameworks to be able to work with it effectively, and we must be willing to adapt those frameworks as we go. He said you cannot follow a meaningful line of enquiry without some kind of framework within which to work, even if you are constantly adapting it. A person will not become a biologist simply from being given access to a library. Without such frameworks we are just picking out random facts that don’t mean anything.

Unless we have some ‘well constructed conceptual apparatus’ behind these technologies, they are very unlikely to be helpful and are likely to be harmful. Chomsky called for us to cultivate the capacity to ask questions, and to seek out what is new within a focused framework.

Posing the question of the purpose of education in terms of human capital, he said, is a distorting way of looking at things. Do we want a nation of creative individuals, or one of people who will simply increase GDP? Perhaps we need to start questioning the values we measure education by, and not think simply in terms of economic value.

Progress works by testing things out, and then those things that work are more widely adopted. What we need to be focusing on, he said, is encouraging young people who are free thinkers, and are willing to do this trying out. We should cultivate the capacity to seek what is significant and always be willing to question.

“It doesn’t matter what we cover, it matters what you discover.”

If You’re Human, You’re a Slow Learner #change11

By Andrew Neuendorf

Sometimes the Web can make a beautiful, serendipitous nexus. Whilst pursuing two seemingly separate lines of thought in two seemingly separate universes (integral philosophy on Beams and Struts and education theory on the Change MOOC) I discovered a connection that makes me a little less schizophrenic and a little more dialectic.

Here’s my little self-absorbed tale of discovery: Jeremy Johnson commented on my Beams and Struts article (“The Singularity is Near-Sighted”) and recommended William Irwin Thompson’s wonderfully-titled  “The Borg or Borges?” Here Thompson revisits one of his key concepts from Coming Into Being, that consciousness is a delay-space where different inputs from the senses are cross-referenced and their interactions stabilized, giving rise to a unique emergent self-awareness. Time is sort of slowed-down so that some of its components can get to know each other, exchange echoes, and establish a perspective.

In other words, human consciousness is the result of slowing down.

As Thompson so eloquently puts it:

Fast is fine for the programmed crystalline world of no surprises and no discoveries, but slow is better for the creative world of erotic and intellectual play.

This fits nicely with Clark Quinn’s Week 13 presentation on Slow Learning. Quinn writes in his opening blog post:

Really, I’m looking to start matching our technology more closely to our brains. Taking a page from the slow movement (e.g. slow X, where X = food, sex, travel, …), I’m talking about slow learning, where we start distributing our learning in ways that match the ways in which our brains work: meaningfulness, activation and reactivation, not separate but wrapped around our lives, etc.
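
One concrete way to read "activation and reactivation" is spaced repetition: revisiting material at growing intervals rather than all at once. The Python sketch below is a minimal illustration of that reading under my own assumptions (the interval-doubling rule is mine), not Quinn's actual method.

```python
# Minimal sketch of spaced "activation and reactivation": each successful
# review doubles the gap before the next one, distributing learning over
# time instead of massing it. The doubling rule is an illustrative
# assumption, not a claim about how Quinn would schedule reviews.

from datetime import date, timedelta

def next_review(last_review: date, interval_days: int, recalled: bool):
    """Return (next_date, new_interval): grow the gap on success, reset on failure."""
    new_interval = interval_days * 2 if recalled else 1
    return last_review + timedelta(days=new_interval), new_interval

if __name__ == "__main__":
    day, interval = date(2012, 1, 1), 1
    for review in range(1, 6):
        day, interval = next_review(day, interval, recalled=True)
        print(f"review {review}: {day} (next gap {interval} days)")
```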

Slow is the way to go. We’ve gotten so used to outsourcing our cognition to machines, to opening multiple tabs, and craving faster connection speeds that we’ve overlooked the exquisite work of evolution. Some see the brain as a vehicle for rapid computation. Perhaps that steam pouring out of our ears isn’t mere by-product. Maybe we’ll slow down and see it’s really the driving spirit, and we’ve been blowing it off and letting it dissipate as waste. Not the ghost in the machine, but the ghostly machine.

Forget machine. Forget ghost. We could call it, to paraphrase Yeats, a sustained glimpse out of Spiritus Mundi. Or it could simply be the dance of complexity teaching its steps to the dancer, inviting improvisation for the first time.

Thompson says it best, in conjunction with John Keats:

The field of consciousness has more to do with slowness and a higher dimensionality, even beyond the three of the physical volume of the brain, in which hyperspheres— or some other higher dimensional topology — involve simultaneity in a neuronal synchrony — in a pattern. A mind, in the opening words of Keats’s ‘Ode on a Grecian Urn’, is a ‘still unravished bride of quietness’, a ‘foster-child of silence and slow time’.

And now for the ironic part: I have made this connection between the cultural historian and mystically-minded Thompson and learning technology strategist Clark Quinn because of the internet, because I was taking on more than one field of study at once, and because of Twitter, blogs, and .pdf files.

In other words, I’m writing about slowing down because I’ve been living fast.

If there is a lesson here, it’s that we need a new term and a new understanding for how a person can live and think and create in relation to technology without having to adopt one of the two polarities of Luddite or Techie. If you’ve read my Beams and Struts article you know I’m skeptical of The Singularity. Still, our lives are interconnected with technology, and likely made better because of it. It’s a matter of how one stands in relation to technology. Is it a tool, or are you?

The writings of both Thompson and Quinn suggest giving precedence (and prescience) to human consciousness over its hyper technological extensions.

2012 – the year of the MOOC?

by Alistair Creelman

Is 2012 the year of the MOOC? It certainly seems so, since I’m discovering new providers of free open learning every week. The latest to turn up on the radar is a new development from Udemy offering university teachers a chance to offer courses for free. Udemy has been around for a year or two now, and its main aim is to provide people with a platform for creating and marketing courses in just about anything. Create your course, place an ad for it on Udemy, and see if it takes off. Some courses are free, but most have small course fees attached.

The new venture for Udemy is called the Faculty Project. Here university teachers can make their own courses open to all, including video lectures, presentation material and texts. Students enroll for free and progress at their own rate through the material, using a discussion forum for collaboration with other students and, according to the information provided, even with the teacher. The initial list of courses covers a wide range of subject areas, from operations management to ancient Greek religion, and is planned to expand as rapidly as new teachers can be recruited. They promise to keep the courses freely available indefinitely.

Here’s yet another example of people getting together to offer free education to a global audience. The course material itself can teach you a certain amount, but you get much more by adding the input of a mentor/teacher and gathering students into study groups, using discussion forums or, even better, the other collaborative learning tools available today (e.g. VoiceThread, Skype, Google Docs, OpenStudy). Different students will learn different things; some will take the whole course, others will take selected parts. You learn what you need to learn.
There may not be any university credits on offer for these open courses, but tangible rewards may still be available.

The open badges initiative that I have written about several times is gaining momentum, and a new article in none other than the business-establishment magazine Forbes highlights the potential of alternative credentials: “Why Get a Pricey Diploma When Badges Tell Employers More?” The article sees the potential of badges to harness the energy of informal learning, but rightly points to concerns about validity and quality assurance. If these concerns are addressed, a real power shift in education will take place:

“… once we find empirical ways to verify competency via accredited and ranked badge providers, not only might traditional education brands and their pals in the standardized test industry lose their monopoly on credentialing, but badges themselves might gain the widespread legitimacy they currently lack. If that happens, we will be a step closer to destroying the time-consuming, budget-busting, bubble-inducing myth that everyone must have a four-year college degree to succeed in America.”

Evolution of Software Development Education – Part I: Beginning of Computing and Computing Education

by Sanjay Goel, http://in.linkedin.com/in/sgoel

Computing, in the sense of processing (the understanding, creation, manipulation, communication, expression, and rendering of symbols), has always been an important natural activity of the human mind. The term is not restricted to the processing of formal mathematical symbols; computer software transcends such boundaries to support the processing of a diverse range of symbols. With the invention of computing machines, the field of computing has advanced beyond imagination, and computing has transformed many aspects of everyday life for a vast majority of mankind. Its role has evolved from enhancing efficiency through otherwise by-passable support systems to powering real-time, mission-critical systems. The application domains driving computing until the 1960s were code breaking, engineering calculations, scientific simulation, and repetitive data processing in defense, space, government, insurance, banking, and other large business organizations; some attempts at language translation and information retrieval were made as early as the 1950s. Outgrowing the initial goal of repetitive mathematical calculation, computers have permeated almost all spheres of human activity, including arts and sports. The socio-cultural effect of computing and communication technology is wider, deeper, and faster than that of other technologies. Computing has also been used to expand our understanding of mind and reasoning.

India’s decimal number system inspired the ninth-century Persian mathematician Mohammed ibn Musa al-Khowarizmi to write a book on calculating with it. Based on his name, “algorism” slowly came to refer to arithmetic operations in this number system. These algorisms were strictly mechanical procedures for manipulating symbols. They could be carried out by an ignorant person mechanically following simple rules, with no understanding of the theory of operation and no cleverness required, and still yield a correct answer. The word “algorithm” was introduced by Markov in 1954. Before the 1920s, the word “computer” referred to human clerks who performed computations. In 1936, Turing and Zuse independently proposed models of a computing machine that could perform any calculation performable by humans. In the late 1940s, the use of electronic digital computing machinery based on the stored-program architecture became common.
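
To show what "strictly mechanical" means in practice, here is a small Python sketch (my reconstruction, not from the article) of the kind of rule-following an algorism prescribes: add two decimal numerals digit by digit, carrying as you go, with no understanding required.

```python
# Illustrative sketch: decimal addition as a mechanical algorism.
# Every step applies a fixed rule to symbols (digits); no insight is
# needed, which is what made such procedures fit for rote execution.

def add_decimal(a: str, b: str) -> str:
    """Add two non-negative decimal numerals given as digit strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))  # write down the units digit
        carry = total // 10             # carry the tens digit leftward
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

if __name__ == "__main__":
    print(add_decimal("478", "394"))  # prints 872
```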

The late 1950s saw the arrival of high-level languages. The Association for Computing Machinery (ACM) was founded by Berkeley in 1947 and started its first journal in 1954. Mathematical logic and electrical engineering provided the foundation for building modern computers. The responsibility for training personnel was largely taken up by the manufacturers themselves. Most early programmers were math graduates, and many of them were women. In the 1950s, a large number of private computer schools emerged to fill the burgeoning demand. The word “software” was coined by the famous statistician John Tukey in 1958; the terms computer science, information systems, information technology, systems analysis, and systems design were in use even before. Dunn of Boeing defined information technology as a body of related disciplines that lead to methods, techniques, and equipment for establishing and operating information-processing systems. He also gave a simple definition of information systems as the connective link between the five basic management functions of defining objectives, planning, gathering resources, execution, and control. In 1968, the computer science study group of the NATO Science Committee coined the term “software engineering” to signal the need to transform software design and development into an engineering-type discipline.

Until the 1970s, computing was often regarded as a subfield of one or more of mathematics, operations research, electrical engineering, statistics, industrial engineering, and management. Many existing undergraduate programs in these disciplines were modified to accommodate the naturally fitting aspects of computer science. Mathematics departments taught the practice and science of programming and numerical analysis; electrical engineering departments emphasized the design and construction of electronic digital computers; and management schools paid more attention to the design of information systems. Initially master’s, and later undergraduate, degree programs and departments of computer science emerged as offshoots of mathematics departments in colleges of science and arts. Stanford established its computer science department in 1962, and by the late 1960s many universities in the United States had started computer science departments. Concurrently, the management schools and others interested in business data-processing applications focused on information systems and started developing those programs. The engineering schools offered computer technology and computer science programs, as well as a computing option within various existing programs.


#Change11 Traineeship Programs and Cynefin Framework based on Dave Snowden – Part 1

by John Mak

Jenny Mackness summarises Dave Snowden’s presentation in her post. I am impressed by Dave’s remark: “There are whole tracts of knowledge that can only be understood through interaction, e.g. through an apprenticeship model of education, which allows for imitation and failure, such as for London taxi drivers. Failure is key to human knowledge acquisition.” That sounds practical, as we have been using such an apprenticeship model of education, with on-the-job training, here in Australia for the last two decades. To a great extent, I reckon it is one of the best ways of learning: practical, hands-on, down-to-earth. The merit of such learning is that apprentices and trainees can follow through on the skills they gain by applying them on the job, reinforcing the experience and allowing reflection on what works and what doesn’t in their particular fields. This apprenticeship and traineeship model of on-the-job learning has also been highly valued as a form of situated learning, a model in which “learning begins with people trying to solve problems.[4] When learning is problem based, people explore real life situations to find answers, or to solve the problems. Hung’s study focuses on how important being social is to learning. In believing that learning is social, Hung adds that learners who gravitate to communities with shared interests tend to benefit from the knowledge of those who are more knowledgeable than they are. He also says that these social experiences provide people with authentic experiences. When students are in these real-life situations they are compelled to learn. Hung concludes that taking a problem-based learning approach to designing curriculum carries students to a higher level of thinking.”[4]

To what extent are the above claims valid? There are lots of problems waiting for us to solve: at work, while studying in a course, while immersed in networks, communities or gaming, or even in personal informal study as part of life-long or life-wide learning, or when learning a particular skill as a hobby or interest. For instance, if I wanted to learn how to play badminton, I would likely try it myself, watch others playing on the court, or watch videos on YouTube to understand some of the basic techniques, and then practice those skills when playing. I could also share my experiences with others, or ask others for help, so as to improve my knowledge or skills. (As a disclosure, badminton is my favorite sport.) If one is learning how to cook, then he or she would likely watch cooking videos, check cookbooks for recipes, and try cooking different dishes at home. But would one become a chef just by doing that? Not likely. I have learnt that most chefs acquired their skills through apprenticeship programs.

I happened to discuss this with the owner of a restaurant today, and he shared his experience as a chef before becoming an owner. Surely, he learnt through immersion in the particular trade (as a chef), which is different from the path of an amateur. I like cooking too, but I can only do some very basic dishes, like fried rice, fried noodles, porridge or soup, and would never reach the chef’s level of mastery without more expert training and guidance. To this end, I am impressed with Dave’s point about the importance of training generalists rather than specialists, and that, as Jenny cites from his presentation: “In universities we are training recipe book users and assessing whether they can reproduce the recipe. We are not training chefs who can achieve a huge amount without a recipe. Chefs have a mix of practical and theoretical wisdom and willingness to engage conceptually and theoretically with real world problems.” So, it is important to have an open mindset in order to develop such expertise, likely through learning with more knowledgeable others and/or training on the job or in the workplace. Is traineeship the solution, then?

What about the effectiveness of the traineeship model? The report states: “The findings suggest that traineeships are an important pathway for female early school leavers. However, traineeships are poorly targeted if the target group is disadvantaged young people.”

There are concerns about what a traineeship program should aim for: whether it is mainly about providing avenues of training for unemployed or disadvantaged people who would like to pursue a trade, or about skills acquisition for both new entrants and those currently employed, to upgrade and/or recognise their skills. This up-to-date report on traineeships provides the details, with recommendations.

It has been revealed that most “trainees” can learn the skills on the job. For existing workers with years of experience (veterans in particular), what is needed is a reinforcement of their skills to ensure they are kept up to date, so it is more a matter of recognising their competency, though some skills acquisition will surely happen with the introduction and application of new and emerging technology at work. I reckon simple to complicated scenarios would be sufficient for the “training” of most of these trainees.

For new entrant trainees, especially early school leavers or unemployed people, I see the needs falling into a number of patterns across a wide spectrum of skills. For most early school leavers, their interests may lie more with hands-on manual, technical or technological, administrative and clerical work, ranging from cooking and catering, hospitality and hotel work, office administration, warehousing, transport and distribution, freight forwarding, automobiles, mechanics, fitting and machining, performance arts, ICT, and child care to nursing, and finance and accounting. So the emphasis here is on the skills for a particular trade or profession, though there is also a strong emphasis on knowledge, where trainees are expected to “acquire” such knowledge (health and safety, legislation, company rules and regulations, procedures, products and services, and general knowledge of ICT and customer service) in order to perform the job to the required standards. I reckon these scenarios mostly fall into the simple category, where systems, processes and procedures determine best practice, and training would more likely be based on supervision by supervisors or trainers, though institutional teaching and facilitation would also be incorporated to reinforce the knowledge and skills learnt on the job. The challenge in training disadvantaged or unemployed people is that most institutions need to provide on-the-job experience for them to actually practice the skills. On some occasions, simulated working or virtual learning environments have been introduced to augment classroom training. The use of authentic learning in a classroom setting may be a good alternative for solving this problem.

Are these skills and knowledge the same as, or different from, the literacies cited in various reports? See Keith’s post here.

Two Rules for Teaching in the XXIst Century

By Daniel Lemire, who has kindly licensed this under CC Attribution/NonCommercial-NoDerivs 3.0 Unported; please note that his blog licenses commercial use under Creative Commons 2.0.

Education in the XXth century has been primarily industrial: organize the workers (students) in groups under the supervision of a manager (teacher).

We have all been in such systems for so long that we take them for granted. How else is anyone to learn? Maybe some can learn differently, but most can’t, because they are unmotivated and lazy, lack the critical skills to differentiate right from wrong on their own, and can’t assess their own level of expertise. At least, that is what I’m told, but I think it is unfair.

To me, this is like saying that we have to keep long-time prisoners in jail because they do not know how to organize themselves when given their freedom.

Indeed, if students who went through years of schooling cannot learn on their own, if they cannot assess their own progress, and if they generally cannot organize themselves without supervision, we have to wonder whether schools bear part of the blame. And I think they do: we enroll students in supervised and regimented systems where they are constantly told what to do, constantly tested by others and where they have to follow rigid rules as to what they should learn. It is no surprise that many students cannot work on their own when they leave school.

There are a few broken individuals who never really became adults. They have to be kept in check all the time because they could not survive on their own. But if these constituted the bulk of the human race, we would have gone extinct a long time ago. Our ancestors, not long ago, had to survive in small bands, hunting small animals and grabbing whatever they could eat. They must have been incredibly resilient, because human beings spread throughout the globe like no other animal species.

To put it bluntly, most people lack autonomy, they can’t be entrepreneurs, precisely because we have carefully beaten it out of them. I have two young kids and they are crazy. One of them is building a castle out of paper in his room. The project is huge and complicated, and he has worked on it for days, on his own, without anyone telling him what to do. He made mistakes (which he explained to me) and he had to fix them. How often do schools let students embark on self-directed projects? Almost never.

My sons are not exceptional. Like other kids their age, they behave in unconventional ways, trying crazy things on their own, having crazy thoughts on their own. Eventually, with enough schooling, they will settle down and do as they are told in a more reliable manner. They will become very good at following directions.

How good will they be at emulating someone like Steve Jobs, who repeatedly broke all rules? I fear for them that their sense of initiative and wonder will be killed by the time they finish their schooling. (Thankfully, I am a crazy dad with crazy ideas, so maybe I will mitigate the damage.)

Hence, as a teacher, I reject the industrial model as much as I can. I believe that, in an ideal world, we would not need any teaching at all. There is hardly anything you can’t learn through an apprenticeship. For example, if you just helped out Linus Torvalds for a couple of years, you could become an expert programmer. In fact, I suspect you would fare much better than if you just took programming classes.

The problem with apprenticeship is that it scales poorly. How much patience would Linus Torvalds have for kids who hardly know anything about computers? How many could he coach? Would he want to have kids over at his house while he is coding?

We still use the apprenticeship model in graduate school. But to accommodate most students, I still haven’t thought of a better model than setting up classes. But should the classes be organized like factories with the teacher acting as a middle-manager while students act as factory employees, executing tasks one after the other while we assess and time them? I think not. My teaching philosophy is simple: challenge the student, set him in motion, and provide a model. I try to be as far from the industrial model as I can, while remaining within the accepted boundaries of my job. I have two rules when it comes to teaching:
• Focus on open-ended assignments and exams. Many professors are frustrated that students come in only for the grades, probably because those professors focus on nice lectures and then hastily prepare some assignments. Turn this problem on its head! Focus on the assignments. If your students are not very autonomous — and they rarely are — give several long and challenging assignments (at least 4 or 5 a term). Do make sure, however, that they know where to get the information they need, and provide solved problems to help the weaker students.
However, keep the assignments open-ended. We all like to grade multiple-choice questions, but they are a pedagogical atrocity. In life, there is rarely one best answer: assignments should reflect that. In some of my classes I use “programming challenges”: I make up some difficult problem and ask the students to find the best possible solution, as in the sketch below. Often there is no single ideal solution, but multiple possibilities, all with different trade-offs. Quite often the students ask me to be more precise: I refuse. I tell my students to justify their answers. Over the years, I have been repeatedly impressed by the ingenuity of my students. Many of them are obviously smarter than I am.
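
To illustrate, here is a hypothetical challenge of that kind (my example, not one of Lemire's actual assignments): "detect whether a sequence contains duplicates." It admits several defensible answers with different trade-offs, which is precisely the point.

```python
# A hypothetical open-ended challenge: detect duplicates in a sequence.
# There is no single ideal answer; each approach trades something away,
# and students must justify the trade-off they choose.

def has_duplicates_hashing(values):
    """O(n) time but O(n) extra memory: fast, yet the set can grow large."""
    seen = set()
    for v in values:
        if v in seen:
            return True
        seen.add(v)
    return False

def has_duplicates_sorting(values):
    """O(n log n) time. Sorting in place would need almost no extra memory,
    but it reorders the input and requires comparable elements; we copy
    here to keep the caller's data intact."""
    ordered = sorted(values)
    return any(x == y for x, y in zip(ordered, ordered[1:]))

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5]
    assert has_duplicates_hashing(data) == has_duplicates_sorting(data)
    print("duplicate found:", has_duplicates_hashing(data))
```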

What about lectures and lecture notes? They are secondary. In most fields, the content, the information, is already out there. It has been organized several times over by very smart people. Books have been written on most topics. There is a growing set of great talks available on YouTube, Google Video and elsewhere. Your students do not need you to rehash the same content they can find elsewhere, sometimes in better form. Stop lecturing already! Just link to what is out there and encourage your students to find more using a search engine. Only produce content when you really cannot find the equivalent elsewhere. Please link to material beyond the grasp of most of your students: they need to know the limits of their knowledge.

The famous software engineering guru Fred Brooks agrees with me:

The primary job of the teacher is to make learning happen; that is a design task. Most of us learned most of what we know by what we did, not by what we heard or read. A corollary is that the careful designing of exercises, assignments, projects, even quizzes, makes more difference than the construction of lectures.

From my years as a student, I hardly remember the lectures. They were overwhelmingly boring. And I soon learned that even if a teacher was remarkably able and could give me the impression that I understood everything, this impression was quickly falsified when I tried to work through the material on my own.

• Be an authentic role model. Knowing that someone ordinary, like your professor, has become a master of the course material means that you, the very smart student, can do the same. That’s the power of emulation.
When Sebastian Thrun gave his open AI class at Stanford, tens of thousands of students enrolled. Sure enough, the Stanford badge played a role in the popularity of the course, but ultimately it is Thrun himself, as a role model, who matters. He has now left Stanford to create his own independent organization (Udacity). Thrun must be confident about his prospects, since he left his tenured position at Stanford, reportedly because he cannot stand the regular (industrial-style) teaching required there. One upcoming course is “Programming a Robotic Car”. I have no idea how good the course will be, but it will be motivating for students to attend the class of the world’s top expert in the field of robotic cars.

The status of the teacher as an expert has always been important. However, the ability of people like Thrun to reach thousands of people every year through his teaching means that there is less of a market for teachers who aren’t impressive AI researchers.

Unfortunately, as long as I teach within a university, there are a few things I am stuck with:
• Deadlines: Some students are able to go through the material of a class in 4 weeks. Others would need 16 months. Alas, universities have settled on a fixed number of weeks that everyone must follow. If you complete the course faster, you still have to wait until the end of the term to get credit. If you need more time, you have to make special arrangements. Of course, schools follow the factory model: we can’t have workers come in and finish whenever they want. But outside an industrial setting, I think that deadlines are counterproductive. If I take a class in computing theory and end up proving that P is equal to NP, but I finish my paper a few weeks after the end of the course, I will still fail. Meanwhile, the good student who followed the rules but showed a total lack of initiative and original thinking will go home with a great grade. What do we reward and what do we punish?
• Grades: Grades are a very serious matter in schools. Denis Rancourt, a top-notch tenured physicist at the University of Ottawa, was fired after refusing to grade his students. (He would give A+s to everyone.) Grades are effectively the quality-control mechanism of schools, where students are the product. Somehow, we have totally internalized the idea that we can sum up an individual with a handful of letters. It sure makes managing people convenient! It all fits nicely in a spreadsheet. Of course, students have adapted by cheating, and schools have reacted by making cheating harder. But I cheated all the way through my undergraduate studies, getting almost perfect scores in all classes. How? I discovered a little trick: at the University of Toronto, all past years’ exams were available at the library. If you took time to study them, you soon found out that, at least in the hard sciences, a given professor would always use the same set of 10 to 20 questions, year after year. So all you had to do was go to the library, study the questions, prepare them, and voilà! An easy A. But it is all rather pointless. In theory, grades are used by employers to select the best students, but serious employers don’t do this. We use grades to select the best candidates for graduate school, but I doubt there is a good correlation between undergraduate grades and research ability. I know two top-notch researchers who have admitted getting poor grades as undergraduates. For years, I have served on a government committee that awards post-doctoral fellowships: I am amazed at how poorly undergraduate grades predict how well someone will do during a Ph.D. Conversely, I have seen many graduate students who had nearly perfect scores throughout their undergraduate studies but are totally unable to show even a bit of initiative. They do well as long as you always give them precise directions.

Credit: Thanks to Michiel van de Panne for the reference to Brooks’ quote.

Further reading: Making universities obsolete by Matt Welsh, an interesting fellow who left his tenured position at Harvard to go work in industry.

Disclaimer: Many people are better and more sophisticated teachers than I am. And the industrial model does work remarkably well in some settings. Yet I think that the skills it fails to favor are increasingly important. We have to stop training people for factory jobs that are never coming back.