Tuesday, August 28, 2012

Society moves online.

You can hear parents chiding their children these days that they need to get off the computer and “get a life”. This theme resounds not just within the walls of our homes, but has spread throughout pop culture. One television commercial that reminds me of this phenomenon is the car manufacturer ad in which the child is concerned about the social well-being of her parents. She says that she was “really aggressive” with her parents about joining Facebook, then mentions that they are up to “19 friends now” while she has “687 friends”. “This is living”, she says.
Those who do not use social media technology do not understand its lure. Having worked for America Online as an online guide back in the early 1990s gave me some perspective on this and helped me understand the concept and the attraction of LOL (Living On Line).
There are several factors that make social media attractive, at least to me. First is the anonymity and sense of security that comes with not having to face people directly. There is an aspect of safety in interacting with people in an asynchronous fashion: it allows me to say what I want to say and be who I want to be without the constraints placed on me in face-to-face social interactions. I do not have to deal with hurt feelings instantaneously; rather, I can think carefully about what I want to say next before I say it. I can backspace before I hit send, and the other party never knows what I was about to say and didn’t.
Other things that I find attractive about social technology:
  • I can be whoever I want to be, even if I am just lying to myself and nobody else believes me. If I am discovered and outed, I can create a new account and recreate myself yet again.
  • I don’t have to get dressed up to interact with my friends.
  • My friends don’t have to live around me, so I can find and interact with more people who share my interests.
  • Because my friends don’t necessarily share the same geographical time zone limitations, I can interact with them anytime I want.
  • With the help of Google and Wikipedia, I can be the smartest person I know, online.
  • I can keep my friends separated by activity and interests. If one group of friends finds something I think or something I do to be “weird” or uninteresting, I know that there is another group that does not. I can hang out with whomever I want depending on my mood.
  • I can shop without going out.
  • I can order in without getting dressed.
  • I can play games for as long as I want.
  • I don’t have to be nice all the time.
  • I can always find someone who sympathizes or empathizes with my situation, whatever that situation is.
  • I do not have to commit to anything.
  • No matter what I say or do, someone out there agrees with me.
Scene from Surrogates © Touchstone Pictures 
I can certainly understand the lure of having “687” friends online, and it does not surprise me that social media technology is sweeping the world. My list is nowhere near comprehensive in terms of why people prefer the online society to the “real world”. The movie “Surrogates” takes online media just a step further: participants control humanoid robots that act as real-life avatars in the physical world.

While the movie is pure fiction, the concept of controlling an avatar in a virtual world is already here in both gaming technologies and advanced social media environments such as Second Life.
Screen Capture from Second Life

Second Life is an interactive 3D virtual world that allows participants not just to interact with each other, but also to interact with objects within the virtual environment. The next natural step beyond online gaming and 3D virtual realities is immersive virtual human interface devices that allow for a “Surrogates”-type experience within a virtual environment, much like “The Matrix”, rather than attempting to build androids to interact in the real world.

We can carry this paradigm even further to include 3D immersive virtual training environments that allow students not only to attend class from anywhere, but also to interact with objects within that environment without the risks of damaging equipment or endangering lives.  Because there is no limitation on the number of training aids that you can have in a virtual environment, there is never an issue with sharing equipment to be trained on.  Additionally, demonstrations can be done on a one-on-one level in a virtual room full of students, as there are none of the spatial limitations that would normally be associated with a real room full of bodies all jockeying for position to see the demonstration.  Virtual environments also allow lessons to be recorded and replayed at will for students who may not have fully understood the lesson the first time around.
Amiga Virtual 3-D Simulator-Virtuality
Online social media has only touched the tip of a very large iceberg, with many applications that go far beyond simply being able to connect asynchronously to people around the world.  The internet combined with immersive virtual 3D technologies has the potential to change the very way that we deal with the world around us.  It is both exciting and scary at the same time as we consider all of the possibilities, both good and bad.

Monday, August 20, 2012

Imagine the Possibilities...

What if we could simply download and store the information that we need in order to learn, rather than spending years in a classroom accumulating knowledge that we may or may not ever use?  What if we could take the memories of the greatest minds of the world and convert them into experiences that we could all share?  What if we could share more than just data, and also share experiences, sensations, and thoughts?  How different would the world be?

The zenith of computing technology may not rest in silicon chips, sub-micron transistors, and complex programming; rather, it may rest in the ability to take the essence of technology and embed it into the human brain.  We live in an exciting age where technology and neuroscience have reached a nexus.  We begin to look at the brain not just as a wonder of biological design; rather, we see it as a complex biological computer capable of processing over 100 million million (10^14) instructions per second and storing over 100 million megabytes (roughly 100 terabytes) of information.

It is estimated that the average human brain utilizes only a fraction of its total potential at any given time.  The great untapped resource is not our technological advances; rather, it is contained inside roughly 1,500 cubic centimeters of space on our shoulders.  The key to unlocking the full potential of the brain may be through the help and use of technology.

Mapping the human brain is important for more than just the neuroscience community.  Knowledge is often more than being able to store and regurgitate information; it is knowing where to go and what to access that makes the process intelligent.  What we do know, on a very superficial level, is that certain regions of the brain store and process certain kinds of information.  Understanding where the brain stores what information is a crucial step towards creating an effective man/machine interface.  What can make the process more difficult is that the brain, when injured, can in fact remap itself to a certain extent, bypassing damaged areas to regain certain capabilities.  However, this very fact suggests a possibility: if the brain can reroute its own impulses, and if man has already managed simple rerouting of brain impulses in lab specimens, then those same impulses might be rerouted outside of the brain to another specimen, or perhaps to a computer storage device where the data can be stored and used at a later time.

We have already mapped enough of the brain to issue simple commands, such as cursor control, to a computer.  As time goes on and our understanding of the brain's transmission areas increases, the capability to send more complex commands is right around the corner.  Downloading information from a computer to a human brain is not necessarily dependent on understanding the "language" of the brain: if other brains can understand what is being transmitted, we only need to be able to direct thought from a source to a target with the right connections.  So we may be closer than you think to "jacking in" to download lessons stored as memory engrams on a computer.

Imagine, if you will, that we have the capability to capture and store thought impulses from the brain and then, at will, transmit that stored data to another person.  Instead of learning math, history, or science, we could simply transmit the necessary data on demand.  No longer would we have to worry about motivating students to sit long hours in the classroom or wonder whether they understood the lesson. The future of technology and the future of mankind’s development are ultimately linked, quite possibly more closely than we could ever imagine.


Saturday, August 18, 2012

Where is training technology headed?

The future is such an uncertain place, but looking at some current trends can shed light on where the next innovation will lead.  Technology is more than just a simple tool; it would seem technology has shaped the course of human history, often pushing us in directions that we had not anticipated because we had no concept of the possibilities that technology could open for us.  There is, however, one medium that consistently provides insight into the future and is often dismissed as mere fantasy and entertainment; that is, movies, television, and books.
There are examples throughout history in the works of Jules Verne, Arthur C. Clarke, H.G. Wells, E.M. Forster, Ray Bradbury, Gene Roddenberry, and, to a disturbing extent, George Orwell.  It is my guess that each of these artists looked not at the trends of technology but at the wants and desires of mankind to predict where we would expend our efforts to create the necessary technology to bring their predictions to life.  It is from this perspective, I believe, that we can envision where the future is going, as technology is applied to help us achieve our wants and desires rather than following some random path that fulfills no need.
The complaint that I hear the most from students and sponsors is that training takes too long and requires too many resources.  Indeed, much of the time that we spend in a classroom is not spent actively learning as training is often targeted at the lowest common denominator in the classroom.  As a professional trainer, I am often lamenting that I am spending 90% of my productive time working with 10% of the class.  So, while 90% of the class “gets it” and wants to move on, there is a 10% contingent that needs to hear it again… leaving the other 90% in idle chat or silence awaiting the 10%.  It really is a vicious cycle that leads to reduced training content (to fit in an often arbitrary timeline), reduced resources (to fit within an inadequate budget), and reduced effectiveness of the use of that time and those resources.
There are two particular areas where learning is most persistent: trial and error, and collaborative peer learning.  The reason trial and error is so persistent is that there is often an unpleasant consequence to our actions that we do not wish to repeat.  As a result, we commit those mistakes to memory so as not to repeat them and suffer the unpleasant consequence that we associate with that action.  The unfortunate side effect of the trial and error method is that it is often expensive and sometimes people get hurt (or killed) in the process; thus, it is not a preferred training technique.  That leaves us with collaborative peer learning.
Peer learning is effective for two reasons: first, peers are closer to that state of not understanding and as a result are more effective at relating to the perspective of those who do not know; second, peer pressure is effective because nobody really wants to be left behind and thus there is a motivation for students who fall behind to keep up.  Humans, with few exceptions, tend to be pack animals.  We thrive and perform best in a group setting through collaboration and cooperation.  In a learning environment, social collaboration can produce the most meaningful learning experience that is most persistent; however, capturing and guiding that experience to produce a specific outcome within a given timeframe and budget is often impossible as it is difficult to know exactly how long it will take for social collaboration to arrive at the desired destination.
To get this thread back on track, we are looking at two forces that are significantly influencing the future of training.  The first is the availability of technology to enhance and improve the learning experience, allowing students to experience trial and error without the risk of damaging equipment and endangering lives, as well as to incorporate social collaboration through media that allow peers to share ideas and perspectives.  The second is a continued push to make training more effective while at the same time reducing the time and resources necessary to conduct training and get people working.  The question is no longer “where”; rather, it has become “how”.
Don Tapscott, a business strategy innovator, sums up the problem this way: the Net generation uses technologies both for socializing and for working and learning, so their approach to tasks is less about competing and more about working as teams; therefore, teachers should abandon the “drill and kill, sage on a stage” model of pedagogy, and managers should encourage greater freedom among employees to self-organize.  The team concept plays well with the human animal, as a team is a natural formation of people to accomplish a common goal.  Increasingly, though, teams are not co-located in the same geographical location to facilitate face-to-face collaborative learning, and geographically separated teams are often dysfunctional and inefficient.  Using available technology and lessons learned from creating and running social media such as Facebook, MySpace, Twitter, and others, we can reconnect teams on a global scale to allow for free collaboration both in real time and asynchronously.  The hurdle that needs to be jumped has little to do with the technology; rather, it has to do with changing the attitudes and traditional position of managers to allow employees to self-organize and collaborate in order to accomplish a given task.  Managers need to move from controlling every aspect of task management to a position of facilitating and guiding discussions, allowing employees to approach the problem from a fresh perspective.

Saturday, August 11, 2012

The Problem with Democracy...

I have always believed that in its purest form, democracy is a step above anarchy.  A democracy where everyone is given a vote and the majority rules leads to a tyranny of the majority, where 51% of the people can decide what 100% of the people will do.  The minority voice gets lost in the cacophony that is the majority.  In this way, we all become lemmings: as the majority pours over the side of the cliff, the rest of us are along for the ride in spite of the fact that we do not necessarily think that this is the best course of action to take.  Of course, I am not alone in this line of thinking, as our founding fathers recognized the problems of pure democratic forms of governance and opted to form a representative republic in which minority voices could still be heard and good sense could prevail over groupthink.
Aside from all of the above, at the time of our founding, a pure democracy where all the people could vote on every decision was impractical.  There were far too many decisions to be made on a daily basis, and there was no means to give all of the people an opportunity to understand the implications of the various choices, much less an opportunity to vote on them.  Fast-forward 236 years from the signing of the Declaration of Independence, and technology could potentially allow everyone a vote on every issue through the Internet.  We are now faced with another question: if, because of the technology available to us, a true democracy is now possible, is it practical to change our system of governance?
The one problem that our technological advances do not address is that there is still the potential for the minority to be enslaved by the will of the majority.  In the words of my mother, “If everyone else jumps off a bridge, does it make sense for you to as well?”  My mom is a pretty sharp cookie.  To paraphrase her a bit: just because something works well in one part of the country, does it make sense for everyone in the country to do it as well?  As an example, if we can agree that most people live in the various large cities in our country and that fewer people live outside of those cities, does it make sense for the majority who live in cities to make decisions about how people who live in the country live?  Just because a rule or a law makes sense in the city, it does not necessarily make sense in the country.
Worse yet, in established semi-democracies where every citizen is entitled to vote, only a fraction actually exercises that right.  What this usually means is that the majority of a fairly small minority can decide the fate of everyone.  If our representative republic worked as the founding fathers had intended, representatives would be beholden to the will of the people, and if they were not, the election cycle would ensure that those people were not re-elected for another term.  Instead, more often than not, we end up with a system of government that is beholden to a small minority and imposes that minority's will upon the majority for the purpose of advancing its interests.  The system eventually collapses on itself as the interests of the majority are ignored in favor of the self-interest of the few controlling minorities.
Given the problems present in semi-democracies, it would seem that a movement towards a pure democracy using technology as the foundation is a suicidal race towards anarchy.  A democracy lives and dies by the level of participation of those who are entitled to vote.  In the 2008 Presidential election, a record 63% of the eligible electorate turned out to vote, and President Barack Obama won 52.92% of the popular vote.  When we do the math, the total eligible voter population in 2008 was estimated at about 208,323,000, and the popular vote count garnered by President Obama was 69,456,897.  In other words, President Obama was elected by approximately 33% of the electorate.  An even larger share (37%) of the electorate remained silent by not exercising their right to vote and is now beholden to the will of the 33% who did.  A disinterested electorate has led us down a path toward oligarchy, where a small group of people now decide the fate of an entire country.  How can a democracy survive when those who are responsible for its fate do not participate?
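For transparency, here is the arithmetic behind those percentages, worked out from the figures cited above (a rough back-of-the-envelope check, not an official tally):

    # Back-of-the-envelope check of the 2008 figures cited above.
    eligible_voters = 208_323_000   # estimated eligible electorate in 2008
    obama_votes = 69_456_897        # popular votes cast for President Obama
    turnout_rate = 0.63             # record turnout cited above

    winners_share = obama_votes / eligible_voters   # share of all eligible voters
    stayed_home = 1 - turnout_rate                  # eligible voters who did not vote

    print(f"Winner's share of the eligible electorate: {winners_share:.1%}")  # ~33.3%
    print(f"Share of the electorate that stayed home:  {stayed_home:.0%}")    # 37%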
In their paper, New Agora: New Geometry of Languaging And New Technology of Democracy: The Structured Design Dialogue Process, Vigdor Schreibman and Alexander Christakis seek to address some of the problems of the tyranny of the majority described above by proposing a structured design dialogue process (SDDP) that keeps the voice of the majority from undermining good decisions by advancing meaningful dialogue between disagreeing parties.  The SDDP architecture proposed by Schreibman and Christakis consists of 31 component constructs grouped into seven modules (a small sketch after the list shows one way these modules might be captured in code):
  1. 6 Consensus Methods: (1) Nominal Group Technique (NGT), (2) Interpretive Structural Modeling (ISM), (3) DELPHI, (4) Options Field, (5) Options Profile, and (6) Trade-off Analysis;
  2. 7 Language Patterns: (1) Elemental observations, (2) Problématique, (3) Influence tree-pattern, (4) Options field pattern, (5) Options profile/scenario pattern, (6) Superposition pattern, and (7) Action plan pattern;
  3. 3 Application Time Phases: (1) Discovery, (2) Designing, and (3) Action;
  4. 3 Key Role Responsibilities: (1) Context-Inquiry Design Team, (2) Content-Stakeholders/Designers, and (3) Process-Facilitation;
  5. 4 Stages of Interactive Inquiry: (1) Definition or Anticipation, (2) Design of Alternatives, (3) Decision, and (4) Action Planning;
  6. Collaborative Software and Facility; and
  7. 6 Dialogue Laws: (1) Requisite Variety, (2) Parsimony, (3) Saliency, (4) Meaning and Wisdom, (5) Authenticity and Autonomy, and (6) Evolutionary Learning.
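For readers who think in code, here is one purely illustrative way to capture the seven modules as plain data, for example as the backbone of a dialogue-facilitation checklist. The module and construct names come from the list above; the structure itself is my own sketch, not anything proposed in the paper:

    # Illustrative only: the seven SDDP modules as a simple lookup structure.
    # Splitting module 6 into its two named components is my own reading,
    # which brings the total to the 31 component constructs cited above.
    SDDP_MODULES = {
        "consensus_methods": ["Nominal Group Technique", "Interpretive Structural Modeling",
                              "DELPHI", "Options Field", "Options Profile", "Trade-off Analysis"],
        "language_patterns": ["Elemental observations", "Problematique", "Influence tree-pattern",
                              "Options field pattern", "Options profile/scenario pattern",
                              "Superposition pattern", "Action plan pattern"],
        "application_time_phases": ["Discovery", "Designing", "Action"],
        "key_role_responsibilities": ["Context-Inquiry Design Team",
                                      "Content-Stakeholders/Designers", "Process-Facilitation"],
        "stages_of_interactive_inquiry": ["Definition or Anticipation", "Design of Alternatives",
                                          "Decision", "Action Planning"],
        "collaborative_software_and_facility": ["Collaborative Software", "Facility"],
        "dialogue_laws": ["Requisite Variety", "Parsimony", "Saliency",
                          "Meaning and Wisdom", "Authenticity and Autonomy", "Evolutionary Learning"],
    }

    print(sum(len(constructs) for constructs in SDDP_MODULES.values()))  # 31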
While it would be impossible in this very small blog to explore all of the component constructs (I did mention the Nominal Group Technique and Delphi in an earlier post), I would like to look at the six dialogue laws that the authors refer to.  The six laws of dialogue are broken down as follows:
  1. Appreciation of the diversity of perspectives of observers is essential to embrace the many dimensions of a complex situation.
  2. Disciplined dialogue is required so that observers are not subjected to information overload.
  3. The relative importance of an observer's ideas can be understood only when they are compared with others in the group.
  4. Meaning and wisdom of an observer's ideas are produced in a dialogue only when participants begin to understand the relationships, such as similarity, priority, and influence, among different people's ideas.
  5. Every person matters, so it is necessary to protect the autonomy and authenticity of each observer in drawing distinctions.
  6. Evolutionary learning occurs in a dialogue as the observers learn how their ideas relate to one another.
The six dialogue laws identified by the authors are intended to allow participants to fully understand the various perspectives of the other participants and then to allow for a majority consensus on a decision that creates an outcome serving the greater good of everyone involved in the process.  An educated, concerned, and considerate electorate can utilize these six laws of dialogue to arrive at an equitable solution that in fact looks out for the interests of the whole.  While this does not fit the current paradigm of the human condition as it exists today, it does represent a path forward to achieve a sustainable democracy, once we find a way to overcome the issues that prevent us from achieving meaningful dialogue between disagreeing parties.

Wednesday, August 8, 2012

Effective Training Evaluation


Okay, so you have spent a great deal of time and effort to develop lesson plans, curricula, and course materials, and you are executing training as planned… but is it good training?  Objectively evaluating training to determine where improvements can be made to increase effectiveness is always a challenge.  There are two methods that are most commonly used; each has its strengths and weaknesses, and each may or may not produce useful data.

The Nominal Group Technique

Simply put, this is the end-of-course survey given to students to evaluate the effectiveness of the training received.  A properly structured end-of-course critique using the nominal group technique should focus student comments on specific areas to be evaluated.  This way we do not end up with random comments about the quality of the coffee or the temperature of the classroom, and we can focus the comments on the areas of the training that we are looking to improve.  Ideally, the survey should ask the student to quantify the experience so that the results can be analyzed; in this way, we can perform an apples-to-apples comparison between classes on the same subject areas.  While student comments can be invaluable, they need to be examined and analyzed in the context in which the survey was given.  In other words, it is difficult to establish how much learning actually occurred, and how well the student will retain that training, on the basis of an end-of-course critique.  The critique’s strength is that it will illuminate areas of the training where the delivery was weak and could be improved; however, even poorly delivered training that is properly planned can have the desired impact on the student with regard to retention (we tend to remember the extremes… good and bad).  The weakness of the survey has to do with asking the right questions and removing ambiguity.  An improperly constructed survey will result in less predictable or analyzable data because each student will interpret the questions differently, and a question not asked is an answer that we will simply not know or hear.  Either defect can result in an incomplete picture in the final analysis.
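To make the apples-to-apples comparison concrete, below is a minimal sketch of how quantified, area-focused survey responses might be rolled up and compared across classes. The focus areas, rating scale, and numbers are hypothetical, not a real survey instrument:

    from statistics import mean

    # Hypothetical end-of-course responses: each student rates focus areas on a 1-5 scale.
    responses = {
        "Class 2012-06": [
            {"content_relevance": 4, "pace": 3, "hands_on_time": 5},
            {"content_relevance": 5, "pace": 2, "hands_on_time": 4},
        ],
        "Class 2012-07": [
            {"content_relevance": 4, "pace": 4, "hands_on_time": 3},
            {"content_relevance": 3, "pace": 4, "hands_on_time": 3},
        ],
    }

    # Average each focus area per class so identical questions can be compared
    # class-to-class instead of sifting through free-form comments.
    for class_name, surveys in responses.items():
        averages = {area: mean(s[area] for s in surveys) for area in surveys[0]}
        print(class_name, averages)

The same structure extends to however many focus areas or classes are being compared; the value comes from asking identical, unambiguous questions every time the course runs.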

The Delphi Method

Another method of evaluating training is the Delphi Method: finding subject matter experts (SMEs) who have the expertise and/or experience to evaluate the training to be performed, or the training that has been performed, to determine weaknesses in the program or curriculum.  To keep the evaluation honest, the expert comments should be anonymous; the experts are thus free to offer open criticism without fear of confrontation or reprisals.  The benefit of the Delphi Method is that SMEs often have perspectives that may not have been considered in the development or delivery phases.  Additionally, the evaluation is free to be completely objective, as the SMEs have no connection to, or responsibility for, the success or failure of the training being evaluated.  Where the Delphi Method can fall flat is that SMEs tend to forget what it is like to be a student, and that perspective can taint the evaluation process by focusing on the structure or the delivery rather than the technical content, where their observations tend to be most valuable.  Additionally, SME experience and unfamiliarity with non-traditional training methods may lead the SMEs to believe that the training is less effective than traditional methods and to evaluate its potential effectiveness harshly.
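As a rough illustration of the anonymity and iteration that make Delphi work, the sketch below collects anonymous SME ratings of a single curriculum element, feeds back only the group median, and repeats until the panel converges. The ratings and the stopping rule are invented for the example:

    from statistics import median

    # Hypothetical anonymous SME ratings (1-10) of one curriculum element, collected
    # over successive rounds as experts revise their views after seeing the group median.
    rounds = [
        [3, 8, 6, 9, 5],   # round 1: wide disagreement
        [5, 7, 6, 8, 6],   # round 2: positions revised
        [6, 7, 6, 7, 6],   # round 3: near consensus
    ]

    for round_number, ratings in enumerate(rounds, start=1):
        spread = max(ratings) - min(ratings)
        print(f"Round {round_number}: median = {median(ratings)}, spread = {spread}")
        if spread <= 2:    # invented stopping rule: experts within 2 points of each other
            print("Panel has converged; record the consensus rating.")
            break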

Putting it all together

For a complete evaluation of training, a mixture of techniques should be combined with metrics built into the course structure (e.g., written and/or hands-on performance examinations, outcome-based practical exercises, etc.).  Additionally, follow-up field surveys issued to managers and supervisors on the quality of trained personnel, conducted months after the training is complete, can yield a wealth of information on training gaps.  No single metric is going to provide a complete picture of the total effectiveness of the training performed.  Combining the various crowd-sourcing techniques, using the students, subject matter experts, managers, and supervisors, will most likely provide enough perspectives to create a more complete picture of the overall effectiveness of the training performed.
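One simple way to fold these different perspectives into a single picture is a weighted roll-up, with the weights set by whoever owns the training program. The sources, scores, and weights below are placeholders for illustration, not recommendations:

    # Hypothetical normalized scores (0.0-1.0) from each evaluation source, with weights.
    evaluation_sources = {
        "end_of_course_survey":  (0.78, 0.20),  # (score, weight)
        "exam_and_practical":    (0.85, 0.40),
        "sme_delphi_review":     (0.70, 0.20),
        "field_followup_survey": (0.65, 0.20),  # managers/supervisors, months later
    }

    total_weight = sum(weight for _, weight in evaluation_sources.values())
    overall = sum(score * weight for score, weight in evaluation_sources.values()) / total_weight
    print(f"Overall training effectiveness (weighted): {overall:.2f}")

    # Flag any single source that lags well behind the overall picture -- a likely training gap.
    for source, (score, _) in evaluation_sources.items():
        if score < overall - 0.10:
            print(f"Potential gap worth a closer look: {source} ({score:.2f})")

The weighting is a judgment call; the point is simply that no one number is trusted on its own, and an outlier from any single perspective becomes a prompt for a closer look rather than a verdict.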

Wednesday, August 1, 2012

Teaching 100,000 Students

The best teaching method uses a one-to-one tutoring personal touch.  The problem is that if you are teaching 100,000 students geographically and temporally separated in different parts of the world, your options are pretty limited.  Peter Norvig, renowned expert in Artificial Intelligence and current Director of Research for Google, taught a distance learning Artificial Intelligence class for Stanford University utilizing techniques that simulated the one-on-one learning environment.  In the video below, Mr. Norvig discusses the motivation, method, and results of this distance learning experiment.  The most innovative aspect of his method was combining different interactive techniques to keep students engaged and breaking lessons down into very small vignettes that allow students to fully absorb smaller bits of information, rather than conducting very long lectures where students get lost as they are pounded by tomes of information.