Let’s take a moment to look at the idealistic, hopeful “promises” that emerged around 2007 (promises so many still speak of and fight for, at least those who haven’t gone “corporate,” so to speak) and compare them against the “common reality” we see in many organizations today.
Promise: Organization-wide transparency & openness
Common Reality: Organization-wide monitoring, measuring, judging and manipulating
Promise: B2B and B2C networks
Common Reality: Another sales channel
Promise: Social platforms to make work easier
Common Reality: Social platforms are another layer of work
Promise: Social Leadership
Common Reality: Executive broadcasting
Promise: Online customer communities
Common Reality: Customer service system
Promise: Platform owned by the workforce
Common Reality: Platform owned by IT
Promise: Increased connection for employee community building
Common Reality: Increased connection for expected employee work collaboration
Promise: Make work more human
Common Reality: Make humans work more (always connected is expected)
Of course, this is not true for all organizations; some are meeting many of the promises, but I don’t think that is the norm by a long shot. And this post isn’t meant to be a cry of surrender but rather a call to action. If you see it this way too, we need to be asking: can we ever reach the true promise of (enterprise) social technology, and if so, how?
I’ve always loved history. I studied it in school, and the prospect of a career in history led me to become a Social Studies teacher for 8 years. In my first 3 years I was a miserable failure. I lectured way too much, drew up regurgitate-the-facts assignments, used a textbook exclusively, and watched the kids’ lights go out. They didn’t share my joy; I made it joyless and met their expectation that history was a bore, something to suffer through. Simply put, I had put my love of history before their problem: a lack of respect and control.
In my 4th year I discovered the writings of Sam Wineburg and the theory of Constructivism (no, this wasn’t taught at university). I shifted my curriculum to one where the students became the historians; I lectured little, and they explored more. My love of the past turned into a love of guidance, as my students passing history tests wasn’t the goal; them doing history was. I had shifted from loving my knowledge to loving their need, and success followed.
The bigger lesson here is for many professionals and businesses alike. You’re a training expert? An ReL tool guru? A video genius? So what? Don’t lose sight of who you work for; don’t choose your dream over their reality. Your knowledge and skills are of little interest to your clients, learners, or supervisor. Your real value is in helping people see their problems more clearly, understanding their wants and needs, and exploring paths of least resistance to reach the solution. What they want, what they need, is THEIR problem solved. Your work is to help them keep working.
If AI grows to dwarf our intelligence, the general (maybe irrational) fear is that it will not tolerate inconsistent, illogical, highly emotive humans and will prefer to stamp us out like pesky insects.
Maybe I’m naive, but I just don’t buy into this narrative. I feel that so many cultural references have filled us with fear and awakened the Luddite ghosts. So I choose to disagree with the ideas perpetuated in films and books such as The Matrix, The Terminator, Ex Machina, etc.; those that try to convince us that we will be eliminated. Here are my 3 basic, slightly philosophical counter-arguments.
1. To machines, humans will be poetry in motion: unattainable and unique. We will be preserved, not for AI’s amusement but for appreciation. AI will see us as living art.
2. Purpose. Every intelligent being functions beyond instinct. Intelligence seeks purpose, and if we are the only other intelligent life form in the universe, I expect a more intelligent race of beings (AI) not to follow in our footsteps by indiscriminately eradicating life. The world’s ecosystems are perfect machines, and AI will respect this more than humans ever did.
3. If AI succeeds humans and becomes the greater in all ways, then it will be the first to do so, and as a level up it will become, in essence, a God. Gods throughout history have ultimately been benevolent to their “children.” I expect that, more or less, we’d be in some type of Greek-mythology mother–child relationship: an unbreakable bond of silicon and carbon.
The future is undefined, of course, but the path we are on seems pretty clear: AI is growing quickly, and its pace won’t slow down. Yet my hopeful outlook is tempered only by the fact that the creators of this new intelligence are the same species that created gunpowder, TNT, atom splitting, genocide, and global warming, and well… this does give me pause.
Do we live in a magical age or do we merely live among many magicians?
working out loud requires guidance
“micro-learning” is a new approach for a new age
the year you were born determines your values and needs
community is any group of people using social tools
we learn differently in the last 10 years than we did in the previous 10,000
the experience API (xAPI) tracks what you’ve learned
social learning requires a platform
“Now, you’re looking for the secret. But you won’t find it because, of course, you’re not really looking. You don’t really want to work it out. You want to be fooled.” – The Prestige (film, 2006)
I was recently asked, privately, a question that I hadn’t really answered for myself.
Do you use any particular methodologies or models [in your personal practice]?
So I thought about it and responded that there are two “models” that have rather unconsciously, or habitually, guided my work practices as I began to shift away from a training-centric mindset. Note that everything I share here can be read in depth at the respective sites noted.
The first is Wirearchy. Since my focus can now be said to be more in digital learning (which I see as learning through technology vs. learning with technology), helping surface these powerful undercurrents of knowledge exchange is key today, and social technology certainly can aid in this effort. My known affinity for 70:20:10 should be obvious too, as it aligns very well with this emergent organizing principle. Wirearchy, though, is not about learning per se; according to Jon Husband, who authored the principle, it’s a dynamic two-way flow of power. When realized and supported in organizations, I believe Wirearchy can change the actual design of the organization. Learning is ultimately about behavior change, and if you truly desire long-term change in behaviors, I believe the systems in organizations need to be addressed (human systems related to authority, communication, and rewards, for example).

The second is the Cynefin framework (admittedly, I’m still quite a student of it), which helps in identifying current states (habitats). One of its four domains is labeled Complex. Much of the work being done today, and even organizations themselves, are complex. Navigating in complexity, according to Cynefin, demands a Probe-Sense-Respond approach, as there is no one right answer and the many interconnected parts can be impacted by changing just one element. Therefore, running small experiments, gathering and assessing the data, and then taking action all apply to help shift behavior in dynamic situations.
Both Wirearchy and Cynefin are larger than strategy, of course, and go far beyond just organizational learning. I prefer them, though, as each is flexible, and today’s world of work is much more fluid. Additionally, we can no longer see learning apart from work. Many of the “tried & true” models used by consultants arose during the last century and are honestly no longer valid, or just too rigid. Typically they are much too slow to enact and are built on best practices rather than best principles. This seems unacceptable to me, as each organization is as unique as a fingerprint; one must be flexible, understand and leverage the power of networks, and draw on best principles, not practices, to succeed.