A social organization is a more collaborative, open and transparent organization. That's a given, but how we approach becoming one is debatable. In my opinion there are two ways: one is Push and the other is Pull. Pushing social might be seen as incongruent, something natural being forced unnaturally. And yet, in the social/collaboration space, this is the road most often taken.
Organizations will often start their journey to improved communication, cooperation and collaboration by initiating social efforts such as supporting CoPs, instituting team huddles, using ideas taken from Agile project management, etc., and then move to collaborative tools. It's here they purchase an ESN or a chat platform and begin to engage in rollout campaigns, gamification and other external motivators to increase use. Many times they forego the non-tech work and just buy a platform. These efforts are "pushing social": slow, steady and comfortable for the status quo. And studies (Forrester's being a famous one) have indicated that some 80 to 84% of these social efforts fail.
The other way is a sincere commitment to the idea of the social organization, but it can cause some stress, especially in more established firms. This is pulling social, which means changing the organization to make it more conducive to, even welcoming of, true open collaborative behavior. Pulling social is basically pulling down the barriers: the systems and structures that are the real reason most social efforts fail. If systems of rewards shift to recognize the processes that lead to success over the product, people open up. If CoPs work out loud, they invite new participants and healthy criticism. If leadership accepts that a powerful undercurrent of communication (the wirearchy) exists and can support hierarchies, social thrives. Tools and platforms come next (not first) to amplify the healthy openness that now exists, the new normal, and to serve as the power source going forward.
The challenge today is for social practitioners to stop leading with social and for organizational leaders to accept that their organizational design needs to be severely shaken up.
Let’s take a moment to look at the idealistic, hopeful “promises” we saw emerge from around 2007 (the promises so many still speak of and fight for, at least those who haven’t gone “corporate,” so to speak) and compare them against the “common reality” we see in many organizations today.
Promise: Organization-wide transparency & openness
Common Reality: Organization-wide monitoring, measuring, judging and manipulating
Promise: B2B and B2C networks
Common Reality: Another sales channel
Promise: Social platforms to make work easier
Common Reality: Social platforms are another layer of work
Promise: Social Leadership
Common Reality: Executive broadcasting
Promise: Online customer communities
Common Reality: Customer service system
Promise: Platform owned by the workforce
Common Reality: Platform owned by IT
Promise: Increased connection for employee community building
Common Reality: Increased connection for expected employee work collaboration
Promise: Make work more human
Common Reality: Make humans work more (always connected is expected)
Of course this is not the truth for all organizations; some are meeting many of the promises, but I don’t think that is the norm by a long shot. And this post isn’t meant to be a cry of surrender but rather a call to action. If you see it this way too, we need to be asking: can we ever reach the true promise of (enterprise) social technology, and if so, how?
I’ve always loved history. I studied it in school, and the prospect of a career in history led me to become a Social Studies teacher for 8 years. In my first 3 years I was a miserable failure. I lectured way too much, drew up regurgitate-the-facts assignments, used a textbook exclusively, and watched the kids’ lights go out. They didn’t share my joy; I made it joyless and met their expectation that history was a bore, something to suffer through. Simply put, I had put my love of history before their problem: a lack of respect and control.
In my 4th year I discovered the writings of Sam Wineburg and the theory of Constructivism (no, this wasn’t taught at university). I shifted my curriculum to one where the students became the historians; I lectured little, and they explored more. My love of the past turned into a love of guidance: my students passing history tests wasn’t the goal, their doing history was. I had shifted from loving my knowledge to loving their need, and success followed.
The bigger lesson here is for many professionals and businesses alike. You’re a training expert? An ReL tool guru? A video genius? So what? Don’t lose sight of who you work for; don’t choose your dream over their reality. Your knowledge and skills are of little interest to your clients, learners or supervisor. Your real value is in helping people see their problems more clearly, understanding their wants and needs, and exploring paths of least resistance toward the solution. What they want, what they need, is THEIR problem solved. Your work is to help them keep working.
If AI grows to dwarf our intellectual capability, the general (maybe irrational) fear is that AI will not tolerate inconsistent, illogical, highly emotive humans and will prefer to stomp us out like pesky insects.
Maybe I’m naive, but I just don’t buy into this narrative. I feel that so many cultural references have filled us with fear and awakened the Luddite ghosts. So, I choose to disagree with the ideas perpetuated in films and books such as The Matrix, The Terminator, Ex Machina, etc., those that try to convince us that we will be eliminated. Here are my 3 basic, slightly philosophical counter-arguments.
1. To machines, humans will be poetry in motion: unattainable and unique. We will be preserved, not for AI’s amusement but rather for appreciation. AI will see us as living art.
2. Purpose. Every intelligent being functions beyond instinct. Intelligence seeks purpose, and if we are the only other intelligent life form in the universe, I expect a more intelligent race of beings (AI) not to follow in our footsteps by indiscriminately eradicating life. The world’s ecosystems are perfect machines, and AI will respect this more than humans ever did.
3. If AI succeeds humans and becomes the greater in all ways, it will be the first to do so, and as a level up, it will become in essence a God. All Gods in history have ultimately been benevolent to their “children.” I expect that, more or less, we’d be in some type of Greek-mythology mother-child relationship: an unbreakable bond of silicon and carbon.
The future is undefined, of course, but the path we are on seems pretty clear: AI is growing quickly and its pace won’t slow down. Yet my hopeful outlook is tempered by the fact that the creators of this new intelligence are the same species that created gunpowder, TNT, atom splitting, genocide, and global warming, and well… this does give me pause.