Changing One’s Own Behavior

“Everyone thinks of changing the world,
but no one thinks of changing himself.”

— CAL THOMAS

Although I look much like my mother, and seem to have inherited her healthy genes too, when it comes to personality – especially temperament – I was born on my father’s side of the family: fiery. Alongside many positive traits, the Wilzigs have a temper. So did I. Which leads to my question: can one change one’s own basic personality and behavior?

Neuroscientific research in the past few decades has come to quite a clear conclusion: we have great brain plasticity, i.e., the brain can change in response to external challenges; we can also change our brains by practicing. For instance, the area of the brain connecting the two hemispheres – the corpus callosum – is larger among musicians than among non-musicians. The causation is “practice → corpus callosum,” and not that people born with a larger corpus callosum tend to become musicians.

Does this apply to personality and general behavior as well? Not so much regarding the first, but certainly yes for the second. The kind of people we are seems to be relatively “hard-wired”: introvert vs. extrovert, adventurous vs. homebody, etc. However, none of this means that with some effort we can’t “overcome” some personal trait that we (or others around us) find unbecoming or problematic. To put it in “philosophical” terms: just because we do not have complete free will does not mean that we are slaves to our essential being.

However, in such cases, precisely because we are attempting to do something that goes against our innate personality, it takes a lot of work to change a habitual behavioral trait or pattern. And motivation.

At the relatively young age of 57, my father suffered a massive heart attack and passed away half a year later. His sister, my aunt Rosa (a warm, wonderful woman, but another “yeller”), also died relatively early – and my father’s brother Freddy (the biggest yeller of all, but with a big heart) also didn’t live past his early 70s. As I approached my 50s, I began to think about the connection. Type A personalities (intensely temperamental) don’t have the longest lifespans. Could I do anything about this?

This was motivation enough, even if it was somewhat selfish, as the other major problem with a temper is that it makes people living around you very uncomfortable – and the bigger/more frequent the temper, the tenser family (and work) life becomes. I did think about this as well, but being brutally honest (in retrospect), it was self-preservation that constituted the main push to change.

In any case, I figuratively “sat down with myself” and made a conscious effort to control my temper whenever something (in the past, LOTS of things) would get my goat. It took a while. In fact, I found this not much different from “practicing” other, more mundane things in life, like shooting a basketball from the foul line, or improving my writing skills. I tried different approaches, keeping those that worked best for me. After a few years (yes, YEARS), I had succeeded in cutting down my temper outbursts by about 80% – to the extent that one day my son Boaz actually asked my wife Tami: “What happened to Abba? He hardly yells anymore…”

Should I have tried to do this earlier in life? Absolutely. Could I have succeeded earlier? I doubt it. Successful self-change can really come about only after a certain amount of “living life” (some call this “maturity” – but that’s not right because maturity is the outcome and not the driver of change; perhaps “aging” is better).

I didn’t intend (or think about) this, but there are ancillary benefits to such change. First, success begets success. If we succeed in changing one problematic aspect of our behavior, it becomes more likely that we will attempt to change another one as well. That doesn’t mean we will always succeed (I have had no success in stopping my finger-picking habit), but even trying to change can be a salutary enterprise. Second, not only will such change improve us personally, but as noted above it will also benefit our loved ones who have had to “put up” with the problem. And if they are happier, then we become even more satisfied with our personal effort.

I’ll conclude with what really is the hardest part of this whole “project”: admitting to ourselves in the first place that we have a serious personality flaw! Looking in the mirror is not for the weak-hearted (or much fun) but it’s crucial for self-growth. Once we get past that emotional obstacle, the rest of the self-change project is almost easy by comparison. In short, the expression “physician, heal thyself” is intended for all of us to rectify our basest behavior.

“Most of us are about as eager to be changed as we were to be born,
and go through our changes in a similar state of shock.”

— JAMES BALDWIN

The Process Should Be the Product

I was accepted to an experimental program for my freshman year at City College of New York, with our “campus” in the CUNY Graduate Center. We had the very best lecturers, one of whom taught English Composition: FIVE days a week for the entire academic year!

 You probably think that there can’t be a worse college experience than that. I certainly did. In fact, it was quite painful – but “boring” certainly not. Indeed, looking back on this after five decades, it probably was the most important course I took in my whole academic career – not merely for the skills it gave me, but for its life lessons as well.

On the very first day of class, Mr. Gordon C. Lea (a well-tanned Brit) asked us to sit there and write a two-page essay on anything we chose, as long as it contained an argument for or against something. We handed them in before the class was over. By the very next day, he had already marked all thirty essays and handed them back in class, one student after another in alphabetical order. Being a “Wilzig,” I was going to be one of the last to receive mine. As the “returns” went along I began to hear sobbing in the classroom; several students were silently tearing up and others had gone white in the face. With growing apprehension, I waited for mine to be returned. 

I had always known how to write. I liked words, I had a “Germanic” disposition for grammar, and my elementary school and high school made us sweat “book reports.” Moreover, I was chosen to be one of the editors of my high school yearbook…

“Wilzig!” he called out. I went up to Mr. Lea and took my paper without peeking at it until I returned to my seat, although out of the corner of my eye, I could see a lot of red markings. Then I looked directly: the grade was C-! I was shocked. Later I discovered that this was the second-best grade in the class!!

Here was the first life lesson: failure is a relative matter. Relative to one’s expectations (subjective), and also relative to what others have done (objective). Which is more important? As we go through life, most of us tend to focus on the objective aspect: how did we do compared to others? Where does this leave me on the social (or professional) totem pole? But that’s not the way to go through life, because we can never be the “best” at anything, or (in almost all cases) even close to the best. Yes, it pays to have an external benchmark, but this should be set by what each of us is capable of reaching. In short, success in life has to be based on some inner-directed criterion.

“Most of you came here thinking that you knew how to write. Hopefully, now you understand otherwise. But…” Mr. Lea stopped for dramatic effect, “by the time this course is over, you WILL know how to write – as long as you put in the effort. And I’m going to give you an incentive for that: your final grade will not be an average of all your paper grades throughout the course but rather it will be based on your degree of IMPROVEMENT from now to then.” Hearing this, I didn’t know whether to be sad or happy: sad, because as one of the “top” scorers, I would have less “improvement” to make; happy, because I was closer to an “A” than the others.

That was life lesson number two: in the end, success should be measured more by the process, by the effort, than the final product. Of course, if we finish the “job” with failure, that’s not enough; however, we shouldn’t measure our success only by what we accomplished, but also (primarily?) by how well we did relative to our ability.

As the year went along, almost everyone in class made great strides in improving their writing skills. At the end, we arrived at the final exam – only to have one last surprise waiting for us. The main essay question was this: “In 500 words or less, make an argument to convince me [the teacher] that you should receive the course grade that you think you deserve.”

Life lesson number three: technical skills are useless without accompanying cognitive skills – and even more so, without a clear goal. I am sure that when you read “English Composition” at the start of this essay, you said to yourself: grammar, spelling, punctuation… Of course, those rudimentary skills are the foundation for good writing, but far from enough. On top of that, we need three additional elements: the ability to organize our thoughts into a meaningful whole (“composition”), the ability to think logically and persuasively (“rhetoric”), and the ability to know what you want to achieve, i.e., where you want to “go” with what you’re writing (“goal-orientation”). In fact, these should be undertaken in reverse order: First, where am I heading here? Second, how can I organize the argument? Third, what specifically should I argue, and how do I put it into words?

I argued in my essay that I deserved an “A,” and that’s what he gave me for the course. I was more pleased by this “A” than any other I received in college precisely because I had to “sweat bullets” to achieve it.

That was the main (fourth) life lesson I “processed” successfully: no pain, no gain…

Without God (2)

As a practicing social scientist, I eagerly devour the latest research on religion and…war; and also religion and…peace. Although academics are used to some ambiguity in research results (after all, humans don’t behave predictably like most atoms), on the issue of the relationship between religion and violence the results and “opinions” are all over the place.

After last week’s post on individual religious belief, the time has come to tackle some broader, societal question(s).

First, notice the word “relationship” above. There might well be some connection between religion and violence/war, but that doesn’t in any way “prove” that the first causes the second. As the truism goes: “correlation is not causality.” One can think of a huge number of leaders who have used religion as a (false) basis for going to war, e.g., Saddam Hussein. The power-hungry will turn to any useful “ideology” to further their own ambitions.

Second, it is also clear that if religion does cause people to go to war (or act violently), it certainly isn’t the only “ism” to do that. In the 20th century, Marxism (Stalin and Mao) killed far more people than all the religion-based wars put together (admittedly, in Stalin’s case, Marxism was a cover for pure self-aggrandizement; Mao, though, probably believed in Marxism). Not to mention Hitler, whose war-craze was partly based on anti-religion (expunging Judaism and Jews). So even if religion can be said to be an important factor in world history for causing war – and that is correct only from Christianity onwards; in the BCE era, religion per se almost never played a part in international warfare, although it had some influence on civil wars, e.g., Jews vs. local idol-worshipping Canaanites – it is certainly not the only, or even the major, cause.

Having said all that, we do live in an era of intense religious strife – mostly inter-religious, i.e., between different religions (Muslim vs. Christian; in earlier eras, Christian vs. Jew as well), rather than what previously was the dominant paradigm: intra-religious strife within the same faith (Catholic vs. Protestant) – although the latter still exists: Sunni vs. Shiite Islam (mostly in and around the Middle East).

With all that, why are so many people in the world still religious? Well, first of all, it has to be noted that the vast majority of them are peaceful, so it’s not as if “religious belief” automatically drives people to violence; there is no evidence for that whatsoever. Just as a relative handful of secular extremist ideologues (Marxists, Libertarians, etc.) use violence to further their ideals, so too a similarly small proportion of religious extremists give religion a bad name.

The answer to religion’s continuing popularity is that it provides several advantages from a societal standpoint. First, it turns out that, on average, regular worshippers enjoy a few added years of lifespan! Speaking as a social scientist, that’s not because God is looking out for them, but rather because of the health benefits (mental and physical) of sociality and communality. Going to synagogue, church, mosque, or temple on a regular basis brings people together. If the number one killer of older people is “loneliness” (so great a problem that both Great Britain and Japan have established a Minister for Loneliness!), then clearly regular, communal religious gatherings are going to alleviate that critical problem, whatever the religion or level of belief.

Second, as I discussed in my previous post, another possible factor in increased lifespan is the believer’s reduction of existential angst; we know how much “stress” in general can cause illness and general bodily malfunction. Thus, instead of “rage, rage against the dying of the light” (Dylan Thomas’s immortal verse – pun intended), a true believer can face the eventual prospect of death with greater tranquility.

Third, religion has given humanity most of its moral code – or at least has provided a strong underpinning to buttress Homo sapiens’ “natural” moral tendencies. It’s one thing to fear the government’s threat of punishment for transgressions, but those police powers cannot be everywhere; for true believers, God is everywhere and sees everything, and that further encourages rightful behavior. Having said that, the latest data clearly show something “peculiar”: the countries with the lowest levels of criminality and violence are the most secular (e.g., Scandinavia); the ones with the highest levels of corruption are the most religious (the Middle East and parts of Africa). Of course, that might have nothing to do with religion per se, but rather be a function of socio-economic level (which is also negatively correlated with religiosity – see the next paragraph).

Which brings us to the social downsides of religiosity – at least from a modern standpoint. It can be a stultifying, overly conservative force: maintaining patriarchy, continuing homophobia, undermining personal freedom, and in general leaving the population behind socio-economically. It is not a coincidence that the further one goes from south to north in Europe, the less religious and the wealthier/socially-advanced are the countries. But again, is that because secularism leads to more wealth, or because more wealth (and especially education) leads to less religiosity? Probably more the latter than the former, but there’s no unequivocal evidence either way.

Confusing? That’s precisely the point I started out with: there is no clear, one-way relationship – positive or negative – between religion and societal outcomes, good or bad. As in every other area of life, one factor – as important as it might be – cannot by itself explain or shape the rest of our very complex social world.

You better believe that!

Without God (1)

Many years ago, as I was putting my (then) 7-year-old, youngest son to bed, I was ready to relate another nightly story. But before I could start, he turned to me with a serious face and said: “Abba, I have something to tell you.” I thought to myself: “here comes a confession of some minor infraction; after all, Avihai didn’t like to lie.”

“What is it?” I asked, with an inner smile.

“God doesn’t exist,” he responded.

To say that I was dumbfounded would be an understatement – by the very thought, and even more so by the fact that a 7-year-old was considering such a weighty matter!

But then again, why not? We all constantly mention the Almighty – whether in saying “My God!” or “God help you…”. Moreover, who doesn’t think about the big question: “who’s really in charge here?” And for children of religiously observant parents, it’s hard to avoid God altogether.

So let’s return to the substance of what my son said. I’ve been thinking about that for decades and have come to the conclusion (well, one of many on this broad issue) that the ability or tendency to believe in God (or not) is something that we are born with, i.e., it’s part of people’s nature. Of course, deep in their hearts many are somewhere in the middle: agnostic – not sure that He (or She) exists, but not sure either that an Almighty Being doesn’t exist.

To be sure, this isn’t only a matter of a person’s nature. Socialization is a large part of it as well (family and peer environment) – although that tends to influence the way we express our religion more than the actual belief itself. In short, it’s Nature and Nurture.

However, there’s a third aspect that is less talked about, but in my estimation is central to a person’s level of belief: existential angst. I know, that’s a mouthful. In simpler language, this involves two quite different things: 1- the fear of “nothingness” that humans have regarding what happens after they die; 2- the question of whether there is any rhyme or reason to life (or, for that matter, the universe).

The first fear can be felt at an early age, especially if a young person sees someone in the family (or close friend’s family member) pass away. (In Avihai’s case, it might not be coincidental that he made his declaration quite soon after Israel’s Gulf War in 1991, when everyone in the country was deathly worried about Scuds landing on their home.) This doesn’t mean that everyone – child and adult – will find succor in God’s hands. Many people will not accept a Being that they can’t see, hear, or touch – indeed, one (not One) who’s completely invisible. For a 3-year-old, the “virtual” is real (make-believe characters they converse with); by the age of 7 or thereabouts, the only real is the really real – for them, “make-believe is for babies.”

But for much of humanity, it’s not “make-believe” but rather they really believe. Or should I say, really need to believe. Which brings me to the “war” between atheists and believers. My first reaction to Avihai’s statement was to try and convince him otherwise. But I stopped myself, for if such a young boy can come to such a conclusion by himself (he certainly didn’t get this from our home!), why try at this stage to argue with what is (for him) a very natural conclusion? Yet, there’s a converse lesson here too: why should atheists try to convince religious believers of their “error”?

Here’s why they shouldn’t. Let’s try a “thought experiment.” A doctor friend with many years of experience as a family GP tells you that for so-called ailments, the best thing she can do is prescribe the “patient” a placebo pill to be taken three times a day. “It’s amazing how many of my patients return after a while thanking me for the great medicine I had them take!”

Do you run off to inform her patients that it is all a scam – that the “medicine” is a sugar pill? Of course not! Why not? Because in fact, it works! Now, in the doctor’s vignette above, substitute “existential angst” for “ailment,” and “God and prayer” for the “placebo pill.” You as an atheist might feel that believers are being duped; they feel (and in actuality receive) relief from their spiritual “ailment.”

Thus, there is nothing so exasperating to me as religious people who try to convince atheists that God exists – and, equally maddening, atheists who try to undercut the religious belief of the observant and the worshipper. If both sides are happy with their (un)belief, they should leave the other side alone.

Which leaves open one gigantic question. All this might be fine and dandy on the micro-individual level, but don’t religious belief and conversely atheism have consequences on the macro-societal level? I’ll relate to that question in my next post.

New

In my lifetime, I have had several significant changes that in a sense made me start “anew.” Leaving the cloistered world of Jewish Day School education to go to City College; moving to Israel from comfortable America; changing my academic research and teaching discipline mid-career from Political Science to Communications; and so on. Which gets me thinking – as we enter the Jewish NEW Year – about the concept of “new” in our life.

Human beings like to feel comfortable – which is often just another word for “habitual.” Radical change – except for those whose lives are truly miserable – is not something sought after. Think of the expression “tried and true.” That doesn’t merely mean that, based on past experience, it’s the correct thing to do (or that “it works”); it can also mean that what we have done in the past is the “true me.”

Unfortunately, people also tend to get bored doing the same thing over and over again. Assembly-line work is definitely alienating (à la Charlie Chaplin’s hilarious Modern Times scene in which he can’t keep up with the objects flying by). Office work can also be mind-numbing. Thus, at some point we need to find something “new.” But that runs the gamut from the trivial to the truly life-altering.

The question for each of us is finding the right balance between doing something new and continuing the tried and true. A lot depends on personality: some are thrill-seekers; others, safe-and-sound types. Some are very good at finding the right amount of “new” by themselves; others need an outside push.

Which is where a New Year comes into play. We are well aware that Rosh Hashana or January 1 do not mark something really “new.” They mainly signify starting the same old cycle (“calendar”) all over again. But what they do offer is the opportunity for each of us to really think about whether – and to what extent – we do want to have something “new” in our life.

Once a year is obviously not enough, so we invent other “new-thinking” devices: a birthday; an anniversary. These are what I would call “potential-new”: getting us to consider what could be new in the coming year, if we so willed it. Then there are the “already-new” events: an engagement party, a wedding, a housewarming – these symbolize that we have already decided and undertaken to begin something new, but that still leaves the question of how to “manage” this new life.

Overall, there are three main “new” events in our life: marriage; children; retirement. (Of course, they can include some variations: divorce; empty-nesting; a spouse’s passing.) In each of these, we are never completely ready for – or fully cognizant of – what this “new” entails, but we are willing to jump in. However, there are two differences between the first two and the third. First, marriage and children are almost always events that we have control over (excepting shotgun weddings and pregnancy “mistakes”). On the other hand, retirement is largely “forced” on us by law or by physical/mental frailty.

That’s the bad news. The second difference is better news: whereas marriage and children restrict our ability to pursue the “new,” retirement opens up a whole world of new opportunities, without many of the life encumbrances we had pre-retirement. For the thrill-seeking types, that’s great; for more conservative individuals it can be a big problem, because after 65-75 years of habitual life, it isn’t easy to switch to new types of activity, new outlooks, new ways of looking at our personal world.

The bottom line: we all hope to retire someday (that’s much better than the alternative, except for the “lucky” few who want to, and can, work until they drop). That’s a new situation – but it won’t be very successful unless we prepare ourselves way ahead of time, psychologically and practically (hobbies, interests, etc.). And since we’re already speaking of the Jewish New Year, this coming one also happens to be a once-in-seven-years “Shmitta” year, when traditionally the land lies fallow and we all take a long rest from work. As good a time as any to think not only of the coming year but of future retirement as well, for a successful and fruitful new start.

Perfection(ism)

I am a rehabilitated perfectionist. How did I originally get to be that type of person? Probably because I was born with that personality trait. Or perhaps it’s a result of my German-origin parents (what’s called in Yiddish a “yekke” – but that can degenerate into a “yekke-putz”). I can recall only one incident when I was young that might have egged it on: I came home from school one day with a 98 on my math test, and my father asked me: “Why didn’t you get 100?” But that was the only time I ever heard that line at home (although it has stuck in my head, so who knows?). “Tiger parents” they were not, although they did expect me to always put in a good effort.

Perfectionism is a silent “disease” – not anatomical but rather psychological. When we are faced with such a person, it’s also almost always hidden from sight – or perhaps I should say that most of us can’t see it hiding in front of our very eyes. That’s because we tend to look with envy or awe at successful people, or at least those who produce things that are way above what we are capable of. But all that hides the inner turmoil – or at least nagging angst – of the “successful” person. Their “problem” is that they set an impossibly high bar for themselves, and since they can’t really reach it, they get disappointed with what they have produced. That at times can lead them to waste inordinate amounts of time “fixing,” “improving,” “redoing” or other types of “productive procrastinations” that are actually very unproductive.

There are two reasons for such added unproductivity. First, perfectionists can never fulfill their own Olympian expectations. The attempt is akin to Zeno’s Paradox: you can keep on getting closer and closer to your goal, but each “half-increment advance” only brings you that much closer – you don’t ever “arrive.” A second reason, as with almost every other area of life: the “Last Mile” is the most difficult or most expensive (in time or money).
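Zeno’s point can be put in simple arithmetic terms – a rough sketch, assuming (purely for illustration) that each additional round of polishing closes half of whatever gap to “perfection” still remains. After N rounds, the total progress is

\[
\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots + \tfrac{1}{2^{N}} \;=\; 1 - \tfrac{1}{2^{N}} \;<\; 1 .
\]

No finite number of rounds ever reaches 1 (i.e., “perfection”), even though each extra round costs roughly the same effort while contributing less and less.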

But let’s say that ultimately a perfectionist does succeed in reaching perfection after great effort. What was gained? What was lost? The gain is minimal, if we compare the “really good” (even “great”) initial product with the final “perfect” one. Meanwhile, what is lost is precisely the time: WHILE s/he was redoing and refining the product again and again (and again), s/he could have produced all sorts of other very good/great things – worth far more than the minor incremental improvement of that one initial product.

There are, of course, some exceptions. If we are working on a work of art (fiction, an article, a computer program, or any other “product” that we hope/expect to last for a very long time), then it does make some sense to take the time to refine it over and over. Mozart was notorious for simply dashing off whole symphonies and sonatas with nary a second look – until he worked on his later quartets and quintets (revolutionary for their time), whose scores show a huge number of changes. For understandable reasons, composers in general can be given lots of leeway on their path to musical “perfection.” As the 20th-century compositional giant Arnold Schoenberg once explained: “A composer’s most important tool is an eraser.”

Furthermore, none of this is to suggest that we shouldn’t treat our “first draft” as merely a rough version of the final product. Some people (“hares”) work straight through in a creative frenzy; they need to go back and carefully polish their work. Other people (“turtles”) slog through an initial creation; their work might be closer to “ready for prime time,” but they have to consider the totality of what they have done, given how much time elapsed between the beginning and the end of their work – two poles that might be less connected than they should be. In either case (and with other working styles as well), checking and refining is not neurotic perfectionism but rather good work practice. Checking and refining several times over, however, is a problem – probably hiding just such a psychological issue.

So how did I “heal” myself of my “disease”? I simply set a hard-and-fast rule: a maximum of TWO additional “polishing/proofing” run-throughs. After that, no matter what, I send it off. Therefore, if you find a minor error here and there in this essay, so be it. My “extra” time – as is yours, whatever you’re doing – is better spent moving on to the next project.

In the final analysis (hopefully, not too much analysis), it behooves all of us to internalize the immortal saying: “perfection is the enemy of accomplishment.”

Solving Problems by Looking Elsewhere

I usually start off a post here with a personal vignette from my life. This time it will be a news item that I read almost 50 years ago (!) – one that obviously made a deep impression.

Back in the 1970s, the Federal Trade Commission set up a committee to find a solution to the (relatively rare) disaster of babies dying while sitting on a parent’s lap during a plane’s takeoff and emergency braking (whereupon the baby would go flying forward). After due deliberation and much expert testimony, the committee announced that its recommendation was… to do NOTHING!

I was taken aback – as you probably are right now. Then I read on. Of course, a few babies’ lives could be saved by requiring parents to buy a reduced-rate ticket for their tot and strap the child into a bassinet or whatever. That would lead to zero such deaths in the air. But the committee members went further: what happens when you demand that a ticket be bought for the baby? The answer (they had economists figure this out): a certain percentage of people will decide to drive to their destination instead of flying (especially for relatively short, inter-city distances), and then the number of babies dying in automobile accidents will be far higher than the airplane deaths prevented! (As is well known, air transportation – per mile – is the SAFEST mode of travel in the world.)
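Their logic boils down to a back-of-the-envelope expected-value comparison – the symbols below are mine, purely for illustration, assuming that restraints would prevent essentially all such in-flight deaths and that families who switch would drive roughly the same distances they would have flown:

\[
\underbrace{r_{\text{air}} \cdot M}_{\text{infant deaths avoided in the air}}
\quad \text{versus} \quad
\underbrace{s \cdot r_{\text{car}} \cdot M}_{\text{new infant deaths on the road}} ,
\]

where \(M\) is total baby-miles traveled, \(r_{\text{air}}\) and \(r_{\text{car}}\) are the per-mile infant fatality rates by air and by car, and \(s\) is the share of families who would switch to driving rather than buy an extra ticket. Because \(r_{\text{car}}\) is vastly larger than \(r_{\text{air}}\), even a modest \(s\) makes the second quantity exceed the first – which is exactly what the committee’s economists concluded.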

That was a lesson that I have never forgotten – when faced with a problem, don’t concentrate only on the data within the problem proper; think also about the (seemingly irrelevant) “ancillary” elements that could be decisive.

This all came back to me recently in our “post-Corona” situation in which the following question is being bandied about: should companies and organizations force their workers to return to the office, or enable them to continue working from home?

The main elements seem clear-cut regarding both options:

Return to the office: a- organizations need an esprit de corps, i.e., a sense of community (or common purpose) that can’t be maintained if everyone is at home or elsewhere; b- managers need to stay on top of workers for guidance and some supervision; c- the “water cooler effect” – serendipitous conversations that lead to insights and breakthroughs happen only when people are physically together.

Work from home: a- huge savings in rent for the company with far less office space necessary; b- higher worker morale (and productivity?) when workers don’t have to spend lots of time on the road to and from work; c- flexible worktime, something especially advantageous for parents (usually mothers) of young children.

As one can see, these are all directly related to the work situation itself. However, if one takes a wider perspective, then some other considerations bubble to the surface. Here are several.

Saving money? Not necessarily. Electricity bills can end up skyrocketing for the company, with far greater use of computers and other appliances to communicate from afar – for does anyone think that workers will pay for their own increased electricity bills (computers, air-conditioning, etc.) rather than pass them on to their employer? In fact, many home A/Cs will use far more electricity than one gigantic central A/C at the office. Thus, even without considering the economic cost for company or worker, society as a whole might actually LOSE out here, with INCREASED carbon emissions!

And then there’s a possible, unintended tertiary effect. If workers can work from anywhere, many will then move to more “amenable” (for them) places to live. But no company will allow workers never to meet each other face-to-face, so then (here we go again with the airplanes!) we have to factor airfare costs into the company’s “savings” equation, for the quarterly “ingathering” of the worker self-exiles.

Moreover, as we already see with many Silicon Valley employees moving to Colorado and other rural and suburban areas – away from the denser cities – here too society’s carbon footprint problem is exacerbated, as it is now well documented that city folk use far less “carbon” than their suburban/exurban compatriots. Why so? Not only do city dwellers spend far less on transportation, but their housing carbon footprint (urban apartment buildings vs. suburban single-family homes) is much lower as well. So society loses out again…

Improved worker morale? Not for everyone. Many (perhaps most) people need to get away from “the house” as a break from the humdrum of everyday life. Others are “physical presence” types who need the visceral immediacy of “corporeal” interaction, so that meeting virtually might actually be depressing, not morale-boosting. Still others have marriages that are holding on by a thread mainly because one or both spouses are out of the house for many hours. (Which is why divorce rates spike immediately after retirement.) Then there are those who like a clear differentiation between private life and the workplace – getting a frantic email from the boss late in the evening is not what they bargained for. Not to mention the vastly different work-at-home experience of parents with children at home versus empty-nesters who can work in quiet. (Then again, there might be more parenting being done – good for the kids and society at large; bad for the parents’ employers.)

Organizational culture: The ones probably most against permanent work-from-home are… managers! It’s not only more challenging to manage workers from afar, but more “problematic” (from their standpoint): what if it turns out that many workers don’t need managers at all?! Today’s “information worker” is well educated and socialized to think independently. That’s great for the organization as a whole – but not necessarily for many who run the organization.

I’m sure that with a little thought, you too could come up with several additional, broader societal considerations, unintended consequences, and wider ramifications of this burning issue. More important, though, is to look beyond whatever narrow problem you face, now and in the future. What you see is not necessarily what you’ll get if you focus only on what’s right in front of you.

Intuition?

The brain is a strange organ. I have a good sense of humor – I love to hear/read jokes, and also to tell them (if they’re good). But if you ask me to tell you a good joke that I heard or read recently, I can’t do it. However, ask me to tell you a joke on any specific topic that you choose, and a joke I heard years ago will pop up – I have no problem relating it on the spot.

Notice that I started off with “the brain” and not “my brain.” Although there are some minor differences in capabilities between people, overall the human brain works basically the same way for everyone. In the case of memory, passive always beats active, i.e., we remember things a lot better when given a “prompt” than when we are forced to dredge something up without any external aid.

Which brings me (very circuitously) to the topic and title of this essay. Every one of us has had experiences where we “intuitively” thought something would happen. (Let’s leave aside for now the fact that we “intuit” lots of things and selectively – and conveniently – forget those that didn’t come to pass.) What exactly is this “intuition”?

It does not exist – at least not in the usual way it’s understood, as something “innately” natural to some people as opposed to others. Indeed, if I can make a pun, it’s a figment of our imagination.

So what is “intuition”? Simple: knowledge (mostly based on personal experience) that we are not aware that we have. Brain research has now conclusively proven that the brain is a “prediction machine” – constantly thinking about “what is about to happen” to prepare itself (and you) for the immediately upcoming environmental challenges. That could be when to cross the street based on the oncoming traffic, all the way to how much risk to take every minute in a Covid-19 environment.

Our brain is doing this on a second-to-second basis, outside of our consciousness. In fact, the situation is even weirder – and some would say “scarier” – than that. It turns out that when we have to make a decision (not just trying to “predict” the very-near-term future), our brain makes the decision four-tenths of a second before we are even aware that it/we have made that decision!

Indeed, it is at this point that we arrive at a semantic conundrum. If our brain is deciding things before we consciously “make the decision,” who is in control of whom? Can one even separate the brain (or mind, if you wish) from “us”? Who am “I” if not my brain? And if my conscious thoughts are somehow different from my unconscious decision-making process, who is in charge of the matter? That’s another intentional pun, but a serious one too: are decisions undertaken within the organic brain “matter,” or are they a function of the metaphysical “mind”?

Intuitively, we want to think that “we” are in charge – but that’s precisely where intuition goes astray. The knowledge underlying our “intuition” is somewhere beyond our conscious sense – almost like a computer’s external hard drive, separate from the CPU and internal memory.

Why, then, do some people seem to have a better intuition than others? They don’t; they just have more “hidden” (from themselves) experience and knowledge – or are better at forgetting all the times that their intuition did not materialize!

To be sure, all this is distressing because it raises the question of whether we are really in charge of ourselves – indeed, whether there is such a thing as free will, at least in the extended sense of the term. In any case, the next time you have a hunch about something about to occur, don’t say “my intuition tells me,” but rather: “my brain predicts…”

(Un)Talented

As I entered the first grade, my mother thought it would be a good time to also get me started on some pastime. So she signed me up for piano lessons. I dutifully went by myself once a week to the next building where the piano teacher lived. I learned, practiced, and… after two years came to the conclusion that I was vastly untalented.

I come from a very musical family, albeit not a professional one. I have two uncles who could play the piano without a score, even if they had heard the piece only once. Moreover, I loved music, so all in all my non-talent “epiphany” was somewhat disappointing. What to do?

In the third grade, we could join the school choir, which I did. I loved it and even discovered that extemporaneous harmonizing came easily to me. This became my joyful extra-curricular activity for the next six years.

The first lesson I learned from this was that “talent” is something we are born with. There’s a “rule of thumb” going around that to become expertly proficient in any field of endeavor, one has to practice approximately 10,000 hours. But “proficiency” is a murky concept. In fact, we have to distinguish between “talent” and “skill”. The 10,000-hour rule relates to skill – another word for turning raw talent into something effective at the highest level. But if one doesn’t have the initial talent, then even 100,000 hours of practice won’t do it.

Which leads to the second lesson: even if we don’t have talent in a field that we love, that doesn’t mean that we can’t still do something, albeit different, in that same field. I couldn’t (still can’t) play the piano; but I do harmonize every chance I get when in a singing environment. (Synagogue is an ideal place for that; God forgives the occasional warble.)

And then there’s the in-between situation: great love mixed with minor talent. I have been playing basketball almost without interruption for close to sixty years – high school varsity, pickup leagues, semi-pro in Israel for a couple of years, and twice-a-week full-court games with a steady group of “guys” for the past several decades. That’s way beyond 10,000 hours. The result? I’m a great shooter, a decent passer, a bad rebounder, and a horrible dribbler.

Very, very few of us will ever make it to the top of our profession or “hobby” – but so what? If I compared myself to Michael Jordan, I’d never have the joy of simply doing something that I love, in decent fashion. Indeed, had I actually continued practicing the piano, I might have become “decent” – but the point is not to be decent at something “expected” of you, but to get the most out of yourself in something that you truly enjoy.

So here’s a dilemma for each of us to chew on. Which is preferable: to be really good at an activity that we aren’t crazy about, or to be passably decent doing something that we really enjoy? If that sounds like a trivial question regarding hobbies, then try this: what if the question related to your work life? Add to this the monetary compensation issue: the better you are professionally, the more money you’ll make. So, regarding the preference question, now what do you choose to do professionally? Make more money working in a field that “humdrums” you steadily – or earn less money but be excited every day you go to work?

Talent is great, but we shouldn’t let it be the overwhelming consideration in our life decisions – whether regarding which “trivial” hobby to be involved in, or about the “central” profession that we choose to spend decades performing. Muddling along (10,000 hours will raise the muddling to at least middling) while doing something we really love to do is preferable to following our “talent” to an emotional dead end.

As my piano lesson failure taught me – that’s the key to life…

From Meshuggeh to Mainstream

Upon our engagement, Tami and I decided to combine our family names. That necessitated going to court for an official name change. To save her the time, I did it in a Boston court a couple of weeks before the wedding, so that from “Lehman” she “automatically” became a Lehman-Wilzig upon our marriage.

These days there’s a different “craziness” going on in the Western world (mostly in the U.S.), with millennials and some older folk in their wake signing their emails with “instructions” such as “I wish to be called they, not she/her” – or other, similar nomenclature. To most post-50-year-olds, this comes across as pure craziness.

The Boston courtroom was filled with Irish: the security guard at the entrance, the stenographer, and the policeman standing next to each witness in the courtroom. With my “luck,” though, I got a Jewish judge. I handed him my papers, in which I had to explain and justify my name change. I had written that “my wife is an only child and wants to keep her parents’ name” and that “I’m a feminist and wish to have an equal marriage, in name too.”

Every generation has its own way of asserting its identity. One can call it “youthful rebellion” or “generational progress” or some other more or less positive description, but it seems to be a set pattern. What accompanies this, though, is generational pushback as well. The older ones don’t “get it”, mainly because that wasn’t the way they did it when growing up and taking over the world.

My judge took the papers in hand, read them with a stolid face, picked up his pen, and as he was signing declaimed out loud the beginning of Tennyson’s famous line: “Ours not to reason why…” I smiled wanly, expecting to hear the second part: “ours but to do and die.” Instead, he continued: “but you’re meshuggeh!” I couldn’t help laughing out loud, to the consternation of the Irish cop at my side – probably more perplexed at what the judge had said than at my guffaw.

So I now find myself in the role of my Jewish judge: “they” instead of “her”?!? How far can this gender fluidity take us? I won’t join this trend, nor do I think it will have much staying power. But then again, my last name is still Lehman-Wilzig – forty-seven years later.

The point of all this is that “you never know”. Linguistic, cultural, social trends arrive by the dozens. Most come and go (remember bell-bottom pants?); a few arrive and stay (Black has definitely overtaken Negro). Several are clear signs of moral progress (I do believe feminism – once called “Women’s Liberation” – is permanently here); conversely, a few are evidence of philosophical backsliding (Cancel Culture is antithetical to freedom of expression).

A decade ago, our son Boaz hesitatingly came to us with “bad news”: he wanted to change his last name from Lehman-Wilzig to something that sounded Israeli and was simpler to say. The irony – and the reason we were actually not upset by this – is that Tami’s father’s original family name was Sprinzeles, which he changed to Lehman in Vienna back in the 1930s due to anti-Semitism (“Lehman” was a neutral German name). We suggested “Zeevi” – the Hebrew rendition of “little wolf,” which is what my family name “Vilchek” means in Polish.

Change is inevitable – linguistic and otherwise. Each of us has the right to be called whatever s/he wishes (within limits; New York State will not allow a child to be given the name “Hitler”). That doesn’t mean that the rest of us have to like an individual’s decision – but it has to be respected. Respect, though, also doesn’t mean complete acquiescence but rather a measure of tolerance on our part to what seems at first to be bizarre. In any case, the passage of time and social “convention” will determine whether – and to what extent – the new trend will stick. That’s the name of the game – or the game of the name…