I don’t recall all that much of my youth (although it was generally pleasant), but one “regular” event sticks in my mind. Every Friday night – after our traditional erev-Shabbat dinner with all the Orthodox accoutrements (Kiddush, Blessing the Bread, singing Shabbat songs, Grace After Meals), we would all “retire” to the living room (right next to the dining room). My brother David and I would play some board game – and my parents would sit in “their” lounge chairs, intently (and contentedly) reading the newspaper.

I am an addict. Not to drugs, nor to sex. It’s a curious and mostly harmless form of addiction, one that I’m pretty sure affects other people too. But I can’t say how many; no one has done any research on the matter. In any case, I’m addicted to information.

That’s not only what’s called “news”, or as we put it in the academic world of mass communications, “hard news”.[1] Rather, I constantly seek dopamine gratification from learning about something new: a scientific discovery or idea, philosophical argument, social phenomenon, historical analysis or finding – you name it, and as long as it has some intrinsic worth or even surprise, I’m ready to absorb it.

Of course, I don’t spend my entire waking hours “sponging” for new information – just most of the time. My purpose? Much as a person might take an “upper” in order to boost performance at work, study, or even sports, so too these info-bites (occasionally full-size meals) serve me as food for thought and action – in my case, researching, writing, and “getting through the vagaries of life”.

You might be asking: how is this an “addiction”? In my case, we can start with my post-breakfast routine: several newsletters filling my email inbox from highbrow and middlebrow intellectual, political, and scientific sites. I spend at least an hour omnivorously consuming such brain food before turning to “my life”. Once chores are out of the way, it’s back to scanning the online horizon for more mind nutrition. Indeed, other than quality time with my wife Tami or playing a game of basketball with my hoop buddies, there’s not much out there I would rather do than read the latest… whatever (of intellectual novelty or practical use). I literally have to drag myself away from a book or computer screen to see a good movie (for me, “good” almost always meaning “thought-provoking”); the latest David Brooks opinion piece will attract me far more than some juicy “news” about this or that politician’s goings-on. Overall, each day I will spend hours surfing the Net to find interesting (to me) reviews of “challenging” books (in the double meaning of the term: going against conventional wisdom; complex in substance) – and from there (occasionally) hitting my Amazon button to download the tome itself.

Another sign of “addiction”: what can be called multinforming. Ingesting information while doing something else. Some examples: driving my car and listening to the radio news or hearing a lecture; ditto (the latter) when in the fitness gym (doing weights or on the treadmill); morning walks in the park while smartphone viewing a lecture series on Oceanography or The Evolution of Birds. Even “worse”: speeding up these lectures (usually to 1.5 or 1.75 speed) to be able to finish two 30-minute lectures during my 40-minute park walk. In my life, there is no “wasted time”: the radio news is my best friend while washing the dishes and taking a shower; mini-articles on my iPhone are consumed at the supermarket checkout line before reaching the cash register.

By coincidence, in the middle of my penning these lines, the magazine Scientific American just reported on medical research that shows doing exercise AND simultaneously “exercising” the mind does more for our brain than simple aerobic exercise (e.g. running on a treadmill). So, if you’re going to jog, do it outside (the brain has to keep track of the terrain) and if you can handle it, listen to a lecture or do some other mentally challenging work.

Back to the overall issue with my next question: is such overall “addiction” unusual? Probably yes and no. Let’s start with the “no”. All children are born with an insatiable hunger to learn. They are literally information sponges (being cute doesn’t hurt either – it magnetizes others to feed them with constant stimuli). Their curiosity is boundless; anything and everything is fair game to learn and understand. And even if there is little stimulus to be had at the moment, they have two other tactics: first, use their curiosity to grab things and figure out what they are and how they work; second, if a human is around, ask “why?” – again and again and again and… “Enough already! Go play with your sister…”.

When does such info-sponging become not normal (as opposed to “abnormal”), i.e. when does it become “yes, it’s unusual”? Gradually, as we become older and “life takes over”. An older friend of ours once served on the New York City Board of Education. When we told her that our oldest son was about to enter first grade, we were taken aback at her response: “Too bad; school will spoil his curiosity.”

Some of this is inevitable. After all, not every child will want to know the basics of physics or how to calculate the radius of a circle, but things like that need to be learned in order to function in our increasingly complex world. The bigger problem is that most educational systems are still more oriented to rote-learning than to teaching through figure-it-out-for-yourself education.

And then, of course, there’s life – making a living, raising a family, and the like – which takes up most of our time and energy after leaving the educational system. In short, if school didn’t kill our kids’ curiosity and thirst for learning, life tends to do the job just as effectively.

And yet, this isn’t the whole explanation. We do seek out “information”, just of a different type: celebrity goings-on, political machinations (some important; others far less so), cute cat videos – you get the (YouTube/Instagram/TikTok) picture. Of course, there’s nothing wrong with harmless fun; the question is one of degree. If a person’s life is taken up mostly (or completely) by fun and games – “bread and circuses” as the Romans put it – then the important information simply collapses under the weight of the weightless.

Can the opposite be true too? If over my lifetime I have spent about 90% of my free (i.e. non-working/eating/sleeping/parenting/hygiene) time sponging “hard” information and news, could that too be considered an imbalanced life? Perhaps. But that sort of depends, among other things, on the nature of the information and to what use the 90% is put: to better one’s health? professional expertise? parental and social capability? civic action? Or simply to quench the information craving, however esoteric and useless it might be? As with eating culinary food, so too with ingesting food for thought there’s a difference between gourmand and glutton.

I want to believe that I have found some semblance of balance here: a good part of my self-education is admittedly a function of what catches my intellectual fancy at the moment. But a not inconsiderable amount is purposeful – or as we academics call it: utilitarian. I read lots about medicine that helps keep me very healthy and fit; about evolutionary biology that (perhaps surprisingly) is a boon in being a better communicator with my fellow bipedal primates (also called humans); about economics, obviously helpful in investing and otherwise keeping my bank balance in the black; and so on. Even strange esoterica can be useful if one knows how to dole it out (in small amounts) during social get-togethers.

Does this make me a better person? Not at all. There’s no correlation – let alone causation – between intellectual curiosity (or even brilliance) and social-mindedness, good-heartedness, or any other definition of what a “good” person is. However – admittedly one can argue with me on this – “informaddiction” properly activated can lead to the “good life” in Platonic terms. If the term “mind your own business” is familiar to all of us, I believe that “your business should be mind” is even more apt.

To be sure, not everyone has the capability for this sort of life predilection. And many people who have a relatively high IQ might still prefer to live the entertained life rather than one of sustained self-education. If that makes them happy, fine with me (and hopefully them). To a certain extent, this type of addiction is environmentally and culturally learned: a home with shelves of books; growing up with dinner conversations about the wonders of the world; stimulating teachers. Nevertheless, it seems to me that in the final analysis, to put it simply, simplistically, and also truthfully: you are what you are.

In my case, if that makes me an egghead, so be it. The egg preceded the chicken by about 150,000,000 years (see: esoterica can be interesting!), so I figure that I have a pretty good head start continuing the advance of homo sapiens sapiens – and even (if I and/or they are lucky) resurrecting others’ curiosity.

[1] From the standpoint of journalism, this is what I (along with my co-author Michal Seletzky) called “general news” – not yesterday’s “political” event or economic datum (hard news), nor soft news items found in the middle and back pages of even the most erudite news institutions: food, travel, sports, culture, gossip, and the like.

Freedom of Choice? Beyond Nature & Nurture

As I entered my senior year in college (CCNY), I had to start seriously thinking about what I was going to do with my life “when I grew up”. With an almost straight “A” average in my college grades, I understood that I could get into almost any graduate school of my choice – but in what? For the first time in my life, I had to “introspect” – not something easy to do for a 21-year-old “whippersnapper” (for those too young to have heard this term, it means a young and inexperienced person considered to be presumptuous or overconfident). I had a good mind, and an even “better” mouth – indeed, my cousin babysitters, Ruthie and Naomi, used to say about me when I toddled into the room: “here comes the mouth”!

After a while and some “consultation” with a few of my college professors, I whittled down my choices to two relevant (for me) possibilities: Law School or Graduate School (for a PhD). With my grades, I was almost a sure-bet to get into Harvard Law and Harvard Graduate School for Arts & Sciences (GSAS). Which was it to be?

The considerations were pretty straightforward (I am only slightly exaggerating here): if I go to Harvard Law, within 5 years of graduating I would probably be earning $500,000 a year (and that’s back in the mid-1970s!). If I attend Harvard GSAS, then I would be earning about $50,000 annually. A no-brainer? Not exactly. A high-powered career in Law meant (still means) that I would be working close to 24/7/365: oodles of money and no personal, free time. A career as a professor meant far less money but a lot more freedom to do what I want professionally, when I want, and how I want.

“Prof. Sam” provides the clue to my ultimate choice. And I have never regretted that decision.

One of the most hotly debated issues in academia these days is the Nature/Nurture divide. Simply put: in the way we behave and think, are humans mostly/completely a product of our biological-genetic makeup, or mainly influenced by our environment (social and physical), e.g. parental upbringing, education, societal norms, weather, etc.? I do not intend here to dive into this very thorny controversy, but will only remark that the latest scientific research (e.g. epigenetics) clearly shows the interaction between the two.

However, there is a third factor here that is not given much attention: personal choice, otherwise known as “free will”. That too is a highly fraught term in contemporary scholarship, with serious arguments – philosophical and neuro-scientific – on both sides of the issue. Some argue that our decisions are a deterministic product of all the internal and external forces that impinge on us. For instance, why do I choose to eat a banana right now? First, because I am hungry (internal pressure); second, because I read somewhere (external) that bananas have potassium which I need after playing an hour or so of intensive basketball. Others claim that “we” don’t really have free will because it turns out (incontrovertible empirical research) that our brains make a decision (for us?) a split second before we are aware that we have decided!

Our common sense understanding of free will, however, accepts that at the extreme margins, we do not have “free will”. We are all aware that we can’t decide to have our bodies fly through the air or see through walls; without the necessary wealth, most of us can’t simply decide to take a round-the-world cruise over the next two years; and so on. Our internal, physical makeup and external, social environment put quite a lot of restrictions on our “free will”. We live with that because “that’s the way it is”.

Between those polar extremes, though, we do feel that we can make choices large and small – even if they are in some loose way influenced by other life factors. After all, if instead of educating us our parents had put us in an isolation cage for eighteen years with almost no external stimuli, our choices in life would be far more circumscribed (no language, no education, no exercise etc). By being “out in the world”, we are pushed and pulled by an almost infinite number of “influences”. And yet, we are not a planet stuck in orbit around a sun for time immemorial; we can determine to some – and even a large extent – our personal life orbit.

However – and here’s the key point – in order to make choices based on some measure of free will, we have to be aware of the “deterministic” factors around and within us that might cause us to act more like a gravity-captured planet rather than individuals with freedom of choice. In short, it is not so much political dictators that prevent us from acting freely; it is our own lack of self-cognizance regarding what is pushing and pulling us down a “pre-determined” path.

This involves several things:

First, as Nobel-prize winner Daniel Kahneman and his research partner Amos Tversky (who passed away before the Nobel was awarded) have shown, our cognitive apparatus (aka: “brain”) is full of traps and obstacles to clear thinking. We leap (in this case, “jump” is too mild a word) to conclusions without sufficient evidence or logical thought. Not only the “uneducated”; professors and researchers/scientists are almost as guilty of this – especially regarding almost any topic not within their field of expertise. (By the way, it is not expert knowledge per se that keeps them from falling into mental traps within their own expertise, but rather that in such fields they have been trained to weigh the evidence in “reasonable” fashion; unfortunately, such thinking patterns are not easily transferable to other topics.)

Second, we are all influenced in some way by social norms: some blatant, others subtle. Blatant: in theory, there is no reason why we couldn’t all walk around without clothes on (when it’s warm enough), but we don’t because society clearly does not approve – which is why “nudist colonies” are almost always found in remote regions. Subtle: in America, we expect our conversationalist partner to stand about a yard/meter away from us; in Latin countries, that’s considered to be “distant” (literally and figuratively). No one in any of these countries ever thinks about the norm of “personal space” until meeting another culture where the norm is different. But in our own society we all act (a bit more or less) in accordance with that norm. I won’t repeat here what I have already noted in a previous “Prof. Sam” essay (“Connections”) regarding the great impact living within a social environment has upon us.

Third, if societal norms reflect the macro-situation, then our family and close friends also have expectations regarding our behavior. Indeed, as Muzafer Sherif’s famous 1950s “color war” experiment with pre-teenagers found, it doesn’t even have to be someone we are close to – it’s enough to feel “part of the group”. We are all social animals, having evolved eons ago in groups of around 50 people – so when we’re in a group of any “small” size, we will quickly adhere to what’s “expected of us” in order to “survive” (other researchers have since found that the maximum number of people we can be truly friendly-with/close-to is around 150). To take but one kind of example, movies such as “My Big Fat Greek Wedding” and “The Big Sick” seriously (and hilariously) show how family pressure can significantly restrict our ability to make major life decisions for ourselves.

Luckily for me, my mother was not the “my son the lawyer” type. Would my decision to pursue a career in academia have been different were she that sort of mom? Who knows? But if I had to make that decision thirty years earlier (the 1930s), when antisemitism ran rampant in professorial academia (not that the legal profession back then was a bastion of tolerance), I probably would not have chosen this career.

Fourth and finally, the matter of free will and our conscious decision-making process is greatly complicated by a phenomenon that only recently have neuroscientists become aware of, one that I mentioned above: it is not clear who/what is the “I” making the decision! Before I continue, apologies for the semantic confusion this might cause, because our language has not caught up with the latest research. I will use the first person here to make things easier.

When I am faced with a decision of any sort, I eventually decide what to do. But it turns out that my brain makes the decision about four-tenths of a second before “I” (consciously) do! Of course, my brain “belongs” to me, but still there is a difference between “me” deciding and my brain deciding for me before I am aware of it. How does this affect our concept of “free will”? You decide (pun intended): either our free will stays intact (I am my brain, so in fact I made that decision), or it becomes a slippery concept (I was not aware that my brain was “deciding for me”).

The bottom line: we are always “free” in theory to make our own decisions, but the degree of such freedom is heavily dependent on the number and power of the cognitive and social obstacles we have to overcome to make such a personal choice – not to mention our understanding of what/who exactly “we” are when deciding something. Greater personal freedom, then, is not only a matter of “freeing” ourselves but also (perhaps primarily?) of reducing society’s strictures and expectations of what each of us should be doing and deciding, as well as being aware of subconscious processes deep in the recesses of “our” mind.

Two Bad Can be G😃😃d

After several years of marriage, it became clear that Tami and I were having problems getting pregnant. After both of us underwent all sorts of tests (and subsequent “procedures”), the doctors asked us to query our mothers about a drug called DES that decades earlier had been given to pregnant women. Amazingly, both my mother and Tami’s mom had been given DES in the very late 1940s – the cause of our mutual infertility.

In math, we are all taught that multiplying two negative numbers yields a positive. But in real life, it turns out that adding two negatives can also end up as a “positive”.
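For the mathematically inclined, the contrast underlying the metaphor can be spelled out in symbols (a small aside, nothing more):

```latex
\[
(-1) \times (-1) = +1, \qquad (-1) + (-1) = -2
\]
```

In arithmetic, only multiplication turns two negatives into a positive; the happy surprise of real life is that sometimes plain addition does the trick as well.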

A couple’s infertility has the potential of being a marriage-breaker – for two reasons. First, it demands of the couple some soul-searching and heavy decision-making: Do they go childless? Do they try to adopt – and if so, how and who? Or perhaps surrogacy?

These are very difficult choices, each with substantial advantages and downsides. Childless through life? Lots of freedom and secure finances, but with a “hole” in the family unit, not to mention serious familial loneliness in old age. Adopt through an agency? Not that expensive, but not too much choice regarding the type of child, unless you are willing to forgo a baby for a somewhat older kid. A legal, private adoption? More control over what you are getting – but frightfully expensive (lawyers’ fees, the biological mother’s medical costs, etc.). Hiring a surrogate and/or using a donated ovum – or a sperm bank (depending on who is infertile)? Cost and/or parentage issues. In short, any one of these questions can lead to a serious rift between a married couple.

By far the worst issue, though, is the “blame game”: who is the infertile one? Whether husband or wife, emotions can run riot. On the part of the infertile spouse, a major blow to self-esteem and perhaps jealousy of the “healthy” partner. The fertile spouse has a tough choice – almost Solomonic: to continue with the marriage at the cost of never having biological progeny, or to sacrifice a marriage partner for “genetic continuity” (if not biological immortality). In short, minus one added to plus one can equal a huge negative.

But if both spouses are negative, the equation pretty much straightens itself out! Neither is jealous of the other. Surrogacy is out (except for sperm and ovum donations). Only the quandary of childless freedom vs. (type of) adoption remains as a tough decision. In our case, we quickly agreed on adoption (although the process for each of our two sons was wildly different).

While not at the same level of “severity”, another ostensible double whammy had no less an impact on my life. Indeed, I considered it then to be so “horrendous” that it was the only time in my life that I was really furious at my mother. In the 8th grade of my Jewish Day School, we had the choice of taking the entrance exam to the Bronx High School of Science – the highest ranked and most well-regarded high school in all of New York City. My mom allowed me to take the test, which I passed – and then she wouldn’t let me go! “You need to continue getting a good Jewish education,” and that was that.

Anger and frustration hardly begin to describe my feelings back then – for two reasons (the “double whammy”). First, which kid would not want to be in such an elite high school? I always had a keen interest in science and was pretty good at math. Second (the other side of the coin), eight years of Jewish education was quite enough for me; what was there still to learn? (Young teenagers are not known for their “wisdom”; they would more correctly be called “wise-dumb”!) The thought of four more years studying Talmud and Hebrew (not to mention it being an all-boys school!) was not what I was looking forward to.

In retrospect, my mother’s decision changed my life in unintended, positive ways. Of course, one can never know “what if” I had gone to Bronx Science. But this was what happened at Yeshiva University High School in Manhattan. First and foremost, I did get a solid Jewish & Hebrew education that enabled me later on to offer a tentative “yes” to Tami’s “ridiculous” demand before she would go on a second date with me: would I consider making “aliyah” (moving to Israel)? That education also formed part of my secondary research agenda later in my academic career: writing on the Jewish Political Tradition.

Second – and this might seem a rather minor outcome, though in my eyes it was of major importance down the road, something I already alluded to in my previous post, entitled Fa(r)ther – I was able to play on my high school basketball varsity. (I most probably wasn’t good enough to make the Bronx Science team – who says “nerds” can’t be athletes? – and anyway, they played many games on Saturday, my Shabbat.) Over 50 years later, at age 71 (pre-Corona), I continue to play intensive hoops twice a week with some guys around half my age – and I’m one of the more energetic players among them.

What’s the big deal? As I have already mentioned, my father died at 57 from heart failure; similarly, his sister and brother passed away when only somewhat older than him. I have been aware of this “genetic threat” since my twenties, so exercise became for me a potential lifesaver. However, as many exercise wannabes know only too well, if it’s drudgery you won’t stick to it. For me, basketball was always FUN – easy to stick to. Moreover, there was a secondary element to this: as I got older, into my 40s and 50s, it became clear that in order to be able to run up and down the court for an hour and a half, I had to stay relatively thin – thus impacting my eating habits: healthy and minimalistic. People constantly tell me “of course you’re thin – you play basketball.” They have the cause and effect backwards: in fact, I stay thin in order to play basketball! In short, what for me back then was my mother’s double “terrible” decision (letting me take the entrance exam – and then not allowing me to go to that school) ultimately turned out really well for all concerned.

These were my double “bads” that flipped into big positives – the first averting marital disaster, the second placing me in a life path that at the time (a 14-year-old whippersnapper!) I viewed as “calamitous” but turning out far better than I could have imagined. Obviously, there are many other life situations where a double whammy can ultimately end up being quite beneficial. To take a common one: I am sure that there are numerous men and women out there who become unemployed and/or get divorced, and as a result move to a new city – only to find the true love of their life or get hired for their dream job.

The lesson is universal: the paths of life are never linear; what seems negative at first – especially when doubled over – can hold profound (and unexpected), positive consequences. Does this mean that everything bad ends up good? Of course not! But it does suggest that whatever knocks we encounter in life, it pays to maintain a long-term perspective. What we feel at the time they occur might not at all be how we view them in hindsight decades later. And sometimes, despair twice-over shared with another person close to us can have its own mutual, positively reinforcing benefits.


Fa(r)ther

Like most immigrant men back in the mid-20th century, my father – Arthur Wilzig – did not have any easy time supporting his family economically. And like most native-born teenagers of immigrant parents, not only was I rather clueless about that, but also not very interested in his past life, or then-present difficulties. Indeed, our relationship might have been more fraught than most – mostly my fault (in hindsight). But what is most striking (and paradoxical) to me today is how much of myself is a reflection of what he did – and also what he didn’t succeed in doing – back then.

Mark Twain was said to have said (but didn’t): “history doesn’t repeat, but it sure rhymes”. That’s one of those fake aphorisms that contain more truth than many that were actually penned. It’s true on a societal level, and in my case, quite true on a personal level as well – up to a point.

In thinking back to “dad”, several striking parallelisms become clear. First, he was not a religious man but became heavily involved in our New York synagogue as a Trustee. My ideational “religiosity” is quite unorthodox (or should I spell it unOrthodox), but I too have served as president of one synagogue in Israel, and chair of the Ritual Committee in another.

Second, despite becoming a well-respected figure in Havana, Cuba – not to mention a man-about-town and popular ladies’ man – Arthur fell in love with my mom (Jenny) and after marriage agreed to immigrate once again (he was born in what was then eastern Germany; after WW2 the territory was transferred to Poland), this time to New York City. I too fell in love with a “Zionist nut” and agreed (pretty soon after our first date!!) to eventually move to Israel – which Tami and I did three years after marriage.

Third, in New York, my father struggled to support us – among other things, at the expense of spending time with the family, given that he would get home quite late in the evenings from work. I had less economic pressure than my father, but perhaps even more (self)pressure professionally – “publish or perish” – and frankly spent far less time with my two kids than I should have.

Fourth, as “compensation”, dad would spend whatever “surplus” money the family had on summer vacations for my mother, brother, and me – in bungalow resorts, overseas visits, or summer camps. On a somewhat different plane, Tami and I spent a huge amount of money on child psychologists to try and keep our two kids on an even keel.

Fifth, the famous Wilzig family “temper” was in evidence with dad’s raising his voice throughout my childhood, but here I unfortunately “outdid” him. More on this in a moment.

Finally, a sort of “reverse” parallel: my father’s death at the early age of 57 (heart attack; cigarettes; meat & potatoes; no recreation or sports; etc) incentivized me to do the opposite in quite radical fashion – the main subject of this essay.

It’s here that we arrive at the “pun” in the title of this specific essay: Fa(r)ther. Our parents can/should teach us many things directly, and they certainly do indirectly, as behavioral models. But in the final analysis it is up to each of us to take what we find useful and learn what not to do with other things our parents perhaps mishandled. Of course, 20-20 hindsight is easy; we do things that we think are fine – and in the future our kids will look back and say “what was s/he thinking?” Notwithstanding this generational truism, regarding our parents we should always try to go one or two steps farther than they were able to in their actions and general behavior. It’s called “maturity”.

My example here is just that – one of many that other people have to “overcome” in their own child-to-parent (and on to our next generation’s children) cycle. The following remarks, then, should be seen as merely illustrative.

It doesn’t take a rocket scientist to understand that an explosive temper is not good for the heart (not to mention for everyone else around). But “understanding” is one thing; behavioral change is another. On rare occasions this can be accomplished purely through self-reflection (i.e. lots of looking in the metaphorical mirror). Normally, though, it needs some external push or motivation. By the time I was approaching my 50s – the same decade in which my father had his heart attack – I started worrying about my “prognosis” in this regard. This led to some soul-searching (perhaps in this specific case, “heart-searching” would be a better term). And from there, I made a really big effort to temper my temper. I succeeded (about 90%), as my oldest son, Boaz, acknowledged to Tami several years later.

Easy? Not at all. But I had “help”: Tami. Not only did she constantly remind me (“complain” might be more appropriate) of my temper, but she pointed out something quite strange. When I was at work, even as a “boss” (chairing academic divisions, departments etc), I almost never raised my voice, forget about losing my temper! In short, this wasn’t an “impossible” goal; it merely meant transferring my behavior from one part of my life to another, more important, sphere.

There was another area of my life that helped as well. I have played basketball (and sporadically, tennis too) on a pretty steady basis since my early teens. Why? Mainly out of love of the sport(s), but also for health reasons. A byproduct, though, is “pressure-reduction”, otherwise called “letting off steam”. Actually, for this specific purpose tennis is superior – physically smashing a ball is a lot better than verbally smashing other people! In any case, I am now 14 years past the age of my father’s demise – still playing basketball (a topic for a future essay).

To be sure, every parent wants their children to “surpass” them. This usually means professionally, or in some cultures with more progeny. Less thought about – but in my opinion, of far greater importance – is behavior. This can entail how we treat family members; how we behave at work (customers, co-workers, employees); and in general, to what extent we find it easier to “look in the mirror”.

Just as there is no such thing as a perfectly smooth mirror, there is also no perfectly behaved human being. (The Bible goes out of its way to even find some minor fault with Moses!) But mirrors have become smoother over time; with lots of self-reflection, we too can attain a level that is a somewhat better reflection of our own parents.


During my doctoral coursework, I learned from several truly outstanding (and world-famous) professors: Sam Huntington (who would become my dissertation advisor), Seymour Martin Lipset, Louis Hartz, Harvey Mansfield, and Judith Shklar. However, the professor who made the most profound (long-term) impact on me was Daniel Bell. Not because he was a great lecturer (that was Hartz), nor an unusual “mensch” (that was Lipset). What made Bell stand apart was his unbelievable multi-disciplinarity – the man was a polymath.

We’ve all met people who know a lot about a lot. However, what turns such knowledge into something special are the connections made between seemingly non-related fields of endeavor, and the novel insights such associations can engender. Prof. Bell connected the humanities and the physical sciences in ways that were truly thought-provoking – and wildly unexpected – and he was a social science scholar! The clearest example of this was his seminal book The Coming of Post-Industrial Society (1973) – the first time anyone clearly analyzed (and foresaw) what eventually came to be called “The Information Age”. He then followed that with his no-less-prescient The Cultural Contradictions of Capitalism (1976) – a perspicacious (non-Marxian) prediction of the inherent weaknesses within the capitalist value system, or if you will: the psychological underpinning of the 2008 Great Recession.

For the past several millennia, political philosophers, sociologists, psychologists, ethicists, theologians – just about anyone who deals with the study of humanity – have argued over a foundational question: are humans, at base, individualists or social animals? To put it another way, are we basically creatures who cherish freedom above all else but need and use social contact with others for self-interested, functional reasons – or do we first look for psychological comfort in the presence of others and self-identify first and foremost through a group identity, occasionally withdrawing into ourselves for some privacy and freedom from constraints?

There are two general approaches to answering such a question. The first approach: there isn’t a definitive answer, but rather it’s a matter of degree depending on the culture and society in which one is educated and lives. Some societies prioritize the individual (America), others place greater value on the group (most Far Eastern nations). The second approach: human nature is the same everywhere, so that whereas every person has a bit of one or the other (individuality vs. sociability), ultimately, we will scientifically discover which of these two constitutes the bedrock of humanity.

Growing up in the U.S. and notwithstanding my strongly Jewish education, I started out not only believing in the second approach but I was convinced that we already had the answer: the individual über alles. After all, hadn’t the individualist ethos turned the U.S. into the greatest power on earth – not only militarily but also culturally, intellectually, scientifically, etc?

But the more I studied and researched politics & society – especially other cultures, historical periods, and social-behavioral patterns – the more I moved into the first camp: human behavior and thought were contextual, situational, and relative to their time. Sure, America the Great was highly individualistic (as perhaps was Greece to a more limited extent) – but the Chinese and also the Mongol Empires were “Great” in their time as well, yet far more group-oriented than individualistic.

So I taught for a few decades (perhaps even pontificating at times). However, as I gradually moved in mid-career from researching politics to studying communications and especially new media – and from there broadening my horizons into technology and science writ large – I have changed my opinion once again. I am convinced that the second approach is correct – a kernel of human psychology and behavior does exist – but this time it falls on the side of sociality. The reason? Connections. (I am happy that you stuck with me to this point, probably asking yourself: what does all this have to do with the chapter title?)

Let’s start with one of the keywords in the field of sociology as well as new media: networks. It is a banal truism to state that babies need parents and other caretakers to learn and grow up; it is less banal (although to me quite obvious) to state that throughout our entire life we continue to “grow”/change (not to mention survive) by developing and nurturing relationships with other people. Indeed, among other facts that science has uncovered are these two: babies who have little physical or social contact tend to get far sicker and have greater infant mortality than others who are properly nourished socially (even controlling for similar nutrition); the number one “killer” of older people (i.e. dying before their cohort’s life expectancy) is… LONELINESS!

We are “social” animals by nature. However, society can involve competition or cooperation. Darwin believed that competition was the driving force behind evolution; we now know that cooperation was (and continues to be) at least as important. Indeed, that is what human speech is all about – enabling us to communicate and thus cooperate far more than any animal is capable of. You don’t need speech to kill; you do need it to help others.

It took hundreds of thousands of years for hominids (our ancient fore-species) to slowly evolve into homo sapiens (thinking Man), but once we developed speech about 100,000 years ago evolution started speeding up, increasingly so. In less than 50,000 years we had developed “culture” (burial sites, cave art), and within another 40,000 years we reached “civilization” – leaving the small, extended family as our main social unit, and developing the village, town, city, city-state, and finally empire – with all their attendant revolutionary inventions and ideas as a result of so many more people coming into contact one with the other.

This is the picture regarding intelligence and creativity, on both the macro level (human race) and meso (individuals). It explains why we have far more inventions in the contemporary age than in the past; indeed, also why almost all truly creative work (artistic, scientific, technological etc) occurs in cities and not in rural areas. The one-word answer: “connections”. Just as we moved millennia ago from extended family (50 people) to city (thousands), in the contemporary age there are far more people in the world (close to 8 billion) than in the past (in 1800 there were only one billion people on Earth). To this we also have to add that the means of communication between these social units are far greater and more efficient. The result: collective “social intelligence” has skyrocketed.

I mentioned above that “individuals” = “meso” (intermediate) level. What, then, is the “micro” plane? Our brain! Once again, we find here the same story. It isn’t the 100 billion neurons (brain cells) that render us intelligent, but rather the fact that each neuron is connected to thousands (!) of other neurons, i.e. we have trillions of connections in one brain!! (Neurobiology studies also show that people with a faster “connection time” between brain cells also happen to be more intelligent; the micro equivalent of the macro-world’s modern media and hyper-fast means of communication.)

That’s “intelligence”. What about “creativity”? (The two are not synonymous; above approximately 120 IQ, there is no correlation between the two. In short, you have to be mildly intelligent in order to be creative, but certainly do not have to be an intellectual genius.) Here too we find that one of the main bases of creativity is the ability to bring together very different types of ideas or concepts into creating a new whole – sort of 1 + 4 + 3 = 13. In other words, the act of creation is a function of making something from many prior somethings; only God can perform Creatio ex Nihilo: creation from nothing.

Which is why I laugh (to myself) and groan (out loud) when my students ask: “Who needs to know all this stuff when we have Wikipedia or Google?” My reply: “If you need to know some specific facts, then Wikipedia is great; but if you want to create something new, then those facts have to be already in your head and not simply out there for retrieval, because you won’t even realize that these specific facts constitute the building blocks of the new ideational house that you want to build.”

In most cases, such facts are not from the same field but rather quite distant from each other – it’s the very “incongruity” of these facts that makes their “miscegenation” so productive! One example: George Lucas (of Star Wars directorial fame) received the National Medal of Technology in 2004 and the National Medal of Arts in 2012. One would be hard put to come up with two disciplines – technology and arts – more distant from each other, at least on the face of it. But that’s precisely what made him so “creative”.

Indeed, this explains why most “Eureka” moments occur when we’re not trying to solve a problem / come up with a novel solution. At rest (napping, showering, daydreaming, etc) our brain is furiously making connections between all sorts of disparate “factoids” stored in our memory – the vast majority of such connections being useless and thus immediately discarded by our brain. But once in a while the brain manages to come up with a “worthwhile” connection between ideas, however “distant” from each other – and voila! we have the creative solution.

Interestingly, this does not happen with the same frequency when we are actively trying to solve the problem, because our conscious mind is not aware of the myriad “factoids” stored in our brain. As strange as it might sound, our brain does a better job of problem-solving (connections production) when we are not directing it – similar to a decentralized corporation whose workers come up with more and better solutions through unfettered, horizontal contact between themselves than if upper management directed them how to act and think about the issue. Corporate management would do well to take lessons from neurobiology.

The bottom line of all this is not that we are all ants in a gigantic colony (although they too are an excellent example of the connectivity phenomenon: each ant by itself is as dumb as they get; but put them together, connecting through chemical pheromones, and “miraculously” one gets a highly intelligent animal “society”). Humans are individuals and have specific – and occasionally idiosyncratic – wishes, thoughts and behavior patterns. But in some parts of the world we have taken this concept to its illogical extreme: the individual über alles. In truth, it’s our connection to others that enables us to truly express our full humanity – and it’s our ability to connect seemingly unrelated ideas and concepts that is the fount of our intelligence and creativity.

You’re Incredibly Lucky to be Alive

My parents met in Cuba. My mother Jenny Weinreb was a refugee, escaping Nazi Germany in 1939 (alone on a German boat at age 13!) to Lisbon (where my Aunt Eva was married with two baby twins), and from there to a British refugee camp in Jamaica (1942?), and finally to Cuba in 1944. My father Arthur Wilzig left Poland in 1936 straight to Cuba (where a large expatriate community had established itself in 1926 after widespread Polish pogroms).

They married in 1946 and I was conceived in Cuba in 1948, but born in May 1949 in the U.S. where they had immigrated because “there was no future Jewish life in Cuba”, as my mother explained to me decades later. Thus, I definitely qualified as an “international baby” from several perspectives.

How often have we heard others say that they’re “lucky to be alive” because they just missed being killed in some accident? Indeed, Judaism has a special prayer for someone escaping danger: “gomel”.

But this is missing the main point: EVERYONE in the world is unbelievably lucky to be alive! From several standpoints.

First, although we do not know for sure if there is life anywhere else in the universe, it is clear that the chances of life “evolving” on any one planet are very, very small (but as there are trillions upon trillions of stars and planets in the universe, overall the probability is that there is “life” elsewhere: see the “Drake Equation”). You and I happen to be living on a planet where this did happen – the chance of that is less than winning the national Powerball Lottery. So thank your lucky star (pun intended)!

Second, in many cases the chances of your parents meeting when they did are also quite small (notwithstanding the occasional “marrying one’s high school sweetheart”). In the course of their youth and young adulthood, each of them met – even superficially – a few thousand people, among the millions they could have met and the thousands they reasonably could have married. And if either had met (and married) someone else, you wouldn’t be here! Doubly lucky.

Third, and something of a head scratcher, is the most important “lucky event” of them all. In a nutshell, whereas your mother dropped one ovum each month down her fallopian tubes to be (potentially) impregnated, your father’s ejaculate contained close to one hundred million (100,000,000!) sperm. So, think about this: if the specific sperm that impregnated your mom’s egg had been beaten in that swimming race by one other among the “99,999,999” sperm, would YOU be alive at all? Probably not – just someone pretty similar to you (at least, similar to when you were a newborn).

Never thought of that, did you? And that’s the point. Life is largely a matter of attitude and perspective. You can look at your cup that’s missing a few drops on top, or at the cup that’s brimming with life (yours). I am certainly aware that this is not the way people usually think about their life, but that’s my point: as human beings, we are capable of shifting our viewpoint once we look at the mirror straight on, instead of from the side. Some people don’t need to do this – their natural predisposition (what is usually called “attitude”) is to be positive and optimistic. Others are naturally pessimistic, seeing more night than day. And then there’s the broad middle who can go either way, depending…

On what? On their social circle; on their socio-economic status; on their social happiness (quality of marriage; relationship with their kids; etc); and dare I say it? On how they consume media information.

No, I am not about to partake in modernity’s national sport: media-bashing. Actually, the “people” (that’s you and me) are at fault. When was the last time you read this headline (change the country name as you wish): “Yesterday, 8 million Israelis had an Uneventful Day.” Never. Because we, the readers, listeners, and viewers, want to get “bad news” – almost the only kind that we consider “news” at all.

Why? For that we have to go back several hundred thousand years (or even millions). All creatures on earth – and certainly evolving humans who were not particularly strong or fast – had (and have) to be on constant lookout for danger that could annihilate them: back then, tigers and human enemies; today, technology (e.g. cars) and human enemies. Thus, danger-sniffing is planted deeply in our genes. And what constitutes “danger” (socially and not only personally) is not only the usual stuff (plane crashes; war breaking out) but also – even increasingly – unusual stuff that we are not familiar with. Flying machines? (Early 20th century) People changing their sex?? (Mid-20th century) Robots building cars faster than humans can??? (Late 20th century) Designer babies?!?! (21st Century).

The media – old (“legacy”) and new (“digital”) – are doing an increasingly better job of ferreting out each and every “potentially dangerous”/unusual item of information from around the world. And as we absorb more and more of this (with the occasional respite through sports scores and fashion fads), our perception of the world is that things are going to hell in a handbasket faster than ever. Ah, nostalgia: “when I was growing up, the world was a better place”. Baloney. As a matter of fact, there is less violence in the world today (per capita) than ever before in human history! (Don’t take my word for this; read Steven Pinker’s The Better Angels of Our Nature.)

So we “moderns” are caught in a paradoxical situation: the better things get (objectively), the worse they seem (subjectively). The solution is not to avoid the media (we do want to know what real dangers lie out there). Rather, it is to choose wisely which media to consume – and then understand that 99% of what is happening in the world is positive, precisely what is NOT found in the media (because we don’t want to read “good news”).

Freed from the shackles of informational negativity, we can then more realistically work on developing a more positive and optimistic attitude towards the world – micro (ours personally) and macro (society and the world at large).

Coda: the latest research shows that people with a generally positive attitude have, on average, a lifespan that’s a few years longer than those with ingrained (or acquired) negativity. So not only are the former “lucky to be alive”, but by appreciating that fact they actually live longer – and that’s not a matter of luck.

So here’s my final “you’re really lucky booster”. Remember what I said about your parents’ egg and sperm? Now take that back every generation of your personal forebears! In other words, if any procreation act of any of your grand/grand/grand(etc)parents had a different sperm win that specific impregnation race to the ovum, you would not be here (or anywhere, ever)! So just this one, repeated, “generational” piece of good fortune should be enough to get you to count your blessings every minute of your day. 

Only Politics as News?

I spent half of my 40 or so years in academia teaching and researching Politics; the other half was spent doing the same in Communications, mainly Journalism. Scholars tend to focus on rather narrow topics for their research, and I was usually not an exception. But this “reflective memoir” offers me a chance to ask a BIG question.

Imagine an alien coming down to Earth for some “anthropological” study. Among the places it visits is academia, where it finds among the larger universities approximately 40-50 departments, each specializing in a different subject or field of life. One of these is Political Science; perhaps another is International Relations. It then opens the Bureau of Labor Statistics annual reports and discovers year after year that approximately 5% of the work force is employed in government at all levels.

After a hard day’s work, the alien sits down to read a few newspapers (print or screen). “Very strange,” it muses out loud after scanning several of them. “All these news media seem to spend almost half their time and energy on only one field: politics. I wonder why – after all, far fewer people work in politics, and even fewer study the subject.”

Indeed! Open any newspaper (not counting the National Enquirer and its ilk) and that’s what you will find: politics, politics, education, politics, sports, politics, science, politics, politics, politics, technology, politics…

Why is this so? One obvious reason is that other than war – itself a continuation of politics by other means, as Clausewitz opined – there is no other area of life with as much “Action & Drama” as the political world. By its very nature it is based on, and suffused with, conflict. And as we are well aware, we are evolutionarily primed to “notice” human conflict as opposed to other forms of human interaction. In short, the news media are simply “playing to the crowd”.

A second reason for the dominance of political news is the influence of politics on our lives: taxes, health policy, environmental regulation, macro-economic policy, and so on – all obviously have a significant impact on our lives. Moreover, in a democracy the elected leaders are doing what we (supposedly) (s)elected them to do for us. Political news is not just what “they are doing to/for us” but also – or rather – whether they are doing what we agreed that they would do. In short, it is a public mirror of our (un)met desires.

So it is not surprising that politics takes up more space and time on the news than any other field of human endeavor. And yet…

When one looks at what truly affects us, it turns out that a huge amount of influential goings-on aren’t in the political sphere at all. There are almost countless scientific and technological advances reported each day in professional journals and other non-“news” venues that will change our lives down the road: pharmaceuticals & health (new vaccines, drugs, and bio-tech); transportation (drones, flying cars, autonomous vehicles); economics (cyber-currency, cybercrime, online banking, telework); environment (rising sea levels, catastrophic weather, global warming, bio-extinction); and so on. Of course, these are covered by the news, but with a frequency nowhere near the impact that they (will) have on our lives. So what’s the problem?

I think that there are a few explanations. First – as Tevye the Milkman once said: “Tradition!” Newspapers, as we broadly understand the term, commenced about 400 years ago; the profession of journalism started about 200 years ago (earlier than that, the publisher/printer doubled as “reporter”). Back then, “politics” was just about all that was happening! Modern science was barely getting started; there were almost no macro-economics to talk about (or understand, until Adam Smith’s 1776 Wealth of Nations); the lives of common folk interested no one (most couldn’t even afford to buy a newspaper); not even team sports existed to fill news space!

In that news black hole, politics – war, diplomacy, rulers’ machinations – was all there was to report on (other than announcing which ship just docked with what imports for local sale). This “tradition” then reproduced itself even when more non-political news did emerge in the 1900s, and continues to this day. In fact, there is a general term for the generational continuation of an original practice: “path dependence”.

The most famous example of this is the QWERTY keyboard – completely illogical, as its layout has no connection whatsoever to the frequency of letters typed or even letter contiguity in the English language. How, then, did that get started? The first typewriters had an “arm” for each letter that struck the ink ribbon on top of the blank paper. But as that early typewriter was quite primitive, if one typed too fast the arms would jam together – so the keyboard was designed to make typing MORE difficult (and slower)! By the time this physical problem was fixed in ensuing versions, secretaries had already learned “touch-typing” on the QWERTY keyboard and refused to change to a more sensible layout. So look at your advanced computer keyboard – still stuck back in the late 19th century!

One can add to the “tradition” of political journalism the fact that reporters and editors are like almost all other humans – they follow the herd. Indeed, one could argue that news is merely “I herd it through the grapevine”, with each journalist reporting on those matters that other reporters are covering. What about their constant quest for the “scoop”? That’s new news within the same general subject area, i.e. politics!

A second (or third, if the “herd” counts as separate) factor turns the mirror from journalists to each of us, the news consumer. Human beings are invariably “here and now” creatures. From an evolutionary standpoint (until the modern age), there were too many present obstacles and dangers to contend with; thinking about “the future” was a luxury that almost none could afford (which is why Pharaoh was so taken aback by, and taken with, Joseph’s highly unusual plan to save grain for 7-14 years hence). And on top of that, predicting the future was such an “iffy” affair that other than professional “predictors” (oracles, prophets, seers etc), no one even tried it.

Given this built-in human myopia, it is hardly surprising that journalists too would focus on the immediate present – not only because most understand the “iffiness” of future prognostication, but because their readers, listeners, and viewers just can’t get too worked up over the “future”. A perfect example: it has taken decades for journalism to really start focusing on global warming and other serious environmental issues – in large part due to the “ho-hum” response of its audience to bad things that will take place “decades from now”. When did the “environment” finally get serious public and journalistic traction? When the politicians started fighting over it!

In short, getting journalists to consistently focus on any serious or significant topic is a lost cause – unless someone (economists, scientists, pressure groups etc) can turn it into a “political issue”. This “topic myopia” is certainly short-sighted (by definition), but it’s what we – journalists and public alike – have been trained and accustomed to.

I guess by focusing most of my academic work on political science and mass communications I made the correct professional decision – which doesn’t mean that in practice, combining the two should be the be-all and end-all of our news production and consumption practices.


My father Arthur Wilzig was born in 1910 in the tiny Polish-German (the nationality depended on the era) town of Krojanke; my mother Jenny Weinreb (1925), in Hamburg, Germany, moving to Berlin at age 2. From there the “wandering Jew” syndrome took over for both: he moved to Cuba in 1936 (where there was a large Polish-Jewish expat community, fleeing the Polish pogroms of the mid-1920s), and she escaped from Nazi Germany in 1939 to Lisbon, and from there to a British-run refugee camp in Jamaica, and from there to Cuba in 1944, where she met my father, who by then was Vice Chairman of the Joint Distribution Committee, greeting Jewish refugees in his official capacity. They married in 1946 and I was conceived in Cuba in late 1948, when they immigrated to the U.S. with me in utero – making me a truly “international baby” (1949). No surprise, then, that 28 years later I too moved halfway round the world to Israel. But more on all this later…

This is not a memoir. I might have lived a fairly interesting life – interesting, that is, to me and perhaps a few loved ones – but certainly not enough to entice a wider audience than that. Rather, I am using certain aspects of my life as a diving board to jump into some deeper ruminative waters that I trust will be of interest to many people. Each “chapter” starts with (or includes) a short personal event or happenstance – and from there I delve into what I believe is the “larger, more universal meaning” that others can learn from these. Thus, you need not read these in order – it’s altogether fine to cherry-pick the ones whose title and topic catch your fancy.

Obviously, I have no pretensions to completeness or comprehensiveness. A person can live only a very small number of the life possibilities afforded by modern society. But within that circumscribed life, each of us comes up against many of the same problems, issues, dilemmas, and other choices that make living in our era so challenging (and interesting). It is to these aspects that I devote my attention here – indirectly suggesting some of what made, and makes, me who I (S)am.

Which leads to my first warning: I love puns, and am known in my circle of friends as being a real groan-up. I’ll try not to overdo it here, but as I really do not have any addictions, let this be my worst vice.

Indeed, if you didn’t notice, the title of this series is a pun (I just did this twice: the first and also the last time I explain a pun of mine). As suggested earlier, I am trying to do two things here: reflect on life in general through specific mirror reflections of things that I have done or that happened to me. These are not necessarily the most important things in my life, but rather events that set me thinking about important issues of living in general.

The reader will very quickly discover that I have very wide-ranging knowledge in quite a number of disparate disciplines. To use Isaiah Berlin’s terminology, I am not a hedgehog (digging deep in one field) but rather a fox (moving hither and thither to gather food – for thought, in my case – among several fields).

“Wide-ranging knowledge”? Doesn’t that sound somewhat conceited? To be forthright, I have tried my very best in this book to be completely honest about myself, for better and for worse. You will read about several of my “accomplishments” but also discover a not inconsiderable number of “failures”. Temperamentally, I am very averse to telling anyone what I really think about them; but I have no problem doing so about myself. As we grow older, we prefer less and less to look at the mirror image of our physical visage, understandably so; however, maturity demands of us to metaphorically look into the mirror at who we really are. If I have succeeded in anything, it is this willingness to see my character warts and try to smooth them over, if not eliminate them altogether. Why and how – I leave that for later elucidation.

And now for a few “apologies”:

1) Everything here will be the truth, but not necessarily the whole truth. We all have “dark secrets” – some very significant and evil, others minor and simply non-normative. Although “letting it all hang out” seems to be the new zeitgeist in our social-network-driven world, I come from the old school believing that there should be clear limits to self-baring selfies. Indeed, it would be better if we spelled the word “sell-fees” because in trying to sell ourselves there is usually a heavy price to pay down the road.

Moreover, many things in our life involve others – spouse, children etc – so that their privacy and feelings have to be taken into account, even if the memoirist was willing to “bare all”. This is not a “bug” of any (auto)biography but rather part of the code. Just as a newspaper editor will not let the reporter write a 5000-word description of yesterday’s event that includes every minute detail (and even some, not so minute), so too the (auto)biographer has to be selective in what to display. But again, this book is not an autobiography in the classic sense; rather, it’s a vehicle for significant life ruminations based on selected elements of my far less significant life.

2) Some people tell me I have an annoying habit of writing with too many parenthetical asides (the ones inside parentheses, just like this one). Annoying it may be, but there are two good reasons for it.

a- I have been trained to think “associatively” (or maybe I was simply born that way?), and coupled with my wide range of knowledge, I see (too?) many connections between seemingly disparate facts and phenomena. I could put these into footnotes – but as an academic I have had more than my fill of that!

b- Details, details: too often readers misconstrue or misunderstand (or are simply confused by) a comment without sufficient context or explanation; I prefer to err on the side of “over-explanation” so that you don’t err in understanding what I write.

3) As an academic, I know full well the importance of “sourcing”: where does this fact come from? who said it or verified it? is it speculative or proven? On the other hand, as a reader of much academic work I am also aware of how all this can complicate and even undermine comprehension, especially for (even highly educated) lay readers. So I shall forego citations, sourcing and other academic paraphernalia. In the age of Prof. Google (Scholar), it is quite easy to find relevant sources for any idea, argument, theory, factoid and the like. If any idea among my ruminations strikes your curiosity bone, feel free to do some intellectual detective work by yourself. What I reflect, you are welcome to refract….