
One in a thousand

Couldn’t pull myself away from reading Rajini Krish’s posts all of yesterday. Context: Krish, or Muthukrishanan Jeevanantham, an MPhil student at JNU, reportedly hanged himself on the morning of March 13, 2017 (I say ‘reportedly’ because Krish’s mother has alleged that it couldn’t have been suicide). After reading his posts – on Facebook and his blog – I wrote about them for The Wire here. I couldn’t add a paragraph to the piece because the copy had already been passed to the editor and was being processed for publication, so I’m putting it down below. It’s about an intricate relationship between equality and self-respect, particularly epitomised by India’s elderly (though not always): when their children get married and split off into nuclear families living separately, it has become a matter of self-esteem in many households for older members to be seen to be independent, depending on no one else for their living but themselves. Similarly, in Krish’s story, he recalls a conversation he’d had with his grandmother, Sellammal, in Salem (where he lived) before he moved to Delhi. He asks Sellammal why she has to make her living cleaning “kid’s asses” at a local school, earning a paltry Rs 750, and she snaps back:

“Paiya [boy], don’t talk too much like a big man, We old people have some reasons to work here, and I don’t want to disturb my sons. That’s why I [sit] in silence, always in my room, though my sons are nearby.”

In the Tamil film Aayirathil Oruvan (‘One in a thousand’, 2010), which explores the adjacency of freedom and self-respect, a historical war between the Pandyas and the Cholas has ended with the Cholas going into hiding. When their hideout is finally discovered by three adventurers, they are appalled by what the once-resplendent kingdom has been reduced to: a collection of a few hundred people living in squalor underground, with no apparent sense of dignity and with the false belief that they are still revered by the world outside. At one point, the Chola king sings a song telling the visitors that, though it might appear humiliating to preside “over a kingdom of skulls”, and for his ‘subjects’ to see him so, his people and he have been carrying on because they are used to their freedom to determine their own fate – and intend to hold on to it even when they emerge from their cocoon. And for as long as such a deal doesn’t seem to materialise, they will continue to be the way they are. The song is called ‘Thaai thindra mannae‘ (‘The earth the mother ate’) – and its last four verses (before the final refrain) are heartrending. The lyricist was Vairamuthu.

I can only offer two lines of the four in scant consolation to the spirit and soul of Rajini Krish.

Endro oru naal vidiyum endrae iravai chumakkum naalae, azhadhe / The day holding on to the night in the hope that it will dawn someday, don’t cry

Endhan kannin kanneer kazhuva ennodazhum yaazhae, azhadhe / The youth who would cry to wash away my tears with yours, don’t cry

Some notes on empiricism, etc.

The Wire published a story about the ‘atoms of Acharya Kanad‘ (background here; tl;dr: Folks at a university in Gujarat claimed an ancient Indian sage had put forth the theory of atoms centuries before John Dalton showed up). The story in question was by a professor of philosophy at IISER, Mohali, and he makes a solid case (not unfamiliar to many of us) as to why Kanad, the sage, didn’t talk about atoms specifically because he was making a speculative statement under the Vaisheshika school of Hindu philosophy that he founded. What got me thinking were the last few lines of his piece, where he insists that empiricism is the foundation of modern science, and that something that doesn’t cater to it can’t be scientific. And you probably know what I’m going to say next. “String theory”, right?

No. Well, maybe. While string theory has become something of a fashionable example of non-empirical science, it isn’t the only example. It’s in fact a subset of a larger group of systems that don’t rely on empirical evidence to progress. These systems are called formal systems, or formal sciences, and they include logic, mathematics, information theory and linguistics. (String theory’s reliance on advanced mathematics makes it more formal than natural – as in the natural sciences.) And the dichotomous characterisation of formal and natural sciences (the latter including the social sciences) is superseded by a larger, more authoritative dichotomy*: between rationalism and empiricism. Rationalism prefers knowledge that has been deduced through logic and reasoning; empiricism prioritises knowledge that has been experienced. As a result, it shouldn’t be a surprise at all that debates about which side is right (insofar as it’s possible to be absolutely right – which I don’t think will ever happen) play out in the realm of science. And squarely within the realm of science, I’d like to use a recent example to provide some perspective.

Last week, scientists discovered that time crystals exist. I wrote a longish piece here tracing the origins and evolution of this exotic form of matter, and what it is that scientists have really discovered. Again, a tl;dr version: in 2012, Frank Wilczek and Alfred Shapere posited that a certain arrangement of atoms (a so-called ‘time crystal’) in their ground state could be in motion. This might sound unremarkable if you were unfamiliar with what the ground state meant: absolute zero, the thermodynamic condition wherein an object has no energy whatsoever to do anything else but simply exist. So how could such a thing be in motion? The interesting thing here is that though Shapere-Wilczek’s original paper did not identify a natural scenario in which this could be made to happen, they were able to prove that it could happen formally. That is, they found that the mathematics of the physics underlying the phenomenon did not disallow the existence of time crystals (as they’d posited it).

It’s pertinent that Shapere and Wilczek turned out to be wrong. By late 2013, rigorous proofs had shown up in the scientific literature demonstrating that ground-state, or equilibrium, time crystals could not exist – but that non-equilibrium time crystals with their own unique properties could. The discovery made last week was of the latter kind. Shapere and Wilczek have both acknowledged that their math was wrong. But what I’m pointing at here is the conviction behind the claim that forms of matter called time crystals could exist, motivated by the fact that mathematics did not prohibit it. Yes, Shapere and Wilczek did have to modify their theory based on empirical evidence (indirectly, as it contributed to the rise of the first counter-arguments), but it’s undeniable that the original idea was born, and persisted with, simply through a process of discovery that did not involve sense-experience.

In the same vein, much of the disappointment experienced by many particle physicists today is because of a grating mismatch between formalism – in the form of theories of physics that predict as-yet undiscovered particles – and empiricism – the inability of the LHC to find these particles despite looking repeatedly and hard in the areas where the math says they should be. The physicists wouldn’t be disappointed if they thought empiricism was the be-all of modern science; they’d in fact have been rebuffed much earlier. For another example, this also applies to the idea of naturalness, an aesthetically (and more formally) enshrined idea that the constants of nature should take certain values, whereas in reality they don’t. As a result, physicists think something about their reality is broken instead of thinking something about their way of reasoning is broken. And so they’re sitting at an impasse, as if at the threshold of a higher-dimensional universe they may never be allowed to enter.

I think this is important in the study of the philosophy of science because if we’re able to keep in mind that humans are emotional and that our emotions have significant real-world consequences, we’d not only be better at understanding where knowledge comes from. We’d also become more sensitive to the various sources of knowledge (whether scientific, social, cultural or religious) and their unique domains of applicability, even if we’re pretty picky, and often silly, at the moment about how each of them ought to be treated (Related/recommended: Hilary Putnam’s way of thinking).

*I don’t like dichotomies. They’re too cut-and-dried a conceptualisation.

Time to fire myself


Letting go turned out to be harder than I thought it would.

Next week, a big project begins at the office – and it will be the first project that won’t be led by me. Instead, it will be led by a person we hired to do just such a thing (among other things).

From the time The Wire launched until now, I have been its science editor and product manager. I was also a social media manager and its sole developer but haven’t been since the start of 2017. And now, with this project, I will finally be just the science editor. The project will be led by our product manager, who joined in December.

Nonetheless, I didn’t notice my reluctance to let go until earlier this month. As the information necessary to make decisions was moving from one person to another, like signals moving through nodes in a network, I realised that I had embedded myself in certain places in the chain with no demonstrable effect on the outcomes themselves.

For example, I would’ve asked a colleague on one branch of this network to consult with me before making a decision simply because I’d wanted to feel included. In another situation, I would’ve asked another colleague to keep me posted on the proceedings of some review meetings for the same reason. If I hadn’t been a part of these things, nothing would’ve changed – except perhaps some people would’ve had more time on their hands.

My removing myself from such networks began earlier this week and culminated today with the final move. Now, I’m just that guy in the office who will have occasional doubts – but will not be expected to be responsible for their existence.

It’s particularly stressful to lead projects that involve bigger teams, more coordination and more consequential decisions, so people usually think that when the time comes, they’d let go in a jiffy. That’s what I thought, too, and I was wrong. Things like this become hard to let go of because people either get used to being in power or because they become addicted to the excitement.

I was never in power, so to speak (our team is small and I encourage everyone to question everything). For me, it was definitely the addiction, especially to solving unique problems that no one else was tasked with, that at times no one even knew existed.

But it’s okay. I think it’s more important now to fire myself. The problem-solving me needs to leave so it can be replaced by someone who solves problems about problems, who strategises about which ones to solve and why. There’s always bigger fish, isn’t there?

Featured image: Not my office. Credit: mcgraths/Flickr, CC BY 2.0.

Is it so blasphemous to think ISRO ought not to be compared to other space agencies?

ISRO is one of those few public sector organisations in India that actually do well and are (relatively) free of bureaucratic interference. Perhaps it was only a matter of time before we latched on to its success and even started projecting our yearning to be the “world’s best” upon it – whether or not it chose to be in a particular enterprise. I’m not sure if asserting the latter affects ISRO in any way (of course it doesn’t, who am I kidding) but spelling it out is a way to understand what ISRO might be thinking, and what might be the best way to interpret and judge its efforts.

So last evening, I wrote and published an article on The Wire titled ‘Apples and Oranges: Why ISRO Rockets Aren’t Comparable to Falcons or Arianes‘. Gist: PSLV/GSLV can’t be compared to the rockets they’re usually compared to (Proton, Falcon 9, Ariane 5) because:

  1. PSLV is low-lift, the three foreign rockets are medium- to heavy-lift; in fact, each of them can lift at least 1,000 kg more to the GTO than the GSLV Mk-III will be able to
  2. PSLV is cheaper to launch (and probably the Mk-III too) but this is only in terms of the rocket’s cost. The price of launching a kilogram on the rocket is thought to be higher
  3. PSLV and GSLV were both conceived in the 1970s and 1980s to meet India’s demands; they were never built to compete internationally like the Falcon 9 or the Ariane 5
  4. ISRO’s biggest source of income is the Indian government; Arianespace and SpaceX depend on the market and launch contracts from the EU and the US
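The payload gap in point 1 can be made concrete with rough numbers. A minimal sketch, using approximate GTO capacities quoted in public sources around 2017 (treat these figures as illustrative, not official specifications):

```python
# Approximate payload capacity to geostationary transfer orbit (GTO), in kg.
# Ballpark figures circa 2017; real values depend on rocket variant and mission.
gto_capacity_kg = {
    "GSLV Mk-III": 4000,   # ISRO's heaviest launcher (then upcoming)
    "Proton-M": 6300,
    "Falcon 9": 8300,      # fully expendable configuration
    "Ariane 5": 10500,
}

indian = gto_capacity_kg["GSLV Mk-III"]
for rocket, capacity in gto_capacity_kg.items():
    if rocket != "GSLV Mk-III":
        # Each foreign launcher clears the Mk-III by well over 1,000 kg
        print(f"{rocket}: +{capacity - indian} kg over GSLV Mk-III to GTO")
```

Even on these rough numbers, the point stands: the launchers sit in different lift classes, so a head-to-head comparison is lopsided from the start.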

While spelling out any of these points, never was I thinking that ISRO was inferior to the rest. My goal was to describe a different kind of pride, one that didn’t rest on comparisons but drew its significance from the idea that it was self-fulfilling. This is something I’ve tried to do before as well, for example with one of the ASTROSAT instruments as well as with ASTROSAT itself.

In fact, when discussing #3, it became quite apparent to me (thanks to the books I was quoting from) that comparing PSLV/GSLV with foreign rockets was almost fallacious. The PSLV was born out of a proposal Vikram Sarabhai drew up, before he died in 1970, to launch satellites into polar Sun-synchronous orbits – a need that became acute when ISRO began to develop its first remote-sensing satellites. The GSLV was born when ISRO realised the importance of its multipurpose INSAT satellites and the need to have a homegrown launcher for them.

Twitter, however, disagreed – often vehemently. While there’s no point discussing what the trolls had to say, all of the feedback I received there, as well as in comments on The Wire, seemed intent on the idea that ISRO would have to compete with foreign players and that it simply was the best. (We moderate comments on The Wire, but in this case, I’m inclined to disapprove even the politely phrased ones because they’re just missing the point.) And this is exactly what I was trying to dispel through my article, so either I haven’t done my job well or there’s no swaying some people as to what ISRO ought to be doing.

[Screenshot of a tweet]

We’re not the BPO of the space industry nor is there a higher or lower from where we’re standing. And we don’t get the job done at a lower cost than F9 or A5 because, hey, completely different launch scenarios.

[Screenshot of a tweet]

Again, the same mistake. Don’t compare! At this point, I began to wonder if people were simply taking one look at the headline and going “Yay/Ugh, another comparison”. And I’m also pretty sure that this isn’t a social/political-spectrum thing. Quite a few comments I received were from people I know are liberal, progressive, leftist, etc., and they all said what this person ↑ had to say.

[Screenshot of a tweet]

Compete? Grab market? What else? Colonise Mars? Send probes to Jupiter? Provide internet to Africa? Save the world?

[Screenshot of a tweet]

Now you’re comparing the engines of two different kinds of rockets. Dear tweeter: the PSLV uses alternating solid and liquid fuel motors; the Falcon 9 uses a semi-cryogenic engine (like the SCE-200 ISRO is trying to develop). Do you remember how many failures we’ve had of the cryogenic engine? It’s a complex device to build and operate, so you need to make concessions for it in its first few years of use.

[Screenshot of a tweet]

“If [make comparison] why you want comparison?” After I’ve made point by [said comparison]: “Let ISRO do its thing.” Well done.

[Screenshot of a tweet]

This tweet was from a friend – who I knew for a fact was also trying to establish that Indian and foreign launchers are incomparable in that they are not meant to be compared. But I think it’s also an example of how the narrative has become skewed, often expressed only in terms of a hierarchy of engineering capabilities and market share, and not in terms of self-fulfilment. And in many other situations, this might have been a simple fact to state. In the one we’re discussing, however, words have become awfully polarised, twisted. Now, it seems, “different” means “crap”, “good” means nothing and “record” means “good”.

[Screenshot of a tweet]

Comments like this, representative of a whole bunch of them I received all of last evening, seem tinged with an inferiority complex: we once launched sounding rockets carried on bicycles and now we’re doing things you – YOU – ought to be jealous of. And if you aren’t, and if you disagree that C37 was a huge deal, off you go with the rocket the next time!

[The Times of India cartoon celebrating the C37 launch]

The Times of India even had a cartoon to celebrate the C37 launch: it mocked the New York Times‘s attempt to mock ISRO when the Mars Orbiter Mission injected itself into an orbit around the red planet on September 24, 2014. The NYT cartoon had, in the first place, been a cheap shot; now, TOI is just saying cheap shots are a legitimate way of expressing something. They never were. Moreover, the cartoons also made a mess of what it means to be elite – and disrupted conversations about whether there ought to be such a designation at all.

As for comments on The Wire:

[Screenshot of a comment on The Wire]

Obviously this is going to get the cut.

[Screenshot of a comment on The Wire]

As it happens, this one is going to get the cut, too.

I do think the media shares a large chunk of the blame when it comes to how ISRO is perceived. News portals, newspapers, TV channels, etc., have all fed the ISRO hype over the years: here, after all, was a PSU that was performing well, so let’s give it a leg up. In the process, the room for criticising ISRO shrank and has almost completely disappeared today. The organisation has morphed into a beacon of excellence that can do no wrong, attracting jingo-moths to fawn upon its light.

We spared it the criticisms (offered with civility, that is) that would have shaped the people’s perception of the many aspects of a space programme: political, social, cultural, etc. At the same time, it is also an organisation that hasn’t bothered much with public outreach, and this has worked against it. Media commentaries seem to bounce off its stony edifice with no effect. In all, it’s an interesting space in which to be engaged, as a researcher or even as an enthusiast, but I will say I did like it better when the trolls were not interested in what ISRO was up to.

Featured image credit: dlr_de/Flickr, CC BY 2.0.

About AWS/Azure/GCP coming to India, etc.


Featured image: A data centre in San Antonio, Texas. Credit: scobleizer/Flickr, CC BY 2.0.

Interesting story by The Ken (paywall) on the effects AWS, Azure and GCP will have in India once Amazon, Microsoft and Google turn their gaze this way.

Data centre companies at least have 30-35% margins. The bigger companies like Netmagic, CtrlS, Tata Comm and Reliance have data centres in India. They provide colocation services—they let other cloud providers run their servers in their data centres. They lease it to everyone—be it Amazon Web Services (AWS), Azure, Google, E2E or even smaller companies. That is their cash cow. Of course, this is in addition to private cloud (dedicated resources for end users) and public cloud (shared resources) they offer.

Business has been stellar for the last 10 years or so. Well, up until recently.

With the overall push to digitisation, from banking to government, global cloud firms have doubled-down on their investments. Microsoft set up three data centres in September 2015; AWS settled for two data centres in July 2016, and Google plans to debut this year. For an everyday business, the focus has shifted to a concept called Infrastructure-as-a-Service (IaaS)—where you pay for what you use—something that was being used only by core tech companies and IT services providers so far.

A few points on it:

1. I feel this awareness, the intensifying of competition, may not be as sudden or as recent as we think. I’m not sure about AWS and Azure but I remember using GCP in 2013, and they already had a credits system going, especially for small-scale developers. And even without that, it was still very cost-effective, but more importantly it was the security it offered that clinched it. But when I think of Indian cloud providers, security is the last thing that comes to mind (and uptime the second-last and UX the third).

2. Questions of data sovereignty and privacy are moot to me – the former because the bulk of data that moves around India that can’t be serviced by foreign IaaS providers is simply going to be self-hosted; the latter because there’s no reason to believe AWS/Azure/GCP will let my data be compromised. (Obviously I’m not factoring in NSA-level snooping because, even though it happened, the problem wasn’t the infrastructure.) Moreover, I’m also encouraged by the data trustee model Microsoft implemented in Germany, cognisant of data sovereignty issues.

3. If I’m using AWS to run a small blog – like a static site – then it’s going to cost me about $10 a month and almost no technical work to keep it going (after setting it up). But the moment I scale up and start using more than one EC2 instance, and also start looking at things like ELB, WAF and VPCs to make my site more efficient, I will either have to be a developer myself or hire one. And if I’m hiring a developer, I’m likelier to find better talent that works with AWS or Azure than with any other service. So if an Indian company wants to beat them, its offering has to be PaaS-like to grow.

4. Because of the security issues outlined by The Ken, it’s curious to think small-scale cloud providers – such as those offering ‘packaged apps’ like WordPress to run individual blogs – are threatened only by the likes of AWS/Azure/GCP. To me, they’re already under threat – if they haven’t already lost – if they’re not factoring in Digital Ocean, Vultr, Linode and even Bitnami (which provides a soup-to-nuts route to deploying popular stacks like, say, LAMP using AWS). The Wire was launched on Digital Ocean for $10 a month.
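The cost cliff described in point 3 can be sketched with a toy model. The hourly prices below are made-up ballpark figures for illustration only – not actual AWS rates, which vary by region and change over time:

```python
HOURS_PER_MONTH = 730

# Illustrative (invented) hourly prices for this sketch; not real AWS pricing.
PRICES = {
    "small_instance": 0.013,  # one small EC2-class virtual machine
    "load_balancer": 0.025,   # an ELB-like load balancer
    "waf": 0.014,             # a web application firewall
}

def monthly_cost(components, instances=1):
    """Rough monthly bill: instance-hours plus one of each extra component."""
    cost = PRICES["small_instance"] * instances * HOURS_PER_MONTH
    cost += sum(PRICES[c] * HOURS_PER_MONTH for c in components)
    return round(cost, 2)

# A single-instance static blog vs a load-balanced, firewalled, 3-instance site.
static_blog = monthly_cost(components=[], instances=1)
scaled_site = monthly_cost(components=["load_balancer", "waf"], instances=3)
print(static_blog, scaled_site)
```

The bill grows several-fold once you scale out – and that is before accounting for the developer you now need to hire to manage it all.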

Establishing trust across the aisle on issues of climate change


Featured image: An image from a shipborne NASA investigation to study how changing conditions in the Arctic affect the ocean’s chemistry and ecosystems. Credit: gsfc/Flickr, CC BY 2.0.

I met someone over the weekend who wasn’t sure:

  1. That there is scientific consensus on the magnitude of anthropogenic global warming (AGW), and
  2. What the level of human contribution is to rising temperatures (or, how much natural variations could/couldn’t account for)

I believe that AGW is valid and that, if we don’t do something about the way we’re using Earth’s natural resources, AGW will be extremely damaging to the environment as soon as a century from now (to be even more proper about it: that AGW will force nature to adapt in ways that will no longer preserve characteristics that we have been able to attribute to it for thousands of years). This said: I’m not here to describe how the conversation with my friend went but to highlight two specific sources of information that were in play last night and which I think are worth discussing because of their attempts at coming off as trustworthy.

An ivory tower from the inside

In May 2013, John Cook et al published a paper titled ‘Quantifying the consensus on anthropogenic global warming in the scientific literature’. It was a literature review of 11,944 papers published in 1,980 journals, all papers dealing with climate change. Using a large team of volunteers, the authors then classified each paper into one of five groups depending on what its abstract said about the paper’s position on climate change. These were the results:

[Table from Cook et al, 2013, summarising the abstract ratings]

(Obviously the links within the image aren’t clickable, so if you’re looking for the data: the paper’s open access.) At the time of publication, the paper received a lot of play in the media – largely because of the numbers in the first row, columns two and four. According to it, 97.1% of all papers that have a position on AGW endorse AGW, and 98.4% of all authors that have a position on AGW endorse AGW. However, neither of these giant numbers corresponds to the 11,944 abstracts surveyed but to the 3,893 (32.6%) that the authors qualified as having a position on AGW.
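A quick back-of-the-envelope calculation, using only the figures quoted above, shows exactly what the headline percentage is a percentage of:

```python
total_abstracts = 11944
with_position = 3893        # abstracts taking a position on AGW
endorsement_rate = 0.971    # share of position-taking abstracts endorsing AGW

# The 97.1% figure is computed over the subset with a position...
endorsing = round(with_position * endorsement_rate)  # about 3,780 abstracts

# ...so as a share of *all* surveyed abstracts, endorsement looks much smaller.
share_of_subset = with_position / total_abstracts    # roughly 32.6%
share_of_total = endorsing / total_abstracts         # roughly 31.6%
print(f"{share_of_subset:.1%} took a position; "
      f"{share_of_total:.1%} of all abstracts endorse AGW")
```

This arithmetic is the crux of the disagreement: both readings use the same data, and everything turns on whether excluding the no-position majority from the denominator is justified.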

Clearly, the way to interpret John Cook et al would’ve been to say it like Der Spiegel did: ‘Von knapp 4000 Studien, die die Ursachen der Klimaerwärmung thematisierten, stützen 97 Prozent die Annahme vom menschgemachten Klimawandel’ (“Of nearly 4,000 studies dealing with the causes of climate warming, 97 percent support the assumption of human-driven climate change”). However, my friend – during the course of his arguments – often lingered on the 66.7% (7,966) of all papers that were uncertain about or refused to take a position on AGW. Specifically, he took the exclusion of these papers from the calculation that arrived at a number like “97.1%” to be misguided. After all, he reasoned, ~8,000 papers out of ~12,000 had seen it fit to not explicitly endorse AGW.

Dana Nuccitelli and John Cook, two of the paper’s authors, tried to explain these numbers thus on the Skeptical Science blog:

We found that about two-thirds of papers didn’t express a position on the subject in the abstract, which confirms that we were conservative in our initial abstract ratings. This result isn’t surprising for two reasons: 1) most journals have strict word limits for their abstracts, and 2) frankly, every scientist doing climate research knows humans are causing global warming. There’s no longer a need to state something so obvious. For example, would you expect every geological paper to note in its abstract that the Earth is a spherical body that orbits the sun?

I don’t buy it. The first sentence – “We found that about two-thirds of papers didn’t express a position on the subject in the abstract, which confirms that we were conservative in our initial abstract ratings” – is more of a self-fulfilling prophecy than anything else. The first part of the second sentence requires even more analysis to verify, considering the 11,944 papers they parsed appeared in 1,980 journals, and the fraction of journals that set a word-limit for the abstract might just be non-trivial. The second part is, to me, a display of off-putting arrogance. Doesn’t saying “frankly, every scientist doing climate research knows humans are causing global warming” imply the authors are being dismissive of their own conclusions? And finally, that Earth orbits the Sun is far more obvious than a thesis whose defence rests on the presumption that the thesis is right – a circularity that renders all facts moot.

While none of this makes me question the validity of AGW, which I still endorse for various reasons, Nuccitelli-Cook’s pseudo-defence doesn’t help me trust them in particular. In fact, their position makes me more suspicious of why they arrived at a number like 32.6% when they were assuming at the outset that it would really be 100%.

An attempt to escape the tower

As it happens, Nuccitelli-Cook don’t appear to be in the minority. To assume that all climate researchers know AGW is valid is also to presume that those who dispute its existence or extent are not really climate researchers (if they’re in the same field) – and this appears to be the case with Judith Curry’s detractors. Until a week ago, Curry was the chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology, Atlanta (she quit on January 1). She shot into the limelight in 2005 after coauthoring a paper that linked a rising incidence of hurricanes with AGW. However, it wasn’t the conclusion of the paper itself but what it led to that put Curry on the climatological map: she began to engage actively with climate skeptics on blogs and other fora in an effort to defend the methods of her paper. And this, for some reason, infuriated her colleagues. A profile of Curry in Nature in 2010 said:

Climate skeptics have seized on Curry’s statements to cast doubt on the basic science of climate change. So it is important to emphasize that nothing she encountered led her to question the science; she still has no doubt that the planet is warming, that human-generated greenhouse gases, including carbon dioxide, are in large part to blame, or that the plausible worst-case scenario could be catastrophic. She does not believe that the Climategate e-mails are evidence of fraud or that the IPCC is some kind of grand international conspiracy. What she does believe is that the mainstream climate science community has moved beyond the ivory tower into a type of fortress mentality, in which insiders can do no wrong and outsiders are forbidden entry.

But Curry’s position has diverged further since: on April 15, 2015, Curry testified before the US House of Representatives Committee on Space, Science and Technology that she didn’t think scientists knew how much humans influenced the climate, especially since the 1950s. This was discomfiting to discover because now I suspect Curry’s qualms were with climate science itself and not only with the attitudes of some of its practitioners. Ken Rice, a computational astrophysicist at the University of Edinburgh, commented at the time:

Again with all the we don’t knows. Yes, we might not know but we have a pretty good idea of what caused the Little Ice Age (reduced solar insolation and increased volcanic activity) and it was obviously not attributed to humans. Why is that even worth mentioning? Again, we might not know what will happen in the 21st century, but we have a fairly good idea of what will happen if we continue to increase our emissions.

So, if we’re going to move forward by acknowledging that what we’ve been trying so far has failed and that others should have a stronger voice, why would we do so if some of those others don’t appear to know anything? Given this, I’ll expand a little on my thoughts with regards to [Steven] Mosher’s point that with regards to policy, science doesn’t much matter. Yes, in some sense I agree with this; let’s stop arguing about science and just get on with deciding on the optimal policies. However, science does inform policy and I fail to see how we can develop sensible policy if we start with the view that we don’t know anything.

In the same vein: what reason is there to get out of the ivory tower at all if, from within, climate scientists have been able to accomplish so much? The simplest answer would be that Donald Trump is set to become the 45th president of the US about eleven days from now, and the millions who voted him to power don’t care that he’s a climate skeptic. Even if outgoing president Barack Obama believes that the American adoption of clean energy is irreversible, what Trump could do is destabilise American leadership of international climate negotiations. AGW-endorsers sitting within their comfort zones of Numbers Don’t Lie could find this a particularly difficult battle to win because the IPCC and its brand of questionable integrity is doing no one any favours either. Even if the body’s on the “right” side of things, its attitude has been damaging to say the least (sort of like GMO and Monsanto).

Keith Kloor, former editor of Audubon, recently wrote in Issues in Science and Technology,

Donald Trump’s improbable march to the White House shocked many, but the tactics that made it possible undoubtedly looked familiar to those of us who have navigated the topsy-turvy landscape of contested science. For Trump’s success was predicated on techniques that are used by advocates across the ideological spectrum to dispute or at least muddy established truths in science. … With the ascension of Trump in 2016, have we graduated from truthiness to what some political observers are now calling the post-truth era? Post-truth is defined by Oxford Dictionary as a state in which “objective facts are less influential in shaping public opinion than appeals to emotion.” But this doesn’t do justice to the bending of reality by Trump en route to the White House. You can’t do that simply with appeals to emotion; you need, as his triumph suggests, a made-for-media narrative, with villains, accomplices, and heroes. You need to do what has already been proven to work in warping public perceptions and discussion of certain fields of science.

The argument that Curry shouldn’t engage with skeptics – because her decision could be interpreted as a prominent academic exiting the pro-AGW camp – is difficult to buy into, even if Curry did switch camps. It’s hard to arbitrate because there are two variables: the uncertainties inherent in climate modelling (even if the bigger picture still endorses AGW) and how they swayed someone of the calibre of Judith Curry. Surely the (former) head of a reputed department at Georgia Tech is not the same as any other skeptic?

I thought it was common sense to engage with people from across the aisle instead of letting them persist with information they think is credible but which you think is incredible – to the point that, over time, you become habituated to disregarding them irrespective of the legitimacy of their demands. Moreover, giving room for people to disagree with you, engaging with them by making your methods and data available, and working with them to conduct replication studies that test the robustness of your own methods are all features of research and publishing that are being increasingly adopted to everyone’s benefit, most of all science’s.

From here, it’s not hard to see that moving the other way – making people anxious even to ask honest questions, robbing them of the opportunity to respectfully disagree – isn’t going to do much good. Being nice also helps maintain a non-fragmented community, one that doesn’t further legitimise the impression that “science doesn’t matter when it comes to policy”.

The science in Netflix’s ‘Spectral’

A scene from the film 'Spectral' (2016). Source: Netflix

I watched Spectral, the movie that was released on Netflix on December 9, 2016, after Universal Studios got cold feet about releasing it on the big screen – the same place where a previous offering, Warcraft, had been gutted. Spectral is sci-fi and has a few great moments but mostly it’s bland and begging for some tabasco. The premise: an elite group of American soldiers deployed in Moldova comes upon some belligerent ghost-like creatures in a city they’re fighting in. They’ve no clue how to stop them, so they fly in an engineer from DARPA to consult – the same guy who built the goggles that detected the creatures in the first place. Together, they do things. Now, I’d like to talk about the science in the film and not the plot itself, though the former feeds the latter.

SPOILERS AHEAD


Towards the middle of the movie, the engineer realises that the ghost-like creatures have the same limitations as – wait for it – a Bose-Einstein condensate (BEC). They can pass through walls but not ceramic or heavy metal (not the music), they rapidly freeze objects in their path, and conventional weapons, typically projectiles of some kind, can’t stop them. Frankly, it’s fabulous that Ian Fried, the film’s writer, thought to use creatures made of BECs as villains.

A BEC is an exotic state of matter in which a group of ultra-cold particles condenses into a superfluid (i.e., one that flows without viscosity). Once a BEC forms, a subsection of it can’t be removed without breaking the whole BEC state down. You’d think this makes the BEC especially fragile – because it’s susceptible to so many ‘liabilities’ – but it’s the exact opposite. In a BEC, the energy required to ‘kick’ a single particle out of its special state is equal to the energy required to ‘kick’ all the particles out, making the BEC as a whole that much more durable.

This property is apparently beneficial for the creatures of Spectral, and that’s where the similarity ends because BECs have other properties that are inimical to the portrayal of the creatures. Two immediately came to mind: first, BECs are attainable only at ultra-cold temperatures; and second, the creatures can’t be seen by the naked eye but are revealed by UV light. There’s a third, relevant property, which we’ll come to later: BECs have to be composed of bosons, or bosonic particles.

It’s not clear why Spectral‘s creatures are visible only when exposed to light of a certain kind. Clyne, the DARPA engineer, says in a scene, “If I can turn it inside out, by reversing the polarity of some of the components, I might be able to turn it from a camera [that, he earlier says, is one that “projects the right wavelength of UV light”] into a searchlight. We’ll [then] be able to see them with our own eyes.” However, the documented ability of BECs to slow down light to a great extent (5.7-million times more than lead can, in certain conditions) should make them appear extremely opaque. More specifically, while a BEC can be created that is transparent to a very narrow range of frequencies of electromagnetic radiation, it will, on the flipside, stonewall all frequencies outside of this range. That the BECs in Spectral are opaque to a single frequency and transparent to all others is weird.

Obviating the need for special filters or torches to see the creatures would simplify Spectral by removing one entire layer of complexity. However, it would also remove the need for the DARPA engineer, who comes up with the hyperspectral camera and its inside-out version, the “right wavelength of UV” searchlight. Additionally, the complexity serves another purpose. Ahead of the climax, Clyne builds an energy-discharging gun whose plasma-bullets of heat can rip through the BECs (fair enough). This tech is also slightly futuristic. If the sci-fi/futurism of the rest of Spectral leading up to that moment (when he invents the gun) was absent, the second half of the movie would’ve become way more sci-fi than the first half, effectively leaving Spectral split between two genres: sci-fi and wtf. Thus the need for the “right wavelength of UV” condition?

Now, to the third property. Not all particles can be used to make BECs. Its two predictors, Satyendra Nath Bose and Albert Einstein, were working (on paper) with the kinds of particles since called bosons. In nature, bosons are force-carriers, as opposed to the matter-making particles called fermions. A more technical distinction is that the behaviour of bosons is explained using Bose-Einstein statistics while the behaviour of fermions is explained using Fermi-Dirac statistics. And only Bose-Einstein statistics – not Fermi-Dirac statistics – predicts the existence of states of matter called condensates.
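(For reference – this is standard textbook physics, not anything from the film: the two statistics differ by a single sign in the expected number of particles occupying a state of energy ε, and that sign makes all the difference.)

```latex
% Bose-Einstein statistics: the -1 in the denominator lets the
% occupation number diverge as the chemical potential \mu approaches
% the lowest energy level -- which is what permits a condensate.
\langle n_\epsilon \rangle_{\mathrm{BE}} = \frac{1}{e^{(\epsilon - \mu)/k_B T} - 1}

% Fermi-Dirac statistics: the +1 caps the occupation number at 1,
% which is the Pauli exclusion principle at work -- no condensate.
\langle n_\epsilon \rangle_{\mathrm{FD}} = \frac{1}{e^{(\epsilon - \mu)/k_B T} + 1}
```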

(Aside: Clyne, when explaining what BECs are in Spectral, says its predictors are “Nath Bose and Albert Einstein”. Both ‘Nath’ and ‘Bose’ are surnames in India, so “Nath Bose” is both anyone and no one at all. Ugh. Another thing is I’ve never heard anyone refer to S.N. Bose as “Nath Bose”, only ‘Satyendranath Bose’ or, simply, ‘Satyen Bose’. Why do Clyne/Fried stick to “Nath Bose”? Was “Satyendra” too hard to pronounce?)

All particles constitute a certain amount of energy, which under some circumstances can increase or decrease. However, the increments in which this happens are well-defined and fixed (hence the ‘quantum’ of quantum mechanics). So, for an oversimplified example, a particle can be said to occupy energy levels of 2, 4 or 6 units but never of 1, 2.5 or 3 units. Now, when a very-low-density collection of bosons is cooled to an ultra-cold temperature (a few hundredths of a kelvin or cooler), the bosons increasingly prefer to occupy fewer and fewer energy levels. At one point, they will all occupy a single, common level – something a collection of fermions could never do, since the Pauli exclusion principle caps the number of fermions that can share the same state at once. (In technical parlance, the wavefunctions of all the bosons merge.)
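As a rough quantitative aside (again, standard textbook material rather than anything the film invokes): for an ideal gas of bosons of mass m at number density n, condensation sets in below a critical temperature

```latex
% Critical temperature of an ideal Bose gas; \zeta is the Riemann
% zeta function, with \zeta(3/2) \approx 2.612.
T_c = \frac{2\pi\hbar^2}{m k_B} \left( \frac{n}{\zeta(3/2)} \right)^{2/3}
```

Plugging in atomic masses and laboratory-achievable densities pushes this into the microkelvin-to-nanokelvin range – which is why the first BECs, made with rubidium atoms in 1995, needed laser and evaporative cooling, and why room-temperature BEC-creatures stretch the physics.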

When this condition is achieved, a BEC will have formed. And in this condition, even if a new boson is added to the condensate, it will be forced into occupying the same level as every other boson in the condensate. This condition is also off-limits for all fermions – except in very special circumstances, circumstances whose exceptionalism perhaps makes way for Spectral‘s more fantastic condensate-creatures. We know one such circumstance as superconductivity.

In a superconducting material, electrons flow without any resistance whatsoever at very low temperatures. The most widely applied theory of superconductivity interprets this flow as that of a superfluid, and the ‘sea’ of electrons flowing as such to be a BEC. However, electrons are fermions. To overcome this barrier, Leon Cooper proposed in 1956 that the electrons don’t form a condensate straight away but first form an intervening state called a Cooper pair: a pair of electrons that have become bound, overcoming their like-charge repulsion thanks to the vibrations of the atoms of the superconducting metal surrounding them. The electrons in a Cooper pair also can’t easily quit their embrace because, once they become bound, the total energy they constitute as a pair is lower than the energy each would have on its own – so breaking up would be destabilising.

Could Spectral‘s creatures have represented such superconducting states of matter? It’s definitely science fiction because it’s not too far beyond the bounds of what we know about BECs today (at least as a concept). And in being science fiction, Spectral assumes the liberty to make certain leaps of reasoning – one being, for example, how a BEC-creature is able to ram against an M1 Abrams and still not dissipate, or how one is able to sit on an electric transformer without blowing up. These are the sort of liberties a sci-fi script is allowed to take, so there’s little point harping on them. However, that Clyne figured the creatures ought to be BECs prompted way more disbelief than anything else, because BECs are in the here and now – and they haven’t been known to behave anything like the creatures in Spectral do.

For some, this information might even help decide if a movie is sci-fi or fantasy. To me, it’s sci-fi.

SPOILERS END

On the more imaginative side of things, Spectral also dwells for a bit on how these creatures might have been created in the first place and how they’re conscious. Any answers to these questions, I’m pretty sure, would be closer to fantasy than to sci-fi. For example, I wonder how the computing capabilities of the very large neural network seen at the end of the movie (not a spoiler, trust me) were available to the creatures wirelessly, or where the power source was that the soldiers were actually after. Spectral does try to skip the whys and hows by having Clyne declare, “I guess science doesn’t have the answer to everything” – but you’re just left going, “No shit, Sherlock.”

His character is, as this Verge review puts it, exemplarily shallow while the movie never suggests before the climax that science might indeed have all the answers. In fact, the movie as such, throughout its 108 minutes, wasn’t that great for me; it doesn’t ever live up to its billing as a “supernatural Black Hawk Down“. You think about BHD and you remember it being so emotional – Spectral has none of that. It was just obviously more fun to think about the implications of its antagonists being modelled after a phenomenon I’ve often read/written about but never thought about that way.

What’s common to #yesallwomen, scripta manent, good journalism and poka-yoke?

Featured image credit: renaissancechambara/Flickr, CC BY 2.0.

I’m a big fan of poka-yoke (“po-kuh yo-kay”), a Japanese quality control technique founded on a simple principle: if you don’t want mistakes to happen, don’t allow opportunities for them to happen. It’s evidently dictatorial and not fit for use with most human things, but it is quite useful when performing simple tasks, for setting up routines and, of course, when writing (i.e. “If you don’t want the reader to misinterpret a sentence, don’t give her an opportunity to misinterpret it”). However, I do wish something poka-yoke-ish was done with the concept of good journalism.

The industry of journalism is hinged on handling information and knowledge responsibly. While Article 19(1)(a) of the Indian Constitution protects every Indian citizen’s right to free speech (even if multiple amendments since 1951 have affected its conditionality), good journalists can’t – at least ought not to – get away with making dubious or easily falsifiable claims. Journalism, in one sense, is free speech plus a solid dose of poka-yoke that doesn’t allow its practitioners to be stupid or endorse stupidity, at least of the obvious kind. It must not indulge in the dissemination of doltishness irrespective of Article 19(1)(a)’s safeguarding of the expression of it. While John/Jane Doe can say silly things, a journalist must at least qualify them as such while discussing them.

Not doing that would be to fall prey to false balance: to assume that, in the pursuit of objectivity, one is presenting the Other Side of a debate that has, in fact, become outmoded. With that established: On January 5, The Quint published an opinion piece titled ‘Bengaluru Shame: You Can Choose to Be Safe, So Don’t Blame the Mob’. It was with reference to rampant molestation on the streets of Bengaluru of women on the night of December 31 despite the presence of the police. Its author first writes,

Being out on the streets exposes one to anti-social elements, like a mob. A mob is the most insensitive group of people imaginable and breeds unruly behaviour. As responsibilities are distributed within the group, accountability vanishes and inhibitions are shed.

… and then,

When you step out onto the street, you are fraught with an incumbent risk. You may meet with an accident. That’s why there are footpaths and zebra crossings. You may slip on the road if it is wet! Will you then blame the road because it is wet? This is the point I’m making: Precautions and rights are different things. I have a right to be on the roads. And I can also take the precaution to walk sensibly and not run in front of the oncoming traffic.

Because traffic and the mob are the same, yes? The author’s point is that the women who were molested should have known that there was going to be an unruly mob on the streets at some point and that the women – and not the mob or the police – should have taken precautions to, you know, avoid a molestation. The article brings to mind the uncomfortable Rowan Atkinson skit ‘Fatal Beatings’, where the voice of authority is so self-righteous that the humour is almost slapstick.

The article’s publication promptly revived the silly #notallmen trend on Twitter, admirably and effectively panned by many (of the people I follow, at least; if you aren’t yet on the #yesallwomen side, this by Annie Zaidi might change your mind). But my bigger problem was with a caveat that appeared atop the article on The Quint some time later. Here it is:

It has been brought to our attention by readers that the following “endorses” opinions that The Quint should not be carrying. While we understand your sentiments, and wish to reiterate that our own editorial stand is at complete variance with the views in this blog, … we also believe that we have a duty of care towards a full body of readers, some among whom may have very different points of view than ours. Since The Quint is an open, liberal platform, which believes in healthy debate among a rainbow of opinions (which saves us from becoming an echo chamber that is the exact opposite of an open, liberal platform), we do allow individual bloggers to publish their pieces. We would be happy to publish your criticism or opposition to any piece that is published on The Quint. Come and create a lively, intelligent, even confrontational, conversation with us. Even if we do not agree with a contributor’s view, we cannot not defend her right to express it.

(Emphasis added.) Does The Quint want us to celebrate its publishing opinions contrary to its own, or to highlight the possibility that The Quint isn’t really paying attention to the opinions it holds, or to notice that it is irresponsibly publishing opinions that don’t deserve an audience of thousands? It’s baffling.

Look at the language: “Lively” is fine, as is “confrontational” – but the editors may have tripped up in their parsing of the meaning of ‘intelligent’. They are indeed right to invite an intelligent conversation but the intent should have been accompanied by an ability to distinguish between intelligence and whatever else; without this, it’s simply a case of a misleading advertisement. Moreover, I’m also irked by their persistence with the misguided caveat, which, upon rereading, reinforces a wrong message. I’m reminded here of the German existentialist Franz Rosenzweig’s thoughts on the persistence of the written word, excerpted from a biography titled Franz Rosenzweig and Jehuda Halevi: Translating, Translations, and Translators:

Permanence depends more upon whether a word reaches reception or not, and less upon whether it is spoken or written. But the written word, because captured in a visible physicality, does offer a type of permanence that is denied to the spoken word. The written word can be read by those outside the “intimacy” of two speakers, such as letter writers; or of the “one-way intimacy” that arises between one speaker, such as the bookwriter and many readers. The permanence inherent in the written word is framed within boldness and daring on the part of the speaker: translated or not, there is a thereness to the written word, and this thereness is conducive to replay for the hearer through rereading.

TL;DR: Verba volant, scripta manent.

The Quint article had been ‘engaged with’ at least 10,300 times at the time this post was written. Every time it was read, there will have been a (darkly) healthy chance of convincing a reader to abdicate from the decidedly anti-patriarchy #yesallwomen camp and move to the dispassionate and insensitive #notallmen camp. A professing of intelligence without continuous practice will every now and then legitimise immature thinking; a good example of one such trip-up is false balance. This post itself was pretty easy to write because the same thing used to happen oh-so-regularly with climate change (and happens less regularly now): in both cases today, there is an Other Side – but it lies not in denying climate change or refuting #yesallwomen but, for example, in debating what the best measures could be to mitigate their adverse consequences.

The Indian Science Congress has gutted its own award by giving it to Appa Rao Podile

Credit: ratha/Flickr, CC BY 2.0

I hadn’t heard of the Millennium Plaque of Honour before yesterday, January 3. From what I was able to read up before filing my report in The Wire (about embattled Hyderabad University vice-chancellor Appa Rao Podile receiving the plaque at the ongoing Indian Science Congress):

  1. It has been awarded by the Indian Science Congress Association since 2003, when it was instituted as the ‘Science & Society Award’
  2. Its name was changed to the New Millennium Plaque of Honour in 2005
  3. It carries a citation, a literal plaque and a cash component of Rs 20,000 “to cover incidental expenses”
  4. It is awarded to two eminent scientists at the Science Congress every year

If the annual event was considered prestigious or even very laudable until 2014, I’m not entirely sure (although it certainly wasn’t a gala affair). But in 2015 and after, it has certainly taken a beating. In 2015, particularly, the congress was invaded by right-wing nuts convinced that Vedic-age scholars had flown planes to Mars and transplanted animal heads onto human bodies. Proceedings were relatively free of controversy in 2016 before taking another turn for the worse in 2017: with a Millennium Plaque going to Appa Rao (as well as to Avula Damodaram, but that’s a lesser problem we’ll come to later).

A day or so ago, in a conversation on Twitter, both R. Prasad (The Hindu‘s science editor) and Gautam Desiraju (a celebrated chemist at the Indian Institute of Science, Bengaluru) agreed that many Indian events had of late been banking on legitimacy ‘loaned’ from foreign institutions. For example, a large part of the Indian Science Congress’s public outreach every year involves blaring that X Nobel laureates will be in attendance. Nobel laureates are eminent people, sure, but often they don’t do much other than give a talk and just be in attendance. And their presence doesn’t do much for the quality of the conference, overall in decline, either (see footnote). In December 2012, P. Balaram, the director of the Indian Institute of Science, wrote in an editorial in the journal Current Science,

… few practising scientists of note consider the Congress as an important event. Pomp and ceremony take precedence over substance. Over the years the Congress has been reduced to an occasion where the inaugural session appears to be the raison d’être for the meeting. The traditional opening address by the Prime Minister predictably reiterates governmental commitment to support science and invariably promises to remove the many bureaucratic hurdles that sometimes loom larger than life in the minds of many scientists. The presence of the executive head of government invests the inaugural event with an importance that is often not commensurate with the quality of the scientific sessions that follow. The occasion is also used to showcase a couple of Nobel laureates, who fly in to speak to audiences with little appetite for excessively technical talks. The organisers, bolstered by considerable government backing, are always good hosts; the distinguished foreign presence ensuring that the Congress always acquires a degree of respectability rarely supported by the scientific program.

In such times, reinforcing local rewards, recognitions, symbols, ideals, etc. is as important as respecting and re-legitimising them. This means that an award like the Millennium Plaque of Honour (despite its pompous name), instituted as it is by the Indian Science Congress, should be given on every occasion to scientists truly deserving of it and, more importantly, never to anyone who will lower by association the prestige accorded to it.

Appa Rao is capable of doing the latter. Particularly after Rohith Vemula’s suicide last year (and more generally for a half-year period before that), Appa Rao, as vice-chancellor, was responsible for allowing the Bharatiya Janata Party (BJP) to interfere in university student politics as well as for violently quelling the student protests that followed on the University of Hyderabad campus. Shortly after the news of Vemula’s death broke, the Times of India also reported that Appa Rao had acquired his vice-chancellorship through political connections, especially with BJP minister Venkaiah Naidu and Telugu Desam Party chief Chandrababu Naidu.

A relevant passage from our coverage of the incidents:

Police, CRPF and RAF forces came to the campus, and students assembled on the lawns outside the VC’s lodge were brutally removed and lathi charged. Some students were badly injured and had to be taken to hospitals, sources have said. Students have also said that they were abused and insulted, and female students were threatened with rape. Students from minority communities were allegedly called “terrorists”.

It’s impossible to overlook the fact that his only presence in recent memory has been as a craven but powerful stooge, and almost never as a scientist. He hasn’t done anything memorable of late, nor has he displayed the integrity due a vice-chancellor of a public institution. In fact, shortly after the student protests, I had also published evidence of plagiarism in three of his research papers. If he has won a Millennium Plaque, it only means the ‘honour’ no longer stands for research excellence as much as for neglecting one’s duties and perverting the all-important autonomy of an important position.

Worse yet, it seems an award of the Indian Science Congress has been subverted into an instrument of negotiation for political agents: “You let me interfere in your duties, I will give you a fancy-sounding award.” The other recipient of the same award this year, Avula Damodaram, doesn’t inspire confidence either – although I concede I have no evidence backing my suspicions (yet). Damodaram is the vice-chancellor of Sri Venkateswara University, Tirupati, the same institute that’s hosting the science congress this year. Binay Panda, a bioinformatician and friend, wasn’t surprised:

§

Footnote: Mukund Thattai, a biologist at the National Centre for Biological Sciences, Bengaluru, conducted a poll on Twitter asking why people took to science. The option ‘once saw a Nobel laureate’ clocked in last:

Seventy-four is not a great sample size but 1% is a far more abysmal number.

The metaphorical transparency of responsible media

Credit: dryfish/Flickr, CC BY 2.0

I’d written a two-part essay (both parts quite short; reproduced in full below) on The Wire about what science was like in 2016 and what we can look forward to in 2017. The first part was about how science journalism in India is a battle for relevance, both within journalistic circles and among audiences. The second was about how science journalism needs to be treated like other forms of journalism in 2017, and understood to be afflicted with the same ills that, say, political and business journalism are.

Other pieces on The Wire that had the same mandate, of looking back and looking forward, stuck to being roundups and retrospective analyses. My pieces were retrospective, too, but they – to use the parlance of calculus – addressed the second derivative of science journalism, in effect performing a meta-analysis of the producers and consumers of science writing. This blog post is a quick discussion (or rant) of why I chose to go the “science media” way.

We in India often complain about how the media doesn’t care enough to cover science stories. But when we’re looking back and forward in time, we become blind to the media’s efforts. And looking back is more apparently problematic than is looking forward.

Looking back is problematic because our roundup of the ‘best’ science (the ‘best’ being whatever adjective you want it to be) from the previous year is actually a roundup of the ‘best’ science we were able to discover or access from the previous year. Many of us may have walled ourselves off into digital echo-chambers, sitting within not-so-fragile filter bubbles and ensuring news we don’t want to read about doesn’t reach us at all. Even so, the stories that do reach us don’t make up the sum of all that is available to consume, for two reasons:

  1. We practically can’t consume everything, period.
  2. Unless you’re a journalist or someone who is at the zeroth step of the information dissemination pyramid, your submission to a source of information is simply your submission to another set of filters apart from your own. Without these filters, finding something you are looking for on the web would be a huge problem.

So becoming blind to the media’s efforts at the time of the roundup is to let journalists (who sit higher up on the dissemination pyramid) who should’ve paid more attention to scientific developments off the hook. For example, assuming things were gloomy in 2016 is assuming one thing given another (like a conditional probability): “while the mood of science news could’ve been anything between good and bad, it was bad” GIVEN “journalists mostly focused on the bad news over the good news”. This is only a simplistic example: more often than not, ‘good’ and ‘bad’ can be replaced by ‘significant’ and ‘insignificant’. Significance is also a function of media attention. When probing our sentiments on a specific topic, we should probe the information we have as well as how we acquired it.

Looking forward without paying attention to how the media will likely deal with science is less apparently problematic because of the establishment of the ideal. For example, to look forward is also to hope: I can say an event X will be significant irrespective of whether the media chooses to cover it (i.e., “it should ideally be covered”); when the media doesn’t cover the event, then I can recall X as well as pull up journalists who turned a blind eye. In this sense, ignoring the media is to not hold its hand at the beginning of the period being monitored – and it’s okay. But this is also what I find problematic. Why not help journalists look out for an event when you know it’s going to happen instead of relying on their ‘news sense’, as well as expecting them to have the time and attention to spend at just the right time?

Effectively: pull us up in hindsight – but only if you helped us out in foresight. (The ‘us’ in this case is, of course, #notalljournalists. Be careful with whom you choose to help or you could be wasting your time.)


Part I: Why Independent Media is Essential to Good Science Journalism

What was 2016 like in science? Furious googling will give you the details you need to come to the clinical conclusion that it wasn’t so bad. After all, LIGO found gravitational waves; an Ebola vaccine was readied; ISRO began tests of its reusable launch vehicle; the LHC amassed particle collisions data; the Philae comet-hopping mission ended; New Horizons zipped past Pluto; Juno is zipping around Jupiter; scientists did amazing (but sometimes ethically questionable) things with CRISPR; etc. But if you’ve been reading science articles throughout the year, then please take a step back from everything and think about what your overall mood is like.

Because, just as easily as 2016 was about mega-science projects doing amazing things, it was also about climate-change action taking a step forward but not enough; about scientific communities becoming fragmented; about mainstream scientific wisdom becoming entirely sidelined in some parts of the world; about crucial environmental protections being eroded; about – undeniably – questionable practices receiving protection under the emotional cover of nationalism. As a result, and as always, it is difficult to capture what this year was to science in a single mood, unless that mood in turn captures anger, dismay, elation and bewilderment at various times.

So, to simplify our exercise, let’s do that furious googling – and then perform a meta-analysis to reflect on where each of us sees fit to stand with respect to what the Indian scientific enterprise has been up to this year. (Note: I’m hoping this exercise can also be a referendum on the type of science news The Wire chose to cover this year, and how that can be improved in 2017.) The three broad categories (and sub-categories) of stories that The Wire covered this year are:

| Good | Bad | Ugly |
| --- | --- | --- |
| Different kinds of ISRO rockets – sometimes with student-built sats onboard – took off | Big cats in general, and leopards specifically, had a bad year | Indian scientists continued to plagiarise and engage in other forms of research misconduct without consequence |
| ISRO decided to partially privatise PSLV missions by 2020 | The JE/AES scourge struck again, their effects exacerbated by malnutrition | The INO got effectively shut down |
| LIGO-India collaboration received govt. clearance; Indian scientists of the LIGO collaboration received a vote of confidence from the international community | PM endorsed BGR-34, an anti-diabetic drug of dubious credentials | Antibiotic resistance worsened in India (and other middle-income nations) |
| We supported ‘The Life of Science’ | Govt. conceived misguided culling rules | India succumbed to US pressure on curtailing generic drugs |
| Many new species of birds/animals discovered in India | Ken-Betwa river linkup approved at the expense of a tiger sanctuary | Important urban and rural waterways were disrupted, often to the detriment of millions |
| New telescopes were set up, further boosting Indian astronomy; ASTROSAT opened up for international scientists | Many conservation efforts were hampered – while some were mooted that sounded like ministers hadn’t thought them through | Ministers made dozens of pseudoscientific claims, often derailing important research |
| Otters returned to their habitats in Kerala and Goa | A politician beat a horse to its death | Fake science news was widely reported in the Indian media |
| Janaki Lenin continued her ‘Amazing Animals’ series | Environmental regulations turned and/or stayed anti-environment | Socio-environmental changes resulting from climate change affected many livelihoods around the country |
| We produced monthly columns on modern microbiology and the history of science | We didn’t properly respond to human-wildlife conflicts | Low investments in public healthcare, and a focus on privatisation, short-changed Indian patients |
| Indian physicists discovered a new form of superconductivity in bismuth | GM tech continued to polarise scientists, social scientists and activists | Space, defence-research and nuclear power establishments continued to remain opaque |
| | Conversations stuttered on eastern traditions of science | |

I leave it to you to weigh each of these types of stories as you see fit. For me – as a journalist – science in the year 2016 was defined by two parallel narratives: first, science coverage in the mainstream media did not improve; second, the mainstream media in many instances remained obediently uncritical of the government’s many dubious claims. As a result, it was heartening on the first count to see ‘alternative’ publications like The Life of Science and The Intersection being set up or sustained (as the case may be).

On the latter count: the media’s submission paralleled – rather, directly followed – its capitulation to pro-government interests (although some publications still held out). This is problematic for various reasons, but one that is often overlooked is that the “counterproductive continuity” right-wing groups insist upon – between traditional wisdom and knowledge derived through modern modes of investigation – receives nothing short of a passive endorsement from uncritical media coverage.

From within The Wire, doing a good job of covering science has, as a result, become a battle for relevance. And this is a many-faceted problem: it’s as big a deal for a science journalist to report a significant story well as it is to come upon the story in the first place – and it’s as difficult to get every scientist you meet to trust you as it is to convince every reader who visits The Wire to read an article or two in the science section per visit. Fortunately (though let it not be said that this is simply a case of material fortunes), the ‘Science’ section on The Wire has enjoyed both emotional and financial support. To show for it, we have had the privilege of overseeing the publication of 830 articles, and counting, in 2016 (across science, health, environment, energy, space and tech). And I hope those who have written for this section will continue to write for it, even as those who have been reading it continue to read it.

Because it is a battle for relevance – a fight to be noticed and to be read, even when stories have nothing to do with national interests or immediate economic gains – the ideal of ‘speaking truth to power’ that other like-minded sections of the media cherish is preceded, for science journalism in India, by the more basic ideals of ‘speaking’ at all, and then of ‘speaking truth’. This is why an empowered media is as essential to the revival of that constitutionally enshrined scientific temper as are productive scientists and scientific institutions.

The Wire‘s journalists have spent thousands of hours this year striving to be factually correct. The science writers and editors have also been especially conscientious about receiving feedback at all stages, engaging in conversations with our readers and taking prompt corrective action when necessary – even if that means a retraction. This will continue to be the case in 2017, in recognition of the fact that the elevation of Indian science on the global stage, long said to be overdue, will directly follow from empowering our readers to ask the right questions and be reasonably critical of all claims at all times, no matter who makes them.

Part II: If You’re Asking ‘What To Expect in Science in 2017’, You Have Missed the Point

While I was a science reporter at The Hindu, I conducted an informal poll asking the newspaper’s readers for their impressions of science writing in India. The answers, received via email, Twitter and comments on the site, generally swung between saying there was no point and saying there was an uphill battle to be fought to ‘bring science to everyone’. After the poll, however, it still wasn’t clear who this ‘everyone’ was, beyond a consensus that it meant everyone who chanced upon a write-up. It still isn’t clear.

Moreover, much has been written about the importance of science, the value of engaging with it in any form without expectation of immediate value and even the usefulness of looking at it ‘from the outside in’ when the opportunity arises. With these theses in mind (which I don’t want to rehash; they’re available in countless articles on The Wire), the question of “What to expect in science in 2017?” immediately evolves into a two-part discussion. Why? Because not all science that happens is covered; not all science that is covered is consumed; and not all science that is consumed is remembered.

The two parts are delineated below.

What science will be covered in 2017?

Answering this question is an exercise in reinterpreting the meaning of ‘newsworthiness’ subject to the forces that will assail journalism in 2017. An immensely simplified way is to address the following factors: the audience, the business, the visible and the hidden.

The first two are closely linked. As print publications shrink and digital publications grow, a consideration of distribution channels online can’t ignore social media – specifically, Twitter and Facebook – as well as Google News. This means an increasing number of younger readers are available to target, which in turn means covering science in a way that interests this demographic. Qualities like coolness and virality will make an item immediately sellable to marketers, whereas news items rich with nuance and depth will take more work.

Another way to address the question is in terms of what kind of science will be apparently visible, and available for journalists to easily chance upon, follow up and write about. The subjects of such writing typically are studies conducted and publicised by large labs or universities, involving scientists working in the global north, and often on topics that lend themselves immediately to bragging rights, short-lived discussions, etc. In being aware of ‘the visible’, we must be sure to remember ‘the invisible’. This can be defined as broadly as in terms of the scientists (say, from Latin America, the Middle East or Southeast Asia) or the studies (e.g., by asking how the results were arrived at, who funded the studies and so forth).

On the other hand, ‘the hidden’ is what will – or ought to – occupy those journalists interested in digging up what Big X (Pharma, Media, Science, etc.) doesn’t want publicised. What exactly is hidden changes continuously but is often centred on the abuse of privilege, the disregard of those we are responsible for and, of course, the money trail. The issues that will ultimately come to define 2017 will all have had dark undersides defined by these aspects and which we must strive to uncover.

For example: with the election of Donald Trump, and his bad-for-science clique of bureaucrats, there is a confused but dawning recognition among liberals of the demands of the American midwest. So continuing to write about climate change for an audience composed of left-wingers or east- and west-coast residents won’t work in 2017. We must figure out how to reach across the aisle and disabuse climate deniers of their beliefs, using language they understand and arguments that motivate them to speak to their leaders about shaping climate policy.

What will be considered good science journalism in 2017?

Scientists are not magical creatures from another world – they’re humans too, and their collective enterprise is just as riddled with human decisions and human mistakes. Similarly, despite all the travails unique to it, science journalism is fundamentally similar to other topical forms of journalism. As a result, the broader social, political and media trends sweeping the globe will inform novel – or at least evolving – interpretations of what will be good or bad in 2017. But instead of speculating, let’s discuss the new processes through which good and bad can be arrived at.

In this context, it might be useful to draw from a blog post by Jay Rosen, a noted media critic and professor of journalism at New York University. Though the post focuses on what political journalists could do to adapt to the Age of Trump, its implied lessons are applicable in many contexts. More specifically, the core effort is about moving away from those primary sources of information (out of which a story sprouts) an overreliance on which has landed us in this mess. A wildly remixed excerpt:

Send interns to the daily briefing when it becomes a newsless mess. Move the experienced people to the rim. Seek and accept offers to speak on the radio in areas of Trump’s greatest support. Make common cause with scholars who have been there. Especially experts in authoritarianism and countries where democratic conditions have been undermined, so you know what to watch for— and report on. (Creeping authoritarianism is a beat: who do you have on it?). Keep an eye on the internationalization of these trends, and find spots to collaborate with journalists across borders. Find coverage patterns that cross [the aisle].

And then this:

[Washington Post reporter David] Fahrenthold explains what he’s doing as he does it. He lets the ultimate readers of his work see how painstakingly it is put together. He lets those who might have knowledge help him. People who follow along can see how much goes into one of his stories, which means they are more likely to trust it. … He’s also human, humble, approachable, and very, very determined. He never goes beyond the facts, but he calls bullshit when he has the facts. So impressive are the results that people tell me all the time that Fahrenthold by himself got them to subscribe.

Transparency is going to matter more than ever in 2017 because of how people’s trust in the media was eroded in 2016. And there’s no reason science journalism should be an exception to these trends – especially given how quickly science and ideology locked horns in India following the disastrous Science Congress in 2015. More than any other event since the Bharatiya Janata Party was elected to the centre, and much like Trump’s victory caught everyone by surprise, the 2015 congress spotlighted the extent of the blight on rationality that had seeped into the minds of some of India’s most powerful ideologues. In the two years since, the reluctance of scientists to step forward and call out bullshit has also become more apparent, in turn exposing the different kinds of undercurrents that drastic shifts in policy have led to.

So whatever shape good science journalism assumes in 2017, it will surely benefit from being more honest and approachable in its construction. As will the science journalist who is willing to engage with her audience about the provenance of information and of opinions capable of changing minds. As Jeff Leek, an associate professor at the Johns Hopkins Bloomberg School of Public Health, quoted statistician Philip Stark on his blog: “If I say just trust me and I’m wrong, I’m untrustworthy. If I say here’s my work and it’s wrong, I’m honest, human, and serving scientific progress.”

Here’s to a great 2017! 🙌🏾

Gravitational lensing and facial recognition

A Hubble telescope image of the galaxy cluster SDSS J1038+4849. Credit: JPL/NASA

These images of gravitational lensing, especially the one on the left, are pretty famous: apart from demonstrating the angular magnification effects of strong lensing very well, they’ve also been used by NASA in its Halloween promotional material. The ring-like arc that forms the ‘face’ is the result of the galaxy cluster SDSS J1038+4849 lying directly on our line of sight to a background object, whose light the cluster bends around itself. Because of this alignment, the light is bent all the way around, forming what’s known as an Einstein ring. This particular instance was discovered in early 2015 by astronomer Judy Schmidt.
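As an aside, the geometry behind such a ring can be captured in one textbook formula – quoted here from standard lensing theory, not computed for this particular cluster. For a compact lens of mass M sitting exactly on the line of sight to the source, the ring appears at the angular radius

```latex
\theta_E = \sqrt{\frac{4GM}{c^{2}} \, \frac{D_{LS}}{D_L \, D_S}}
```

where D_L, D_S and D_LS are the angular-diameter distances to the lens, to the source and from lens to source. The more massive the lens – and galaxy clusters are among the most massive bound objects known – the wider the ring.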

Seeing this image again prompted me to recall a post I’d written long ago on a different blog (no longer active) about our brains’ tendency to spot patterns that don’t actually exist – such as looking at an example of strong gravitational lensing and spotting a face. The universe didn’t intend to form a face, but all our brain needs to see one is an approximate configuration of two eyes, a nose, a smile and, if it’s lucky, a contour. This tendency is called pareidolia. What each individual chooses to see in more ambiguous images or noises is the basis of the (now-outmoded) Rorschach inkblot test.

A 2009 paper in the journal NeuroReport reported evidence that human adults identify a face when there is none only 35 milliseconds slower than when there really is a face (165 ms v. 130 ms) – and in both cases through a region of the brain called the fusiform face area, which may have evolved to process faces. The finding speaks to our evolutionary need to identify these and similar visual configurations, a crucial part of social threat perception and social navigation. The authors of the 2009 paper have suggested using their findings to investigate forms of autism in which a person has trouble looking at faces.

My favourite practical instance of pareidolia is in Google’s DeepDream project, a neural network used to over-process, differentiate between or recognise images. When software engineers at Google fed a random image into the network’s input layer and asked DeepDream to transform it into an image containing some specific, well-defined objects, the network engaged in a process called algorithmic pareidolia: picking out patterns that aren’t really there. Each layer of a neural network, read bottom-up, analyses specific features of an image – the lower layers going after strokes and edges, the higher layers after entire objects and their arrangement.

In many instances, algorithmic pareidolia yielded images that look similar to what the human visual cortex produces under the influence of LSD. This has prompted scientists to investigate whether psychedelic compounds cause electrochemical changes in the brain similar to the operations performed by convolutional neural networks (of which DeepDream is a kind). In other words, when DeepDream dreamt, it was an acid trip. In June 2015, three software engineers from Google explained how pareidolia takes shape inside such networks:

If we choose higher-level layers, which identify more sophisticated features in images, complex features or even whole objects tend to emerge. Again, we just start with an existing image and give it to our neural net. We ask the network: “Whatever you see there, I want more of it!” This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.
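The feedback loop the engineers describe is, at bottom, gradient ascent on the input image: the image, not the network’s weights, is the variable being optimised, nudged repeatedly in whatever direction increases a chosen layer’s activation. Here is a minimal sketch in Python/NumPy under loudly stated assumptions: the ‘layer’ is a single linear filter, and `filter_w`, `activation` and the step size are all invented for illustration – the real DeepDream backpropagates through a full convolutional network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one learned feature detector in a network layer
filter_w = rng.normal(size=(8, 8))

# A faint "cloud" of noise: the starting image
image = rng.normal(size=(8, 8)) * 0.01

def activation(img):
    """How strongly our toy 'layer' fires on this image."""
    return float(np.sum(filter_w * img))

step = 0.1
before = activation(image)
for _ in range(20):
    # For a linear layer, d(activation)/d(image) is just filter_w,
    # so "whatever you see there, I want more of it" becomes:
    image += step * filter_w
after = activation(image)

print(after > before)  # the loop amplifies the pattern the filter responds to
```

After enough passes the image is dominated by the filter’s pattern – the toy analogue of a highly detailed bird appearing seemingly out of nowhere.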

Diving further into complex neural networks may eventually allow scientists to explore cognitive processes at a pace thousands of times slower than the pace at which they happen in the brain.

An opportunity to understand the GPL license

Matt Mullenweg, 2009. Credit: loiclemeur/Flickr, CC BY 2.0

Every December, I wander over to ma.tt, the blog of WordPress founder Matt Mullenweg, to check what he’s saying about how the CMS will shape up in the next year. Despite my cribbing, and a constant yearning to be on Ghost, I’m still on WordPress and no closer to leaving than I ever was. And WordPress isn’t all that bad either (it runs The Wire, for example). In fact, I’m reminded of the words of a very talented developer with whom we at The Wire were consulting earlier this year. When I brought up the fact that PHP (the language WordPress is written in) isn’t very conducive to scaling, he replied, “Anything can be built on anything.” So, for all its problems, WordPress does many things well that other CMSs usually struggle with.

Anyway, lest I digress further: in a post on October 28, Mullenweg described the impact of the GPL license on WordPress’s development as “fantastic” – possibly because, as Linus Torvalds, who created the Linux kernel, has noted, the GPL license enforces itself: code derived from GPL-licensed code also has to be GPL-licensed. As a result, those making modifications to WordPress for their own use could not wall themselves off – which prevented fragmentation and preserved, in the words of University of Pennsylvania law professor Christopher Yoo, an environment that allows “multiple actors to pursue parallel innovation, which can improve the quality of the technical solution as well as increase the rate of technological change”.

GPL stands for ‘general public license’, and it governs the creation, modification, deconstruction, use and distribution of a great deal of software on the web. Mullenweg’s broader post was actually about his noticing how closely the UI of the mobile app of Wix – a platform that lets its users build websites with a few clicks – resembled WordPress’s own, and how there was, as a result, a glaring problem. In its composition, WordPress uses code that’s on the GPL. The GPL’s self-enforcement feature makes it a copyleft license: works derived from GPL-licensed work also have to be copyleft and distributed on the same terms. As a result, the code behind Wix’s mobile app had to immediately be made publicly accessible (by, say, putting it up on GitHub). It wasn’t.

Last I checked, the post had one update from Wix CEO Avishai Abrahami and 120 comments. All together, they illustrated how the terms of the license, though written in lucid enough language, were easy to confuse – and the sort of impediment such confusion poses to the license’s useful implementation. I spent an hour (probably more) going through all of it; if you’re interested in this sort of thing – or in learning something new – I highly recommend going through the comments yourself. Or, if you’d like something shorter (but also wider in scope), you could just keep reading.

The tale has four dramatis personae: the GPL license, the MIT license, Wix’s mobile app and WordPress’s code (plus a cameo appearance by the LGPL license). Code derived from GPL-licensed code also has to be GPL-licensed – a major requirement of the GPL being that: “… you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things” (emphasis added). This is also the clause that’s missing from the MIT license. As a result, code that’s originally under the MIT license can later be licensed as GPL (i.e. its source code made available) but code that’s originally under the GPL cannot later be licensed as MIT (i.e. source code that a GPL license has made accessible cannot be hidden away by the MIT license) – unless all the relevant copyright holders are on board with the shift.

Paul Sieminski, the general counsel of Automattic – the company Mullenweg founded, which runs WordPress.com – commented on Mullenweg’s post thus: “[Wix would] probably be in the clear if you had used just the original editor we started with (ZSSRichTextEditor, MIT licensed). Instead, Wix took our version of the editor which has 1000+ original commits on top of the original MIT editor, that took more than a year to write. We improved it. A lot. And Wix took those improvements, used them in their app… but then stripped out all of the important rights that they’re not legally allowed to take away. We’re just asking Wix to fix their mistake. Release the Wix Mobile App under a GPL license, and put the source code up on GitHub” (link added). So far so good.

Wix CEO Abrahami’s response – posted on his blog on Wix – though cordial, makes the mistake of being evasive and in denial at once. As many commenters pointed out, Mullenweg’s ask was simple and clearly articulated: bring the source code behind Wix’s mobile app under the GPL and upload it on GitHub. Abrahami, however, defended Wix’s decision to keep the source code proprietary by saying that it only used an open source library modified by WordPress (“that is the concept of open source right?”) for a “minor part” of the app, and that he would “release the app you [Matt] saw as well”. The latter statement should have resolved the dispute because GPL only mandates that the source code be made available when asked for – not necessarily on GitHub. George Stephanis, a developer at Automattic, added: “The source code has to be freely available to everyone that has the software. If you want a paywall, it has to treat the software and source as a unit — you can’t distribute the software, but then charge for the source code.”

Some commenters pointed out that Abrahami may have been confusing the GPL with the Library GPL (LGPL, now called the Lesser GPL), and as a result may not have been entirely clear about the “viral” nature of the GPL. When a library is under the LGPL, software that merely uses or links to it needn’t be GPL-licensed – only modifications to the library itself must be shared on the same terms. In the Wix case, for example, had the WordPress-modified open-source library been under the LGPL instead of the GPL, and had the mobile app merely used parts of it, the app’s source code wouldn’t have to be made available. In colloquial terms, the LGPL doesn’t infect the code it is associated with the way the GPL does; it is less “viral”.
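The direction in which each license propagates – the crux of the whole dispute – can be condensed into a toy lookup. Everything here is invented for illustration (the function name, the coarse two-relationship model, the catch-all ‘any’ outcome) and it glosses over real-world subtleties such as dynamic versus static linking; treat it as a mnemonic, not legal advice.

```python
def derived_license(base, relationship):
    """Toy model: what obligations attach to work built on `base` code?

    relationship: 'modify' (a derivative work) or 'link' (merely uses
    the code as a library).
    """
    if base == "GPL":
        # Copyleft: derivatives (and, on the FSF's reading, linked
        # works too) must themselves carry the GPL
        return "GPL"
    if base == "LGPL":
        # Lesser copyleft: changes to the library stay LGPL, but
        # merely linking to it doesn't infect your code
        return "LGPL" if relationship == "modify" else "any"
    if base == "MIT":
        # Permissive: derivatives may be relicensed - including as GPL
        return "any"
    raise ValueError(f"unknown license: {base}")

# WordPress's GPL'd fork of the editor, modified inside Wix's app:
print(derived_license("GPL", "modify"))   # GPL
# The original MIT-licensed ZSSRichTextEditor:
print(derived_license("MIT", "modify"))   # any
# Merely linking against an LGPL library:
print(derived_license("LGPL", "link"))    # any
```

Note the one-way street the toy rules encode: MIT code can flow into a GPL project, but GPL code can’t flow out into anything more restrictive of users’ freedoms without every copyright holder’s consent.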

Nonetheless, I’d think it’s arguably harder to know Wix’s code has to be GPL-compatible, or even to know what the license on it ought to be, if it isn’t publicly available at all times. In support: the relevant part from the license’s preamble, which I quoted earlier, is “that [the users] know [they] can do these things”. I use the word ‘arguably’ not in the legal sense but in the spiritual one – the spirit being that of the free-software movement. And this is why I’m glad Mullenweg chose to hammer this issue out in public (via his blog) instead of via email. Moreover, I’m also glad that he didn’t initiate legal action immediately either: the conversation between Mullenweg, Abrahami and all the commenters – despite the occasional passive-aggressive animus – deserved to happen instead of the groups splintering off and blocking each other. The open source community always needs more unity.

Then again, the licenses that help sustain these communities could do more harm than good if they become too restrictive – especially if they fall out of step with changing governance practices. Striving to keep alive the open source ideals we’ve associated with Richard Stallman while offering users too little freedom could result in a proliferation of alternatives that deprives useful software of its coherence. For example, Yoo writes in the paper I quoted from above,

… some restrictions on what people can do with open source operating systems are necessary if consumers are to enjoy the full benefits of competition and innovation. My point is not to suggest that open source software is inherently superior to proprietary software or vice versa. Both approaches have distinct virtues that appeal to different users. Moreover, any attempt to cast the policy debate as a choice between those polar extremes [i.e. open source and modular development] is based on a false dichotomy. Instead, the different modes for producing software platforms are better regarded as occupying different locations along a continuum running from completely unrestricted open source to completely proprietary closed source. Indeed, companies may even choose to pursue hybrid strategies that occupy multiple locations on this continuum simultaneously. The diversity of advantages associated with these different approaches suggests that consumers benefit if different companies are given the latitude to experiment with different governance models, with the presence of one open source platform serving as an important competitive safety valve.

Subscribe to The Wire‘s weekly science newsletter, Infinite in All Directions (archive), where I curate interesting science news, blogs and analyses from around the web.