Wednesday, December 7, 2022

Butterflies

They say words are the strongest tools of an eloquent mind, but numbness is oft not very eloquent.


There is fear in my heart,

And a din in my ears,

They said it won’t last,

Like butterflies in the chest

 

Stop, say I – to my deep desire,

Why are you no more the fire?

That I once so profoundly knew,

With caress and fondness grew,

Swept away with my right to croon,

Like butterflies in the chest

 

Alas, they said - nothing is forever,

No musty fragrance, nor river,

They all fade away,

Like butterflies in the chest

 

When the sun rose again,

After a night full of fiery rain,

I wait for a second waft of petrichor,

But they said it won’t last,

Like butterflies in the chest

Wednesday, March 29, 2017

A critique of recent forecasting failures: An interpretation of Nate Silver's The Signal and the Noise

Forecasting traffic is widely regarded as the first step in any planning or renovation model. It is the primary means of determining the practicality of a project, by establishing the scale of benefit its outcome would render, as well as the extent of its criticality. Travel demand is also an important socio-economic indicator of development, functional capacity and administrative quality within an executive region. As with forecasting of any form, travel demand or traffic forecasting is a tricky business. The complexity of estimating this quantity is compounded by its multifaceted nature: travel demand is a function of a number of factors, and these factors vary not only with the location and time period of the forecast, but also with the intrinsic values of the predictors used in the model.
Accuracy often is not the most sought-after skill among forecasters, says Silver. The consequence of this becomes particularly dangerous in time-sensitive events like Hurricane Katrina. Residents of New Orleans, a laid-back Louisianan city, like people elsewhere, took neither the hurricane prediction nor the word of the city mayor very seriously, which contributed to the infamous, large-scale devastation. Disbelief over superficially unimportant issues like weather, when aggregated over time, leads to skepticism about something as major as an all-engulfing hurricane. This, in today's day and age, is a very challenging problem to resolve, because it is the culmination of years of, for lack of a better word, sloppy priorities for evaluating forecasts. Forecasters from most walks of life are judged more harshly on their presentation and precision than on their accuracy and honesty. Though precision ostensibly looks like a highly sophisticated metric of a forecast's quality, it is often bought at the price of accuracy. Precision should be used as a tool to judge the accuracy and confidence of a forecast, not as the result of the forecast itself. Rampant misuse of precision, and over-confidence in one's predictions, feeds a mass delusion that the whole exercise is futile. For example, if weather forecasters were made to compete with climatologists over defending the precision of their estimates, objectively, they would lose more often than not. That would not make for a very convincing (read: profitable) weather forecast show. It thus paves the way for a discomfiting circular logic: stations are not looking to hire overly accurate weather forecasters because people have relatively low confidence in TV weather forecasts anyway, and people have little confidence in TV weather forecasts because they are not accurate most of the time.
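One simple way to score a probabilistic forecast against reality is the Brier score, a standard metric in the weather-forecasting world Silver describes. The sketch below uses invented probabilities and outcomes; it shows why an honestly hedged forecaster can outscore an always-certain one:

```python
# Brier score: mean squared error between forecast probabilities
# and binary outcomes (1 = it rained, 0 = it did not).
# Lower is better; 0.0 would be a perfect forecaster.

def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Invented example: ten days, rain on four of them.
outcomes = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]

honest  = [0.7, 0.2, 0.1, 0.6, 0.3, 0.8, 0.2, 0.1, 0.7, 0.3]  # hedged probabilities
showman = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]  # always-certain TV style

print(brier_score(honest, outcomes))   # modest penalty for honest hedging
print(brier_score(showman, outcomes))  # one confident miss costs a full point
```

The "showman" gets nine days exactly right, yet the single day he calls with total certainty and misses costs him more, in aggregate, than all of the honest forecaster's hedging combined.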
Similarly, traffic forecasts are used more as a tool for validating the personal benefits of pre-decided policies than as a tool to objectively inform the decision-making process. Moreover, if the estimates made by forecasting pundits are presented as a range rather than as exact numbers, it implies that policy makers (who, in this case, are evacuators and other disaster-management personnel) have to formulate solutions and prepare for more than one scenario. In some contexts, they might even be expected to brace for more than a finite number of setups. In a world used to drawing distinct borderlines between abstract events, this becomes an uncomfortable predicament. The cost of this hardship was ultimately paid by those trapped in the hurricane.
In the chapter "A Catastrophic Failure of Prediction", Nate Silver outlines the series of events which led to the disreputable economic crisis of 2008. Housing, since the very advent of the science of analyzing the macroeconomic influence of market commodities, was never chalked up to be a particularly lucrative investment, which translated directly into its decidedly safe reputation. The question which should have been asked, but never was, is "Really, how safe is safe?" Credit rating agencies exist to answer precisely this question; their sole purpose is to quantify the riskiness of investments (and mortgages). Answering it is clearly not as straightforward as one, or in hindsight, the rating agencies, might have hoped. The intricate route to a conclusive answer was conveniently ignored by the rating agencies because of the inherently "safe" nature of housing investments. Proprietors of such investments were generally considered law-abiding, high-income citizens, and their credit-worthiness was believed to be a function of outrageously under-researched and over-simplified ingredients like geographical location and income (a software scion in Silicon Valley versus a damp-shack dweller in Arkansas, for example). Preposterously wide ranges of credit-worthiness scores were abridged into coarse rating bands as a result of this faux pas. That neither investors nor rating agencies questioned the quantitative basis of the bond ratings is egregious, and the defense that even a 99% confidence score has a 1% chance of failing is a very flimsy one indeed: applied across a sufficiently large pool, a 1% risk means 100 failures in every 10,000, hardly a negligible tail.
If historical data is used to diagnose the validity of a statistical model, the quoted rate of risk is built in: if the forecaster has a history of successful predictions over a statistically significant period of time, and an adequately large dataset, the risk percentage is a known and quotable statistic. It should be noted that the underestimation of risk in such high-stakes markets is seldom a result of sampling biases and errors. It is instead, as Dean Baker said, 'baked into the system': usually the result of a faulty statistical model fueled by defective, incorrectly hypothesized assumptions. This gives the entire spectacle a distinctly criminal undertone. The seemingly inviolable nature of the mortgage market did not invite much investigative probing, which allowed its slow but exponential rise in attracting colossal amounts of investment, hedge funds, speculative bets and the like. Its share of fixed assets in the economy rose so much that it became one of the major drivers of the economy.
The algorithmic structure of assigning ratings to pools of mortgages to "bet" on reveals the expected percentage of defaults within each pool of investments. Assumptions are then made as to which risk pool each combination of mortgages belongs to. What the rating agencies failed to take into consideration was a very obviously delineable connection between seemingly unrelated housing mortgages (and consequently their default rates): skyrocketing house prices with no significant rise in incomes, essentially what we call a bubble. The money spent on buying these houses, instead of holding market value, was reduced to mere numbers, further inflated by the constantly increasing number of non-payers. Another argument for the seemingly unaccounted-for noise that hauntingly converted the 1% risk into a glaring reality could be the absence of context-specific evidence that might have alerted any active seeker to the inherent risk. This is not unlike how a driver with an untarnished, three-decade-old driving record has no evidence with which to judge how safely he might drive drunk after a party one night. But that excuse does not apply to the housing bubble fiasco: despite the precedent of two similar housing bubbles, one earlier in the USA and another eerily similar case in Japan, the rise of this flourishing market continued without any checks. Partly to blame was the investors' optimistic aversion to inspecting the number of defaulters within this market, which kept private rating agencies from performing even basic quality-assurance requisites. Most investment combinations even remotely composed of housing mortgages, or related to the housing sector, were automatically given the highest (that is, the safest) rating, AAA.
Any attempt at investigating the underlying cause of the increasingly prevalent AAA bonds in the market was either disregarded, discouraged or simply quelled with an air of poised incredulity.
After having established the necessity of paying ample attention to the often-unnoticed but context-sensitive factors lurking in a dataset, one should also consider dividing the data at the appropriate resolution, to keep the forecast practical and avoid over-generalizing the derived results. Silver addresses this in the chapter "For Years You've Been Telling Us That Rain Is Green". The failure first to conclusively foresee Hurricane Katrina, and then to effectively communicate the necessity of evacuating New Orleans, left thousands dead and many more devastated. Factors like consumer or client interest in the interpretability of the forecast, the impact of the decisions riding on it, and competition in the race to publish the most popular and easily inferred forecast all affect the methods and effort that go into forecasting. Recognizing that pooling too much data together gradually lets chaos take over is crucial to avoiding overly generalized solutions to complex research questions. In many cases, pooling might even lead to data being classified into categories unsuitable for its number of dimensions. Transportation systems, and the factors driving any major change in the demand for traffic facilities, are represented by utility functions that are dynamic, ever-changing and sensitive to perturbation. These changes might not be intuitive, and can escape notice now more than ever, given the vast amount of data modern researchers have access to. Chaos theory also points out the non-linear nature of these systems, meaning they might affect some component of the utility function exponentially. Modern computational facilities enable forecasters to speed up multi-dimensional calculations, a capability that is only useful if applied in a carefully designed, context-specific manner.
Another factor of importance is the translation of data. In forecasting, and in traffic demand forecasting in particular, translating data and results sourced from a different point in space and time plays a pivotal role in shaping the final inference drawn from the database. Silver, in the chapter "All I Care About Is W's and L's", mentions Baseball Prospectus, which enumerates statistics for both major- and minor-league baseball players. Numbers for minor-league players are skillfully standardized to make them comparable to those of major-league players. Results sourced from analyses, studies and predictions carried out in conditions unrelated to the context under examination create a bias in a forecasting method. On the one hand, it is important to keep track of advancements in forecasting methodologies elsewhere and to incorporate those approaches, with suitable caution, into our own circumstances; on the other, it is essential to learn to segregate such results and keep them from introducing unnecessary noise and prejudice into our own calculations. Some events are plain random, a consequence of belonging to a larger, more untamed set of disconnected data, and are capable of causing significant kinks in prediction results. A number of factors differentiate major- and minor-league players from each other: the size of the field, the frequency of games, and so on. It is therefore imperative that wins and losses from each category of the game be normalized to allow an "equal-ground" comparison between them. In a game like baseball, which is often played on fields of non-standard dimensions, such calibrations are essential not only between the major and minor leagues, but often even between games within the same league.
How this extends to transportation and traffic demand forecasts is that methods which have historically proven correct may not necessarily work in a different setting, a different time period, or for a different demographic group. Baseball Prospectus, for instance, used 'park scores' to evaluate and homogenize scores sourced from games played in each park. Similar empirical studies need to be carried out when calculating demand within an environment composed of different temporal, spatial and heuristic elements: the social groups inhabiting an area, the key demographic expected to use a given transportation facility, the number of years the forecast covers, the mode for which the demand model is being calculated, and so on. These scores should then be factored into the analysis with appropriate consideration when forecasting demand for the respective conditions.
Traffic forecasts essentially play the role of rating agencies in a world composed of federal and state departments of transportation, private construction contractors, political policy makers, urban planners and others. To account for the numerous influential nuances embedded in the socioeconomic and temporal data driving transportation forecast models, it is vital that context-sensitive methodologies be adopted. If forecasting ethics could be isolated from the political uses of transportation model results, arriving at objective predictions based solely on available data, particularly in developed countries like the USA, would seem a viable and practical endeavor. In practice, however, the presence and interests of major stakeholders govern such forecasts. Analysts, in such cases, are expected to devote more time to devising technically sound defenses of a set of predetermined outcomes than to ensuring the accuracy of the forecast itself. Resource allocation to various transportation projects benefits a number of sections of society, and although there may be grounds for rationalizing such benefits from a political perspective, the purpose of forecasting traffic demand should not, strictly speaking, be governed by such extenuating validations. On occasion, forecast specialists employed by firms tweak the assumptions to which the forecast results inherently adhere for self-serving purposes: for instance, to rake in subsequent contracts whose existence depends on the initial step being projected as indispensable. Because a forecast is effectively unverifiable once its underlying assumptions are taken as warranted, forecasts end up being used as an obligatory preliminary step rather than as an input that fundamentally drives the decision-making procedure [1]. The rampant abuse of estimation methodologies has made people cynical about the very necessity of the procedure.
As a result, policy makers may soon advocate removing this step entirely, instead of making the process transparent and accessible to the public. That would culminate in the blatant abuse of political mandates, with stakeholders solely responsible for authenticating the necessity of any future public project. Resolutions to address this are thus the need of the hour. As Wachs (1990) suggests, aware and educated masses, well equipped to question the authenticity and accuracy of forecast results, would give forecasting agencies as well as political and entrepreneurial stakeholders greater impetus to function more honorably. Likewise, acknowledging the ambiguity in forecasters' codes of ethics and professional practice would go a long way toward addressing the dilemma forecasting personnel face between being honor-bound to serve their employer and staying true to their professional integrity. Hartgen (2013) explores the techniques European and Australian forecasters use to address the uncertainties in the values of the variables shaping their transportation model predictions. Uncertainty logs are developed to quantify the indecision associated with the value of each randomly distributed variable. These quantitative measures are then classified into ranks (much as the rating agencies did with the risk associated with each bond). A list of decisions linked to each of these ranks serves as an advisory tool for judging the practicality and necessity of the project being undertaken. The risk-based decision module follows a rubric of recommendations designed to facilitate the decision-making process: whether a specific project could be undertaken by tweaking policies, by changing the size and scale of construction, and so on.
Hartgen points out the necessity of calling out the ethical concerns surrounding forecasting practices, instead of fixating solely on modifying the underlying structure of the forecasting methodologies from modal to topical. Like Silver, he suggests presenting the results of travel demand models in probabilistic form and/or as ranges of possible scenarios, rather than as bare numbers. He notes that this would improve the quality of these forecasts in terms of accuracy, as well as preparedness on the part of the contractors. He has formulated a rubric for identifying inaccuracies in forecast results which can be used at almost every level of the stakeholder hierarchy: the public, journalists, political beneficiaries, contractors, engineers and analysts alike. Unrealistic and unverified assumptions are often the most serious culprits behind a glaringly flawed forecast, and the validity of these assumptions should not rest purely on their justifiability. In an uncertain socioeconomic paradigm, which is the reality for any forecast made twenty years into the future, almost any assumption can be defended by citing the vast cloud of improbability. Such assumptions are simply a computational convenience and should be treated as such.
Using untested methodological advancements in travel demand modelling, without sufficient reliability tests specific to the context of the forecast, should be discouraged. Care should be taken to correct for temporal bias and sampling errors in travel behavior data when using the four-step model, since the data and assumptions about travel behavior are obtained from different spatial points across an area, but not necessarily from different points in time. Overfitting the data, as both Hartgen and Silver point out, is another major but tremendously commonplace misapplication of statistical models. Overfitting creates breeding grounds for insinuating bogus relationships between variables in a database. One might think that such spurious relationships could easily be spotted and expelled from the model results, but in reality it is not that straightforward: real-world data is far too amorphous and noisy for a conspicuous relationship to stand out, and some of these spurious correlations may not look intuitively apocryphal at all. Silver cites the winner of the Super Bowl being considered a major predictor of the economy's performance for the better part of the 1990s. The hypothesis had excellent R-squared and p-values, and the model even performed well in "predicting" GDP growth for a few years before it started to fail and was called out for its coincidental, correlation-without-causation nature. Theoretically, the probability of the relationship being due to chance alone was less than 1 in 4,700,000. What is interesting is that figures like these could just as easily be generated by fitting a model of chicken production in Uganda against the economy of the USA: a textbook spurious correlation, not a causal link. Similar in effect are the personal biases introduced during sampling or modelling by an analyst producing intuitively appealing forecasts.
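The overfitting trap is easy to reproduce. The toy below (all data synthetic) compares an honest straight-line fit with a model that simply memorizes the training data: the memorizer posts a perfect in-sample R-squared, exactly the kind of figure that makes a spurious model look impressive, and gives it back out of sample:

```python
import random

random.seed(1)

# Invented toy data: a linear trend plus noise.
x_train = [i * 0.5 for i in range(40)]
y_train = [2.0 * x + random.gauss(0, 3) for x in x_train]
x_test = [i * 0.5 + 0.25 for i in range(40)]   # held-out points
y_test = [2.0 * x + random.gauss(0, 3) for x in x_test]

def r_squared(y, y_hat):
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

# Honest model: ordinary least-squares straight line.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Overfit model: memorize the training set (1-nearest-neighbour).
def nn_predict(x):
    return min(zip(x_train, y_train), key=lambda p: abs(p[0] - x))[1]

slope, intercept = fit_line(x_train, y_train)
line_in = r_squared(y_train, [slope * x + intercept for x in x_train])
line_out = r_squared(y_test, [slope * x + intercept for x in x_test])
nn_in = r_squared(y_train, [nn_predict(x) for x in x_train])   # exactly 1.0
nn_out = r_squared(y_test, [nn_predict(x) for x in x_test])

print(f"line:      in-sample R2={line_in:.2f}, out-of-sample R2={line_out:.2f}")
print(f"memorizer: in-sample R2={nn_in:.2f}, out-of-sample R2={nn_out:.2f}")
```

The memorizer's flawless in-sample score is the statistical equivalent of the Super Bowl indicator's "excellent R-squared": impressive until the model meets data it has not seen.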
Utmost care should therefore be taken to ensure the comprehensiveness of an analysis, and explicit post-hoc diagnostics should be encouraged to minimize the risk of ending up with a contrived forecast. This again fortifies the necessity of reporting results accompanied by their respective margins of error.
Zhang et al. (2012), in their study of peak temporal traffic trend forecasting, compared different non-parametric models to arrive at the most effective and least computationally intensive method of providing real-time peak-hour traffic forecasts from historic peak-time traffic data [2]. It should be noted that even for models employing non-parametric methods (in this case, least-squares support vector machines) to analyze large time-series datasets, there is considerable noise on days displaying more haphazard peak-hour traffic (Thursdays, for instance). Historic data, too, fails to predict with significant accuracy the real-time traffic to be expected during these hours. This is reminiscent of the erratic weather forecasts, issued ten days ahead of the target date, that Silver discusses in the chapter "For Years You've Been Telling Us That Rain Is Green". Forecasts made ten days out and promptly rolled out through savvy interfaces were based mostly on moving averages of historic weather data; hardly any refined analysis actually went into them, and the forecasters themselves have little faith in the numbers. More often than not, these numbers fail to resemble the more cogently produced predictions (based on climatic and temporal data, typically within a week of the target date), though they may perchance seem intuitive against the actual weather encountered on the target dates.
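The gap between a long-run historic average and even a crude short-horizon model can be illustrated on synthetic data; the weekly cycle, volumes and noise levels below are all invented stand-ins for real detector counts:

```python
import math
import random

random.seed(7)

# Synthetic daily peak-hour volumes: a weekly cycle plus noise.
days = 200
series = [1000 + 150 * math.sin(2 * math.pi * d / 7) + random.gauss(0, 60)
          for d in range(days)]

def mae(pairs):
    """Mean absolute error over (actual, forecast) pairs."""
    pairs = list(pairs)
    return sum(abs(a - f) for a, f in pairs) / len(pairs)

# "Climatology": forecast every day with the long-run historic mean.
hist_mean = sum(series[:100]) / 100
climatology = mae((series[d], hist_mean) for d in range(100, days))

# Short-horizon model: forecast with the same weekday one week earlier.
seasonal = mae((series[d], series[d - 7]) for d in range(100, days))

print(f"historic-mean forecast MAE:     {climatology:.1f}")
print(f"last-same-weekday forecast MAE: {seasonal:.1f}")
```

The historic mean ignores the weekly structure entirely, so even a naive "same weekday last week" rule beats it. A ten-day-out forecast that is really just a dressed-up historic average carries exactly this handicap.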
Building on that premise: in a race to make forecasts appear more precise and larger than life than they really are, individuals and agencies often try to distract the observer from the number of times the prediction has failed. The failure rate, or 'risk', associated with a forecast is as important as the accuracy, result and confidence level of the forecast itself. Selective reporting of results is fundamentally unethical, yet when the stakes are not too high, say a TV show prophesying election results, or when a forecaster is instructed to mold his or her predictions to fit a specific frame, failed predictions are hardly ever discussed. It should be noted that many such discussions, like the TV panels anticipating election results in the weeks leading up to election day, are substantially a form of feedback data rather than the critical analysis they usually masquerade as. Many of these 'analyses' have stark underlying flaws: small sample sizes, evidence opposing their hypotheses conveniently overlooked, and so on. While some of these predictions have to be right only once for their author to be regarded as a highly gifted political analyst, many of these experts have made multiple, often blatantly contradictory, claims based on incidental evidence. But this elitist stance may not work entirely in the favor of political scientists either. What it depends on is whether their forecasting approach is based on pools of contingent data or follows a one-size-fits-all methodology. Sometimes, information obtained from diverse and unrelated strands is woven together by keen observers to predict certain outcomes. A classic example of this is the failure of many a political scientist to predict the collapse of the USSR.
The dissimilar pools of data (in this case, news) were not even contrary to one another in a way that would have instigated two opposing schools of political prediction. Instead, the people who predicted the demise of the union fortified their conclusion by assimilating data gathered from multiple sources. As Silver writes in the chapter "Are You Smarter Than a Television Pundit?" (a nod to the TV show "Are You Smarter than a 5th Grader?", cleverly sneering at the likes of McLaughlin), "fox-like" forecasters, who are scrappy about locating information and interlacing it into a comprehensive story, usually have a higher success rate when it comes to prediction. In spite of this, their predictions hardly make the headlines, probably because they lack an overbearing cockiness about their conclusions. Their forecasts are typically built from unintuitive bits of information strewn together, often culminating in complicated and hard-to-explain empirical derivations, and the average reader or viewer has neither the patience nor the acumen to sift through the humdrum of proofs involved. Such people also do not characteristically make particularly charming TV guests. The roster of clues they offer incorporates ideas from multidisciplinary sources, their statistical models are highly sensitive to new pieces of information, and for these reasons they are far too cautious about their predictions. Their predictions also frequently fail to resolve many seemingly related surrogate questions, mostly because those questions are exactly that to them: unresolvable, given the current set of data.
The other category of forecasters, the "hedgehogs", to quote the famous UC Berkeley psychologist Philip Tetlock, are what you and I would call the "alpha" forecasters. They are dedicated analysts, often trained to offer predictions based on established theories. They are mostly career forecasters who deal with limited areas within forecasting (by extension, this type of personality is often found among professional urban planners, who are trained to function exclusively within specific specialties). People with such highly devoted expertise often belittle outsiders' opinions, ultimately rejecting the pool of additional information a keen, scrappy fox might have gathered and have to offer. In such cases, a new trace of data is viewed not as a potential reason to change the pre-existing theory or statistical model, but merely as material for refining the current (read: age-old and time-tested) model. This often leads them to ignore crucial changes in the present day, which can lead to what Silver earlier called a catastrophic failure of prediction. Acknowledging the chaos within the predictor variables is the first step toward attempting to evaluate it, and "hedgehogs" are very reluctant to embrace the anarchy within their analytical turf. The exceedingly imposing inferences "hedgehogs" draw in their predictions make them excellent TV guests: they exude the sort of confidence which accompanies precise forecasts, irrespective of accuracy.
Instinctively, the secret to developing into a better forecaster clearly lies in being 'foxy'. What this means, Silver explains candidly, is to assess probabilities instead of single figures. In many cases, the evaluator might be left with a range of outcomes as wide as almost half of all possible results. A 'hedgehog' might argue that this is a pre-conceived excuse for the failure of one's predictions. What they will most likely fail to take into account is that such a range would prevent a forecaster from quoting a figure as absurdly off the mark as predicting that Republicans would gain 100 seats in the House in the 2010 midterm elections, when they gained 63. Individual p-values pertaining to a single value of the dependent variable are often misleading when a statistical model has not been corrected for selectivity bias, random-variable biases and unidentified panels in the data. The likelihood of a range of values within a reasonably defined confidence interval being as preposterously faulty as the one mentioned above is, on the other hand, meagre (and where it happens, either a capital flaw exists in the model's methodology, or the number of unidentified lurking variables is too large to account for). The combined likelihood of a range of values, even after being conservatively corrected for multiple comparisons and post-hoc analysis, will hardly administer a misleading prediction, even if it is not aggressively precise. Although the results of this foxy mechanism of forecasting are useful, and applicable to real-life problems spread over a longer period of time and a sufficiently large dataset, it may be impractical and, to some extent, cumbersome to act upon a range of possible outcomes. When lives are not at stake, a more conservative, 'hedgehog' way of working might be vouched for, given ample empirical evidence of its veracity.
But as far as practical, consumers should place their faith in the more soundly principled probabilistic prediction, to avoid making high-stakes blunders in exacting circumstances such as threats to life (as in medicine) or to the economy.
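The 'foxy' prescription, report a distribution rather than a number, can be made concrete. The sketch below assumes we already have a cloud of simulated outcomes (here drawn from an invented normal distribution centered on 63, echoing the 2010 example); the point is that the interval, unlike the single number, carries its own uncertainty:

```python
import random

random.seed(3)

# Hypothetical example: 1000 simulated outcomes for some quantity of
# interest (seats gained, vehicles per hour, ...), e.g. from a
# bootstrap or a Monte Carlo model run.
outcomes = sorted(random.gauss(63, 12) for _ in range(1000))

def percentile(sorted_xs, q):
    """Crude empirical percentile of an already-sorted sample."""
    i = min(len(sorted_xs) - 1, int(q * len(sorted_xs)))
    return sorted_xs[i]

point = sum(outcomes) / len(outcomes)                               # the "hedgehog" single number
low, high = percentile(outcomes, 0.05), percentile(outcomes, 0.95)  # the "fox" range

print(f"point forecast: {point:.0f}")
print(f"90% interval:   {low:.0f} to {high:.0f}")
```

A forecaster reporting this 90% interval could never be tempted into the 100-seat headline: that figure sits far outside the range the simulated outcomes support.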
Khaled et al. (2003) addressed the issues surrounding the lack of credible data sources in developing countries and its impact on traffic forecasting, and consequently on the capacity to conduct cost-benefit studies for numerous land-use and urban planning projects [3]. This, coupled with high urban population densities, underdeveloped roadway facilities for non-motorized vehicles, high crash and traffic fatality rates, degrading environmental conditions and saturated traffic operations in developing countries, magnifies the scale of financial loss if an unplanned and poorly analyzed transportation project is undertaken. Moreover, with continually rising incomes, a thoroughly planned infrastructure and traffic management system is becoming the need of the hour. The four-step model of urban transportation planning, developed essentially for first-world countries, is an ineffective and trite way to address the entirely different heuristics of developing nations. The trip generation step of the four-step model requires socioeconomic data to predict the demand for transportation facilities, while giving no consideration to quantitative metrics like travel time and roadway capacity, which in developing countries are subject to highly distinct and discernible differences depending on a number of traffic-interrupting influences. Also, population and land-use patterns change more rapidly in a developing economy than in developed countries, making predictions well into the future (the necessity and magnitude of transportation facilities are usually projected twenty years ahead) subject to a variety of unknowns. To counter this, Khaled et al. suggested modelling the urban network in the TransCAD software, superimposing demographic and land-use data from a GIS shapefile, and creating skim matrices of the origin-destination pairs thus generated.
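A skim matrix is simply a table of zone-to-zone travel costs along shortest paths. As a minimal sketch of what a package like TransCAD computes from the coded network (the four zones and the travel times below are entirely hypothetical), Floyd-Warshall yields all origin-destination pairs at once:

```python
INF = float("inf")

# Hypothetical 4-zone network; directed edge travel times in minutes.
nodes = ["A", "B", "C", "D"]
times = {("A", "B"): 12, ("B", "A"): 12,
         ("B", "C"): 8,  ("C", "B"): 8,
         ("C", "D"): 15, ("D", "C"): 15,
         ("A", "D"): 40, ("D", "A"): 40}

# Initialize: zero on the diagonal, direct edge time or infinity elsewhere.
skim = {(i, j): (0 if i == j else times.get((i, j), INF))
        for i in nodes for j in nodes}

# Floyd-Warshall: allow each intermediate zone k in turn.
for k in nodes:
    for i in nodes:
        for j in nodes:
            skim[i, j] = min(skim[i, j], skim[i, k] + skim[k, j])

print(skim["A", "D"])  # A-B-C-D = 12 + 8 + 15 = 35, beating the direct 40
```

Each entry of `skim` is the shortest travel time between an origin-destination pair; these are the impedances that the distribution and mode-choice steps of the four-step model then consume.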
The utility functions used to distribute the trips and factor the mode splits would be generated by an empirical expression: the summation of the individual products of trips T and a proportion P determined by the stochastic nature of the destination. The proportion variable is stochastic because it must account for traffic from new generators, varying land-use patterns and the changing demographic statistics of the region in question.
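As a minimal sketch of the summation described above, the snippet below distributes generated trips over destinations as the sum of products of trips T and proportions P. The function name, the zone counts, and all numeric values are illustrative assumptions, not taken from Khaled et al.; in practice the proportions would come from the stochastic estimation the paper describes, not be fixed constants.

```python
def distribute_trips(T, P):
    """Distribute generated trips over destination zones.

    T: trips generated at each origin zone (list of floats).
    P: P[i][j] = proportion of zone i's trips attracted to
       destination j; each row should sum to 1.
    Returns total trips attracted to each destination zone.
    """
    n_dest = len(P[0])
    return [sum(T[i] * P[i][j] for i in range(len(T)))
            for j in range(n_dest)]

# Hypothetical example: two origin zones generating 100 and 200
# trips, split over two destination zones.
T = [100.0, 200.0]
P = [[0.6, 0.4],
     [0.3, 0.7]]
print(distribute_trips(T, P))  # → [120.0, 180.0]
```

Note that the total distributed trips (120 + 180 = 300) equals the total generated trips, which holds whenever each row of P sums to 1.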

While Khaled et al. investigated the experiential relationships of transportation-demand modelling in developing countries, Naess et al. (2014) examined the sources and causes of forecasting inaccuracies in transportation modelling in Scandinavian countries [4]. They analyzed and quantified errors arising from a number of methodological missteps in the statistical calculation of future travel demand. Survey data was used to explore extant practices in the field of transportation forecasting, and insights were sought within this data to identify the underlying channels through which common inaccuracies like optimism bias, strategic misrepresentation and sampling errors could creep into a model. To avoid over-generalizing the results of this study, it should be noted that the region it addresses, Scandinavia, is a developed economy: it sits at a peak of socioeconomic prosperity and possesses highly advanced traffic systems and transportation facilities. Additionally, its traffic forecasting agencies are institutionalized to a large extent, meaning there is a certain uniformity in how forecasting agencies within the same department operate. Overestimation of future traffic demand was found to be a major issue in most European countries; this is widely referred to as optimism bias, which also includes the underestimation of construction costs. This combination of issues is a common occurrence, and was also scrutinized by Hartgen and Wachs in their papers critiquing prevailing transportation forecasting techniques and ethics [5]. Another insight drawn from the Scandinavian questionnaires was that the forecasting agencies almost unanimously agreed that ontological explanations like delays in construction, unpredictable land-use development and the emergence of unforeseen transportation infrastructure lurked behind failed predictions of traffic demand models.
They further reported that unexpected and unpredictable geopolitical trajectories, and the vastly different vested interests of political and business groups, were responsible for the uncertainties in the predictive models, and that significantly accurate predictions could not be made for demand models as far as 10 years into the future based on the data we have access to today. Interestingly, there is hardly any mention of probabilistic values and/or ranges of predicted values for the demand models, which might prove useful in explaining the respondents' undisputed distrust of predictive models.

Saturday, February 22, 2014

The Cog-not

The snow on the hilltops, they say, has lain unwilted for centuries, uninterrupted by gales, or the decrepit warmth of the short-lived summer sun, or the numerous footfalls of tourists transitorily draped in local attire, or the tousled hammering of the terrain through nappy trekking shoes, or the ceaseless indentations of undying love carved into the snow by young lovers.
Somehow, the perpetuity of the stubborn, unyielding disavowal of the frozen waterfalls brings back an untouched school memory from the ghost of Class 6th past: that one chunk of cognizance that refuses to unhook from the ever-budding set of morale-making episodes cropping up through life, where it had lodged itself, contributing in major proportion to the idea of polity and a cog-wheelistic helplessness that has so doggedly become a part of mature conscience.

It was lunch-time at school. Monotony dictated that when the closing bell rang, all the girls were to form a line to be escorted back to our regular classrooms for the after-lunch classes. The whole process was a daily hassle, as each one of us was subjected to an avalanche of dragooning, and a near stampede took place every day. Everyone refused to put in an iota of effort to remain standing at their original position, so that at least the line of girls in front of the respective effort-putting pioneer could be saved from all the back-and-forth pushing the girls behind were subjecting them to. All it would have taken was a little muscular effort. So one afternoon, a little girl finally made up her mind not to be physically dislodged from her position and to remain put, standing at her spot, no matter how much impetus she had to bear in order not to be organically moved. I must mention here that the girl was a plump, heavyset bundle of physical prowess in those days, and therefore her decision to take a stand (literally!) was not entirely thoughtless or impractically dramatic. Anyhow, she put her plan into action. And it worked; or so it seemed in the beginning. To paint a sketch of acute accuracy, I will blow my story a little out of proportion so that it is easier for the reader to mellow it down, thereby arriving at a pretty precise picture. So, to an onlooking bystander, it appeared as if the girl was putting in every last bit of effort she had to stay rigidly at her position, with an ever greater magnitude of confused rampage, domination, perplexity and chagrin at her back, and a perfectly undisturbed, tranquil line of students to her front. This went on for a while, until a teacher spotted the proceedings. Not unpredictably, she had eyes only for, or to rephrase, she had her eye only on, the girl.
To a completely unaware, unrelated and just-introduced-into-the-frame-of-things person such as her, it seemed as if the girl was the sole source of all the commotion taking place at her back; the effort she was putting in to keep the students in front of her unaffected by the totally antonymous discomposure morphed, in the teacher's eyes, into the primitive provenance of a more-than-usual pandemonium than she was used to handling and, consequently, ignoring. Without any further ado, she marched straight up to the girl and planted a tight slap on her face for allegedly fueling the tumult. The girl was pulled out of the line and made to stand out in the sun for the rest of the day. The infuriation of being misunderstood seemed maddening to her.
For a really long time after that incident, the girl refused to get involved in any trailblazing, colonizing, spearheading fountainhead activity.

Wednesday, November 13, 2013

Doosro ke jay se pehle khud ko jay karein! (Conquer yourself before conquering others!)

I wish, before pouring out words from my rust-endowed, stubby fingers, rendered mute for the past couple of months by a rich gravy of thoughts doing rounds inside my ever stringent, unfathomable mental labyrinths, that I be not arbitrated. Alert: obnoxious, contentious, cantankerous thoughts ahead.

Wandering cognizance elusively stares at me, a caravan of the vast expanse of the clear, blue sky blanketing the minuscule fraction of charted universal existence, threatening to burst out of the very bars that hold it in place. Ghosts of the poetic past, leaders of the spiritual mass, and the relatively new ideas of selflessness strewn widespread across society, negating the very concept of the virtue of selfishness, surprise me. Shock me, to be fair. Examining human behavior throughout history, and focusing on our choices between alleged morality and selfishness, the pattern of wielding power for personal benefit emerges distinctly. Personal benefit has always been an innately human trait. But instead of being trained to nurture and channel selfish desires into productive causes, which appears to be a Herculean task by the sound of it, we have been taught to suppress the gargantuan source of selfish desire into oblivion, lest it become too handsome to handle for the non-capitalistic elements in society.

We have been led to believe by renaissance authors and compellingly beatified examples that the sun burns tirelessly to nurture life, that rivers glide through bellicose, prickly terrains to quench human desires, that altruism assuages the essence of mankind. Undeniably, this has led to leaping strides in humanitarian fields, and it is not completely circumscribed to mortality either, but it has swept much more under the rugs of self-inflicted, stoic excellence; it has destroyed Tesla's vision of an infinite power tower, portrayed parental love as an unexplained selfless act, slashed NASA's funding, turned sweatshops churning out illicit, lovely artifacts into throes of unquestionable business, and incarcerated the common desire to stare at the stark imagery of rationality.

Selfless love is an unattainable mystery. The sun has been burning for millions of years only to conclusively cringe into a little white star; it is simply outliving itself. That life has bloomed in its course is a happy by-product of its age. Arguing that it sends out UVB rays nevertheless would be engorging the argument into ludicrous proportions. The glaciers would have melted and deceased into their own pools of carcass had they not fractionally melted and created rivers. Had parental love been as pure as universally acknowledged, there would not have been a sharp surge in misery and heartbreak, and the perceptions of "expectations" and "falsifications" would not have ripened in the first place.

Patriarchal inheritance, again not an act of selflessness, is the sole reason why the concept of marrying for love has not originated in its truest essence in this country. No matter how much a person loves his betrothed, the smallest iota of mal-intentioned possibility still holds. The evolution of exponential proportions of respect for vocational degree holders has created a dearth of conformity, and the echelons of selfless love have been thrown out of the window. Last I checked, they were decaying in a marsh of unrealized outcrops, contrary to what was promised by the pallbearers of charity.

"Women have been found to find altruistic men to be attractive partners. When looking for a long-term partner more conventional altruism may be preferred which may indicate that he is also willing to share resources with her and her children while when looking for a short-term partner heroic risk-taking, which may be costly signal showing good genes, may be more preferable." Ambiguous?!

Cellular slime molds conforming to Darwin's theory of survival of the selfless, perish. Paradox, huh?!

Tuesday, August 20, 2013

About that

I write today, not as a backwash of boredom, but because I owe it to myself, to my words, as Plath put it only too familiarly, not to let them rust and rot, of lust and thought. Penning them down before they are lost amid the deluge of the atrocious threshold of acceptance that The Almighty has so generously bestowed upon me; before they plunge into the deepest abyss of my subconscious, returning only to torment me in dreams that I once again, blissfully, forget in a trice, unless they are spiked with images of paisley debutante actors, Aladdin-esque jewels or a Lamborghini.


Strangely, the cue to wander away from gloom, entrenched pretty strategically up there, fails to make my mind take a predictable detour. But then again, some things are harsh to write about. When something happens to us, we write it down, either underplaying it or over-dramatizing it; exaggerating the immaterial, ignoring the essentials. At any rate, you never quite write it the way you want to. It's poignant, Chaitali, a crushing, lamentable insight, that people you thought shared the same wavelength as yours don't. Perceiving people and letting go of expectations is like trying to find one's reflection in pieces of broken mirror underwater. One realizes they exist only upon seeing one's own time-worn face staring back. A moment ago, they had been mere imagery of illusion. But the instinct of holding on to the past, to the unreal, non-existent statuettes of disgruntled intentions, is but puerilely human. If you linger too long, they prick you and you bleed. All of it happens in a surreal universe and you don't feel the ache. That's the beauty of it. You trudged dreary, hurtful paths in the past and the pain saunters still, but you fail to notice. You have accepted it and grown accustomed to it. You have lugged the baggage along for too long to notice the slack in your pace. The past doesn't belong to you, and by extension, neither does the accoutrement it entailed. Don't give in. Don't give up, Chaitali.

Sunday, August 11, 2013

শ্রাবনী সন্ধ্যা (A Shravan Evening)

In introspection I found you in a moment of the mind, in the life of my limbs,
Its soft pattering sound falls, morning and evening, upon that ear,
In that knowledge of the jasmine-earth, the gaze of the soil beneath,
On every side I welcome the morning in the sound of your song,
Incense in the flute's tune, the night of the cloud's body,
Love's conversation turned over like green Shravan leaves,
The gold-hued evening's glow looks on and speaks to the leaves!