I’ve spent a lot of years writing about work, technology, revolution, organizing, politics, etc. From the early days of Processed World (1981-1994) to my book Nowtopia (2008) and in dozens of essays on this blog and on Foundsf.org, I’ve addressed how the shape and control of work is the linchpin of how we live, of how we make life together.

As I dug into history I realized that it was more than a century ago that a key battle was lost: who decides what kinds of technologies we use, why, and to what purpose? Who decides what work we do, how we do it, how the benefits of our efforts are distributed, and what we should do going forward? It is obvious that nothing remotely resembling a democratic process exists for any of these questions. And yet most people, if asked (which we never are!), would probably agree that a democratic process would be appropriate when it comes to determining the texture and purpose of our lives.
Or as Jathan Sadowski puts it in his new and very valuable book The Mechanic and the Luddite: A Ruthless Criticism of Technology and Capitalism:
… risk governance transforms the public into janitors cleaning up the messes of corporations, militaries, police forces, and others who would rather shoot first and never ask questions later. The default is to allow capital to innovate without needing to ask for permission—or at least to move forward with as few imposed guardrails as possible. (p. 166)
As I learned of the long and admirable history of labor agitation and organizing, with its many epic battles and violent confrontations, I concluded that unions as we know them are as much a part of the problem as any other institution in our society. Not because the unions are guided by bad people (though too often they are!), or even that the urge to unionize itself is misguided. I’m in favor of people getting organized!
But the logic of AFL-CIO trade unionism, born in the 1935 Wagner Act and massively curtailed by the 1947 Taft-Hartley Act and the ensuing McCarthyite Red Scare, is to narrow union goals to contractual agreements on wages and benefits, and to ensure a baseline of safety standards for workers. (Not that the miserable condition of workplace safety and health across U.S. factories, refineries, offices, campuses, and highways is anything to be proud of!) Unions had already conceded, during the bitter 19th-century battles between capital and labor—when capital had all the power of the state and its police and military, in addition to private armies, at its disposal in that class war—the owners’ right to decide how to organize factories, which technologies to implement, and so on. More recently, unions have capitulated to decades of neoliberal cutbacks, deindustrialization, automation, and globalization, and ultimately presided over their own political demise, unwilling to break out of the legal shackles that prevent workers from exercising the power they (should) have over economic life. For all its limitations, the unionization wave of the 1930s was at least based on widespread workplace occupations, secondary strikes and picket lines, boycotts, etc. All those effective techniques were specifically made illegal by Taft-Hartley in 1947.
If such militancy still existed and could prevail against the inevitable repression that the state and corporations would bring to bear, it’s still not clear that the working class, or its representative institutions, would have much desire to wrest control over decisions about technology, production, or the very structure of the economy from the Lords of Capital. Perhaps with the coming disruptions that the Trump regime will impose on business, government, international borders, etc., not to mention the expected all-out assault on nonprofit organizations and anyone with even the vaguest of center-left politics, social opposition and revolt will be thrown back to basics. That is, finding power where it exists and can be wielded: at work, on the roads, at chokepoints throughout the just-in-time global economy. Once people begin to assert themselves in that direct way again, assuming they do, it will be long overdue to take up the deeper predicament we face: a profoundly unjust and uncontrollable technosphere, one that is using all the tools of behavior modification to keep people down and blaming each other for a fucked-up capitalist world.
In the three books I’m going to discuss in this post, a process is unveiled, starting with early industrialization at the beginning of the 1800s, that has made the introduction of new technologies remarkably undemocratic. Technologies chosen by vested interests to protect and expand their power and wealth are trotted out as inventions and innovations, but at no point are everyday people given a chance to evaluate the purpose, value, or potential consequences (good or bad)—not even elected representatives in government are given the opportunity. Any effort to address the consequences of a given technology or invention, no matter how life-altering it turns out to be, comes well after the fact and is treated by some as illegitimate regulation or an impediment to their ability to profit freely. Obviously the current “large language model” system of artificial intelligence (AI) is a good example of a potentially powerful implementation of new software that might have far-reaching consequences in multiple areas.
In Happy Apocalypse: A History of Technological Risk the French historian Jean-Baptiste Fressoz provides a very useful look back at key technological introductions and the social conflicts that arose in the early 1800s. He helpfully frames his discussion with the term disinhibition by which he means two sides of the social integration of new technologies: contemplating danger and then normalizing it. He uses several key examples to show how regulations and safety standards and governmental authorizations were all used to legitimize technological faits accomplis, or technologies that were already being used, regardless of their demonstrably negative or risky qualities.
From a historiographical point of view, his work is doubly interesting because he is seeking to give voice back to the losers of history, to “reconstruct frameworks of intelligibility that their defeat rendered invisible. In so doing, it becomes clear that the opponents of these projects were not siding against innovation, but rather for their environment, their safety, their jobs, and for the preservation of forms of life they considered valuable.” (p. 8) Jathan Sadowski describes the way technologies are actually implemented in the 21st century: “Rather than a politics of refusal, we are given an ethics of acceptance. The driving concern is: How do you get people to trust a technology, integrate it into their lives, or just look the other way and not raise a fuss?” (p. 45)
The Luddites in the 1810s in England are the best example of a dynamic and widespread revolt against new looms and spinning machines that they knew would make their lives worse. No other group or movement has had a bigger impact historically on the question of technology and its social acceptance than the Luddites. Sadowski quotes the great historian of technology and labor processes David Noble saying, “the Luddites were perhaps the last people in the West to perceive technology in the present tense and to act upon that perception.” (p. 43)
Fressoz offers a different perspective, going back to examine several key technological introductions that were not immediately accepted. In the 1700s the idea of vaccination made its first appearance. Curiously, Fressoz points out that “Inoculation and masturbation belong to one and the same moment in the naturalization of morality,” citing publications against masturbation in 1718 followed by the first inoculations in England in 1721. The same thing repeated itself in France decades later in 1754 when an inoculation controversy erupted in Paris followed six years later by a major work denouncing masturbation. Supporters of inoculation against smallpox were the same writers who opposed the enormous sin of masturbation—they argued that the “least risk” should guide behavior and by then, there were statistical proofs available that fewer people died in populations that had been inoculated. But since the consequences of masturbation would fall on one in the afterlife, it could only be presumed that the risk was much greater! All of this is part of what Fressoz identifies as the creation of the “calculating subject” which itself was at the heart of a political project:
It seemed to be an essential condition for the functioning of a society made up of sovereign individuals: unlike moral and religious principles, which are irreducibly plural and contradictory, and unlike virtue, pity or benevolence, the ability to calculate seemed to be the only faculty sufficiently shared to form a broad political community and a consensus on laws. Once the transcendental foundations of the social order based on absolute monarchy had been revoked, it appeared that autonomous individuals could be governable, provided that they became calculating subjects. (p. 28) …
Historians have shown how liberal political philosophy was ultimately an anthropological project aimed at creating an egotistical, calculating subject, as against the traditional values of gift, sacrifice and honor. We might add they have failed to see that in return the homo economicus demanded a world tailored to his own standards – rethought, reconstructed and redefined so that the quest for the greatest utility could be freely exercised. At the beginning of the nineteenth century, science and technology adjusted ontologies and objects with the aim of creating an ‘economic world’: a mundus economicus. (p. 220)
This is foundational to our own epoch, where AI is presented as a transformative technology that can address nearly any question, nearly any task, but in EVERY case requires the reduction of social and biological complexity to its computational shadow, parallel to the reduction of complex social humans to mere “calculating subjects.” This allows AI’s solutionist ideology to provide “answers” to questions it has itself asked, which seems successful unless you blink several times and shake off the haze that self-referential techno-boosterism has blinded us with.
Dan McQuillan, a colleague in a network of Luddite-inspired tech critics I’ve been communicating with, has written a great short book that packs a wallop: Resisting AI: An Anti-Fascist Approach to Artificial Intelligence was published before the frenzy over ChatGPT and its competitors erupted in 2023. Here he offers one of the most cogent definitions of the ideological aura that cloaks AI:
AI is a form of scientism. It uses the aura of science to perpetuate the idea that its abstract mathematical models provide a reliable way of knowing, and promotes a reductive definition of truth that is claimed as inherently superior to lived experience. The scientism of AI allows alternative perspectives to be blunted or dismissed as subjective, and it reinforces the notion of representations that stand outside and above the context which they are used to pronounce judgment on. But, in practice, AI acts as an epistemic power grab that conceals politics and ideology under its machinic opacity. (p. 51) … AI is the steam hammer of limited imagination, a solution to problems defined in administrative offices and enforced through predictive boundaries. If our epistemology is derived only from the world as it is currently realized, it misses the horizon of plurality and difference. (p. 115)
Back to the early 19th century, when a growing belief that cowpox was a benign virus that could inoculate humans against smallpox became common sense. The method at the time for vaccine preservation? Foundlings in orphanages were deliberately infected to produce and transport the lymph until the end of the 1800s. An 1809 decree designated 25 orphanages as “vaccine depots,” responsible for maintaining the vaccine. Thanks to careful management of complaints and accidents up and down the vaccine system, a general acceptance of the vaccine was established. Learning this made vaccine skepticism, which has been present since the very beginning of vaccinations, a bit more understandable, even if I’m still an enthusiastic taker of all sorts of vaccines now.
Fressoz details how the former use of kelp off the French coast to create soda ash (needed for armaments) gave way to new chemical factories that created clouds of noxious smells and plenty of toxic waste in their production processes. As people in upscale neighborhoods of Paris objected, the chemical factories carried out a two-pronged strategy by moving their facilities away from densely populated, wealthier areas and embarking on a public relations campaign claiming that the smell of a chemical factory was refreshing and evidence of good hygiene! We take for granted that in the 21st century we are especially attuned to the ecological consequences of modern production, but Fressoz offers a different view:
As industrialisation advanced, its attendant pollution and massive use of natural resources radically transformed its surroundings. All this took place within the theoretical framework of climate medicine. So the problem posed to historians is not a matter of understanding how so-called environmental awareness ultimately emerged. Quite the opposite: the question is how to understand the schizophrenic nature of industrial modernity, which continued to conceive man as a product of his surroundings, even as it allowed him to alter and destroy them. (p. 81)
Already in the post-French revolution period the state began regulating chemical industries, but the purpose was not solely to protect the public from noxious smells or poisonous substances. “The new regulation was, above all, a liberalization and commodification of the environment, adapted to the emergence of industrial capitalism.” (p. 110) … “… the fact that a financial form of pollution regulation was already established in the early nineteenth century calls into question the pertinence of the current dominant approach to environmental problems.” (p. 150)
It is clear that this mode of environmental regulation has not prevented pollution; on the contrary, it has historically accompanied and justified the degradation of the environment. In fact, there is an intrinsic logic to this regulation, the consequences of which have been apparent since the 1820s. The principle of compensating for damage, combined with the imperative of economic profit, produced three results: first, the hiring, for the most dangerous tasks, of the most vulnerable populations, whose hardships could remain socially invisible; second, the concentration of production and pollution in a few localities; and third, the choice to situate them, in particular, in poor territories which lacked the social and political resources that would increase the value of environmental compensation. We can only conclude that this logic still holds today, and that it has undoubtedly become even more pronounced as a result of economic globalisation. (p. 150)
In a lengthy discussion of the development of steam engines and gas boilers, Fressoz is able to show the contrasting approaches to safety and regulation in France and England.
The ways in which technological risk was managed in France and Britain were diametrically opposed. In France, it was obvious that a regulatory solution was needed. The question the government put to the académicien-experts was: What should the content of the decree be? In Britain, the parliamentary select committee asked: Is it really necessary to enact a law? And the experts’ answer was unanimously negative: competition, progress and improved safety all went hand in hand. After all, did gas leaks not cost companies money? Regulatory intervention thus risked undermining the safety of the gas installations. The optimization of technology, even from the point of view of safety, would be achieved through the market and competition. (p. 173) … In France, by contrast, maritime and land boilers, as well as gasometers, were subjected to precise safety standards from 1823 onwards. British industrialists, engineers and MPs considered such a solution rather presumptuous. To their eyes, the safety of technical devices ultimately depended on innovation, and it was better to give free rein to the inventiveness of profit-driven engineers, rather than rely on standards that would become obsolete as soon as they were issued. (p. 153)
The basic approach in the U.S. to the introduction of new technologies, repeating itself before our eyes with the rapid deployment of Large Language Model/AI software, follows the British approach laid down two centuries ago. Those who call for a more cautious and thoughtful evaluation of technologies, an application of the precautionary principle, or—heaven forbid!—government regulation, are dismissed as promoters of inefficiency or as impediments to the competitiveness of domestic companies.
As Fressoz reveals, the development of technical standards also addressed a political problem. By identifying “objects that could not cause accidents all by themselves, the standard made it possible to systematically direct blame towards humans”… holding them responsible for accidents. But by 1898, when a general liability law passed the French parliament, insurers could freely sell compulsory policies to employers to insure all workers collectively, since accidents were an inevitable feature of modern production. “A powerful legal, economic, insurance and ‘reforming’ discourse on accidents had succeeded in making morally acceptable the normalization of accidents and the integration of the worker’s body into the company’s economic calculations.” (p. 211) Or as Sadowski describes it: “New methods of collecting statistical data [in the early 1900s] showed that there was a “grinding regularity of industrial accidents” in workplaces. The conclusion drawn by both corporations and governments was that these hazards were inevitable and natural, a grim reality of industrial society and the price of progress; it had to be managed, but could never be changed.” (p. 183)
From the beginning of the 19th century to the turn of the 20th, the fundamental principles of our objectification, our reduction to simple bearers of time for sale, were established and perfected. What begins as a project of creating new “calculating subjects” suitable for the emerging market societies ends with the objectification of workers as mere means of production, equivalent to the machines that so often maimed and killed their human attendants. Today this same logic reappears in the bureaucratic phrases employed by regulatory agencies, such as “acceptable daily intakes” or “maximum permissible concentrations,” for substance exposures that have no truly safe threshold. All that is known is that, statistically, the number of people who get sick or die within the limits designated by those dry phrases is considered “acceptable.”
Dan McQuillan’s book came out before the current AI frenzy. But once you read his book it’s clear that the products launched since then are mere continuations of what came before, regardless of the breathless hype that has surrounded the AI bubble for over a year now. (For a thorough debunking of most of the hot air surrounding the investor craze for “anything-plus-AI,” I strongly recommend checking out Ed Zitron’s Better Offline podcast or newsletter.) McQuillan is not interested in the companies or specific software packages that have appeared in the last couple of years. His analysis is a deeper look at the underlying logic and the assumptions that are baked into computational systems. The AI software that is being sold to us is a type of machine learning:
… rather than the assimilation of novel concepts based on accumulated experience and common sense, machine learning is a set of mathematical operations of iteration and optimization. While machine learning has some clever mathematical tricks up its sleeve, it’s important to grasp that it is a brute force mathematical process. There’s no actual intelligence in artificial intelligence. (p. 11)
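McQuillan’s point can be made concrete with a toy sketch (my own illustration, not from his book): “learning” a straight line from data is nothing more than iteration and optimization, repeated arithmetic that nudges two numbers until an error measure shrinks.

```python
# A purely illustrative sketch: "machine learning" reduced to its core of
# iteration and optimization. Fitting a line y = w*x + b by gradient
# descent is repeated arithmetic -- no comprehension, just a grinding loop.

def fit_line(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # adjust the two numbers slightly and repeat
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# data generated by y = 2x + 1; the loop "discovers" those coefficients
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
```

Running this converges on w ≈ 2 and b ≈ 1, yet nowhere in the process is there any notion of what a line is, only brute-force optimization of the kind McQuillan describes.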
In my perambulations through labor history I learned about the Mechanization & Modernization Agreement of 1960 signed by the longshoremen’s union in San Francisco. I urge you to read more about it here. Without that agreement most of what we know as globalization, with its attendant destruction of the industrial working (middle) class in the U.S. and large parts of Europe, would not have been possible. Interesting to find McQuillan drawing an analogy with it:
The new era of machine learning means that a similar overarching logic to that which revolutionized global supply chains, through the abstraction and datafication made possible by containerization, can now be applied directly to everyday life. (p. 15) … The long history of statistical reasoning shows how state and non-state institutions have always used statistical methods to turn the diversity of lived experience into a single space of equivalence, ready for distant decision-making. The statistical transformations of AI are the latest iteration in this process of rendering the world ready for algorithmic governance. (p. 19)
And by all accounts we are well on our way to being subsumed by algorithmic governance, kicking and screaming, or with a subdued, glazed look as we try to tear our eyes from our devices. To a great extent it doesn’t matter whether we are sullen obeyers or angry rebels, because the extensive system pays no attention to our affective state unless it’s to sell us a product. In The Mechanic and the Luddite, Sadowski describes in his own way the fragmentation and objectification that each of these critics is underscoring:
… in the computer vision papers and patents, any “entity” the system detects, identifies, and tracks is called an “object.” This includes cars, cats, trees, tables, and so on, but also the “object” most often targeted by these technologies: humans. A world made into data is a world without subjects. The engineers claim the position of objectivity while their human targets are rendered into objects. (p. 86) … The whole point is to turn integrated human subjects into fragmented data objects. (p. 88) … In classic engineering fashion, every qualitative concern can be ignored, reframed, and addressed with quantitative solutions. (p. 98)
McQuillan describes the actual work going on behind the scenes on AI and other software “services,” a system that Sadowski cleverly labels “Potemkin AI,” after the legendary Russian villages of realistic façades with nothing behind them.
Platform work, online or offline, comes without the protections, such as sick pay, holiday entitlement, pensions or health and safety, that were hard won by the historical struggles of organized labor. As well as the decomposition of individual subjectivity, there’s a fragmentation of the kind of community and solidarity that has historically empowered resistance through strikes and other industrial actions. AI is a futuristic technology that helps to push back the conditions of work by a century or more. (p. 53) It should be no surprise, then, if AI carries forward into everyday life what sociologist Randy Martin called the social logic of the derivative—a logic of fragmentation, financialization and speculation. This is a precaritizing logic that depends on the decomposition of that which was previously whole (the job, the asset, the individual life) so that operations can be moved into a space that’s free of burdensome attachments to the underlying entity, whether that’s the fluctuating price of actual commodities or the frailty of the actual worker. (p. 55)
Here’s Sadowski:
The reality is a hybrid relationship where workers use technology to create value, managers use technology to exploit labor, and entrepreneurs use technology to erase the existence of the other two groups. I call this way of building and presenting such systems—whether analog automatons or digital software—Potemkin AI. (p. 106) … To be fair, OpenAI is only doing exactly what every other company trying to train AI systems is doing. They confronted a complex problem by turning that problem into dirty jobs done dirty cheap by an army of workers. That’s how capitalism has been brute forcing growth and progress for hundreds of years. This is also why the global market for data annotation—that is, the outsourced labor and tools to facilitate that labor—is projected to reach over $13 billion by 2030. (p. 110-111) … in most applications, AI is still an unsatisfying reality, if not a total fantasy. Potemkin AI is role-play. It’s people masquerading as soulless systems. It’s the ideal of being served without having to acknowledge the unpleasant existence of servants… Potemkin AI is seduction based on deception… It is the most capitalist thing imaginable to construct complex tangles of transnational infrastructures for circulating finance, information, and commodities, all to perpetuate a fetish for dehumanization. (p. 116)
AI systems represent the ultimate victory of mindless bureaucracy, precisely because they’ll find their most pernicious application in the “streamlining” of bureaucracies by replacing humans who at least have the possibility of thinking and reacting to the real context of any given situation. McQuillan captures it well: “In institutions with the power to cause social harms, the threat of AI is not the substitution of humans by machines but the computational extension of existing social automatism and thoughtlessness.” (p. 63) He earlier cites Max Weber’s seminal analysis of bureaucracy, recognizing that its deep purpose is to “valorize indifference as the means to effective implementation of policy… it’s through distancing and indifference that AI amplifies the most harmful behaviors of the bureaucratic state.” (p. 60)
Probably the most horrifying activity of the bureaucratic state is its casual division of the population into those who will live and those who will die. McQuillan spends a chapter on this notion of “necropolitics,” which he considers a key function of the new AI-augmented state power as it tilts ever more clearly toward a new kind of modernized fascism.
While many in the science and statistical communities would like to argue that scientific achievements can be wholly disentangled from the beliefs of their originators, what we are pursuing here is the way that political positions are emerging around AI which can be traced directly back to the [white] supremacist agenda that statistical methods were originally developed to serve. (p. 88) … The problem with this is not only the instrumentalist allocation of life chances but the question of who gets to decide what kinds of life are worth living. As philosopher of race and tech Ruha Benjamin puts it, ‘a belief that humans can be designed better than they are’ is really ‘a belief that more humans can be like those already deemed superior.’ (p. 91) … Ultrarationalism and neoreaction are ideologies that keep AI aligned with White supremacy, but they don’t exhaust its full potential for amplifying far-right politics. (p. 97) … We can’t rely on past images of fascism to alert us to its re-emergence because fascism won’t do us the favor of returning in the same easily recognizable form, especially when it finds new technological vectors. While AI is a genuinely novel approach to computation, what it offers in terms of social application is a reactionary intensification of existing hierarchies. (p. 99)
It’s quite easy to succumb to a deeply pessimistic perspective right now. Trump will be inaugurated in a few days, and the predictable whirlwind of malicious declarations and venal, self-serving wealth grabs will ensue. A frightful Theater of Cruelty will be enacted in countless venues by thousands of petty authoritarians who will feel fully empowered to act out their sadistic impulses on the most vulnerable people they can find. Fighting this onslaught is going to be a slog at best.
The role of the AI hype cycle in cementing a long-established process of wealth extraction through speculation and outright gambling, while real, may not be the main concern. In many areas, AI chatbots and “agents” finely tuned to specific domains of knowledge will likely be deployed to overwhelm us with their machinic indifference and impersonal comfort with deciding the fate of real people. Wresting control away from such systems will only get harder as time goes on. Promoting a completely different way to think and act when it comes to technological change, the structure of government, and the rights and powers of economic entities (corporations et al.) is really our only way forward to a life that is free and self-directed. Sounds rather Pollyannaish to say it, but there it is. Dan McQuillan goes considerably further in his concluding chapters, laying out his vision of a “post-normal science” based on peer communities that would take into account much more than the latest benchmark testing against fraudulently designed “tests”:
Starting from the principle of care is a counterproject to AI’s thoughtlessness and ‘view from nowhere.’ The contrast between current AI and matters of care is not only instructive on the levels of values and epistemology, but also directs our attention back to questions of labor and political economy. Care is the invisibilized labor that is an inevitable consequence of our interdependence. It’s not only AI that is built on invisible labor and ‘ghost work’—all economically productive activity is sustained by someone else doing the cleaning, child-rearing and sustaining of social bonds and shared understandings. (p. 116) … Mutual aid manifests the ontology of new materialism: we act for each other because we recognize, at some level, that we are not absolutely divided and separate, that we co-constitute each other in some important way. Mutual aid counters social separation both concretely and ontologically; that is, both as a practical tactic and as a proposition that the world is itself constituted by relations of mutual interdependence. The mobilization of mutual aid is a direct counter to algorithmic segregation and carelessness. (p. 120)
Going even further he extrapolates from this counter-logic of mutual aid to propose workers’ councils (“the self-organized and unmediated engagement of workers in the direct transformation of their conditions”) and people’s councils to build a counter-power to the hegemony of the money and technology systems that dominate our lives now. Such gatherings, advancing their own rights to determine what is important and how we should adopt or reject any given technology or system, make “space for previously undervalued knowledge and expertise to be mobilized.” The ultimate goal described by McQuillan is the expansion of the commons, the heart of an anti-fascist agenda. And as he usefully reminds us, “AI is already part of the system’s ongoing violent response to the autonomous activity of ordinary people.”
Dan McQuillan has gone further than most in critiquing the deep logic of AI and the current wave of machine-learning-based technological change. But even more importantly he lays out a positive vision of a completely different kind of techno-politics that systematically makes visible all the hidden and denied work that still undergirds modern life. In describing a politics based on popular councils at work and in neighborhoods, rooted in care and focused on growing a commons from which we all become wealthier, he’s done us a great service.
Jathan Sadowski has also written a vital book that presents a thoroughgoing Marxist critique of technology. Unlike Yanis Varoufakis (whom I heard on Upstream), who argues in his most recent book that capitalism is over and that we’ve entered a new techno-feudalism based on Cloud Serfs and rentierism (many of his observations are sharp and worth taking into account), Sadowski argues that data is fundamentally value, the driving force of capitalist expansion during the past two and a half centuries. The current embrace of data extraction and accumulation is merely the new form of value that has been with us all along. Unlike McQuillan, he does not depart from his full-throated critique to offer a political agenda to counter what’s underway. Instead he underscores his anguish at the poverty of our general discourse:
Rather than a politics of technology, we are left with innovation fetishism and capitalist realism, which shuts down entire ways of understanding our world, imagining potential worlds, and building new worlds. We scarcely realize just how impoverished—politically, technologically, metaphysically—we have become. (p. 210)
I agree with him. I’ve thought many times during the past decades about how reduced our political and philosophical discussions have become, how narrow our concerns are. And yet, my own view continues to focus on how deep and wide our refusal needs to be to have any hope of challenging the pernicious logic that shapes our lives. From rejecting the technologies that are being promoted by self-serving billionaires to rejecting the logic that accepts as normal that we are expected to sell ourselves to someone else’s agenda in order to survive, I insist that we all have to go a lot farther than we’ve even begun to contemplate. Maybe the clear-cutting of social norms that’s coming will help us abandon the institutions and expectations we’ve clung to as the best we can hope for. Optimism of the will, pessimism of the intellect… yup.