Opponent: Evergreen Valley Independent SE | Judge: Jeong-Wan Choi
1ac - whole res 1nc - spec rob spec util kant nc consult icj case 1ar - all 2nr - spec util consult icj case 2ar - all
Apple Valley
6
Opponent: Eagan AE | Judge: Rex Evans
1ac - whole res 1nc - disclose tournament names multiple fw warrants bad defend general principle queerpess icj cp hedge case 1ar - all vague alts bad 2nr - defend general principle vague alts bad 2ar - all
Apple Valley
3
Opponent: Los Altos BF | Judge: David Robinson
1ac - badiou 1nc - espec usa util nc icj cp telugu cp case 1ar - all 2nr - util nc icj cp case 2ar - all
1ac - jaeggi 1nc - community spec no general principle util nc business confidence da hedge case 1ar - all lying iv rvi 2nr - community spec no general principle lying iv rvi 2ar - lying iv
Blue Key
6
Opponent: American Heritage Broward JA | Judge: Tajaih Robinson
1ac - deleuze 1nc - combo shell coastline spec tt nibs deleuze pedophilia emotivism nc hedge case 1ar - all iv on deleuze pedophilia 2nr - tt nibs iv case 2ar - all
1ac - lay 1nc - tt nibble nc spooky skep terrorism da transparency measures cp util nc case 1ar - all 2nr - all 2ar - all
CPS
2
Opponent: Ayala AM | Judge: Ruchir Rastogi
1ac - ptd 1nc -
CPS
4
Opponent: Harker SY | Judge: Parth Shah
1ac - debris 1nc - t-unjust i-law reg case 1ar - all condo 2nr - i-law reg case 2ar - all
CPS
5
Opponent: Harker AS | Judge: Vanessa Nguyen
1ac - debris 1nc - t-unjust i-law regs cp xi lashout da case 1ar - all condo agent fiat bad 2nr - condo agent fiat bad xi lashout da i-law regs cp case 2ar - case xi-lashout da i-law regs cp
Harvard
1
Opponent: Lake Highland Prep YA | Judge: Eric He
1ac - cap 1nc - new affs bad sbsp cp global concon cp eval consequences hedge case 1ar - all 2nr - new affs bad 2ar - all
Harvard
4
Opponent: Scarsdale KS | Judge: David Herrera
1ac - deleuze 1nc - must defend implementation tournament name disclosure must record kant nc tt nibble nc case 1ar - all 2nr - tournament names disclosures kant nc case 2ar - all
Harvard
5
Opponent: Hunter AH | Judge: Henry Eberhart
1ac - whole res 1nc - must open source space elevators cp space deterrence da space innovation da regulation cp v2 case 1ar - all 2nr - space elevators cp case 2ar - all
Jack Howe
2
Opponent: Brentwood BB | Judge: Vanessa Nguyen
1ac - cap 1nc - solvency advocate espec infra single payer cp nukes cp hedge case 1ar - all 2nr - infra single payer cp 2ar - all
Jack Howe
3
Opponent: Ayala AM | Judge: Srinidhi Yerraguntala
1ac - synthetic biology 1nc - new affs bad csa consult who infra da hedge case 1ar - all 2nr - new affs bad 2ar - all
Jack Howe
5
Opponent: Modernbrain AY | Judge: Amanda Ciocca
1ac - fem psycho 1nc - t fw cap case 1ar - all 2nr - cap case 2ar - all
Lex
4
Opponent: Ridge SN | Judge: Daniel Shahab Diaz
1ac - space debris 1nc - spec standard must disclose contact info kant nc china pic tt nibs case 1ar - all multi shell bad ivi 2nr - spec standard kant nc ivi multi shell bad 2ar - all
Lex
6
Opponent: Summit MR | Judge: Neda Bahrani
1ac - trans performance 1nc - t fw telugu cp narratives cp cap k case 1ar - all ivis 2nr - ivis cap k case 2ar - all
Lex
2
Opponent: Bridgeland PT | Judge: Brett Cryan
1ac - kant 1nc - cx checks bad outer space spec util nc space deterrence da tt nibs hedge case 1ar - all 2nr - outer space spec 2ar - case outer space spec
Loyola
2
Opponent: Dulles TY | Judge: Joseph Georges
1ac - nailbomb 1nc - nibs hidden a prioris bad disclose spikes hedge case 1ar - all 2nr - disclose spikes 2ar - all
Loyola
6
Opponent: Harvard Westlake EJ | Judge: Neville Tom
1ac - covid 1nc - tt nibs climate patent da infrastructure da csa t-reduce hedge case 1ar - all 2nr - csa infrastructure da 2ar - all
Palm Classic
1
Opponent: Harker SY | Judge: Felicity Park
1ac - china 1nc - t-nebel outer space spec asteroid mining cp chinese deflection da pan reps k case 1ar - all 2nr - pan reps k t-nebel 2ar - all
Palm Classic
4
Opponent: Immaculate Heart BC | Judge: Joseph Barquin
1ac - mars colonization 1nc - must not spec implementation appropriation spec kant nc case 1ar - all 2nr - must not spec implementation 2ar - all
Palm Classic
5
Opponent: Harker NA | Judge: Lena Mizrahi
1ac - china 1nc - t-nebel asteroid mining cp us-china alliance cp asteroid deflection da adr cp 1ar - all 2nr - asteroid mining cp adr cp case 2ar - all
Palm Classic
Triples
Opponent: Catonsville AT | Judge: James Stuckert, Truman Le, John Boals
1ac - mega constellations 1nc - t-appropriation disclose before flip hacking da starlink ag da adr cp taxation cp case 1ar - all 2nr - t-appropriation 2ar - all
Palm Classic
Doubles
Opponent: Sequoia AS | Judge: Annabelle Long, Spencer Paul, Jonathan Meza
1ac - megaconstellations 1nc - starlink ag da regulation cp t-appropriation case 1ar - condo all 2nr - case regulation cp condo 2ar - case regulation cp
Princeton
1
Opponent: Ardrey Kell RG | Judge: Tara Riggs
1ac - whole res 1nc - jurisdiction spec court politics da icj cp suez canal pic supertrees cp 1ar - all pics bad 2nr - pics bad suez canal pic 2ar - pics bad
Princeton
3
Opponent: Ardrey Kell SG | Judge: Mark Early
1ac - lay 1nc - lay 1ar - lay 2nr - lay 2ar - lay
Valley
1
Opponent: Harker PG | Judge: TJ Maher
1ac - eu 1nc - csa must read rob t-reduce pharma innovation da kant nc case 1ar - all 2nr - csa kant nc 2ar - all
Valley
4
Opponent: Mountain House ES | Judge: Rohit Lakshman
1ac - evergreening 1nc - csa say please oceans rob spec tt nibble nc kant nc hedge case 1ar - all combo shell 2nr - tt nibble nc oceans say please combo shell 2ar - combo shell oceans
Valley
6
Opponent: Walt Whitman EY | Judge: Breigh Plat
1ac - kant 1nc - must not defend as a general principle combo shell hobbes nc hedge case 1ar - all combo shell 2nr - must not defend as a general principle combo shell hedge 2ar - case must not defend as a general principle combo shell hedge
1ac - covid waiver 1nc - t-reduce spec reductions infra da anonymous donations cp climate patents da case 1ar - all 2nr - climate patents da anonymous donations cp case 2ar - all
Opponent: Murphy Independent AW | Judge: Jared Burke, Conal Thomas-McGinnis
1ac - evergreening 1nc - espec cites theory csa climate patents da fuckshit nc case 1ar - all 2nr - cites theory fuckshit nc 2ar - all
Voices
2
Opponent: Notre Dame AR | Judge: Felicity Park
1ac - covid waiver 1nc - t-reduce must have solvency advocate TRIPS info-sharing distribution cp climate patents da case 1ar - all condo 2nr - t-reduce must have solvency advocate condo 2ar - t-reduce must have solvency advocate condo
Voices
4
Opponent: Immaculate Heart RR | Judge: Quentin Clark
1ac - CRISPR 1nc - effects t consult who gene editing regulation cp case 1ar - all condo bad consult bad 2nr - condo bad consult bad gene editing regulation cp case 2ar - case gene editing regulation cp
Voices
5
Opponent: Prospect ST | Judge: Vishan Chaudhary
1ac - covid waiver 1nc - t-reduce csa espec eliminate nukes climate patents da anonymous donation cp trips info-sharing cp hedge case 1ar - all multi shell bad 2nr - climate patents anonymous donation cp case 2ar - all
Voices
Doubles
Opponent: Archbishop Mitty AS | Judge: Quentin Clark, Gordon Krauss, Samantha McLoughlin
1ac - data exclusivity 1nc - csa effects t infrastructure da kant nc hedge case 1ar - all 2nr - csa 2ar - case csa
info
1
Opponent: dhruv | Judge: yesh
cites
Cites
0 - Content Warnings
Tournament: info | Round: 1 | Opponent: dhruv | Judge: yesh Let me know before round if you require any content warnings or other adjustments.
Please no graphic descriptions of physical violence or rape.
Tournament: info | Round: 1 | Opponent: dhruv | Judge: yesh
Loyola - Loyola Invitational
Jack Howe - Jack Howe Memorial Tournament
Valley/Valley RR - Mid America Cup (Note - Same tabroom name, different event)
Voices - Nano Nagle Classic and Nano Nagle RR
Blue Key - Florida Blue Key Speech and Debate Tournament
Apple Valley - Apple Valley MinneApple Debate Tournament
Princeton - Princeton Classic
CPS - College Prep LD Invitational
Lex - Lexington Winter Invitational
Palm Classic - Palm Classic
Harvard - 48th Annual Harvard National Forensics Tournament
2/19/22
1 - CP - Narratives
Tournament: Lex | Round: 6 | Opponent: Summit MR | Judge: Neda Bahrani 3 Counterplan text: we endorse the entirety of the aff minus their use of narratives. To clarify, using personal narratives is bad. We endorse the content of their message, but we reject their use of narratives as a means to express it. Narratives are violent – they force the judge to compare different people’s experiences and stories, which requires the judge to quantify lived experiences and suffering, which causes oppression olympics and violence when someone is told that their narratives are not good enough
1/16/22
1 - CP - Telugu
Tournament: Apple Valley | Round: 3 | Opponent: Los Altos BF | Judge: David Robinson Cites are broken - check open source
11/6/21
1 - Evaluate Consequences
Tournament: Harvard | Round: 1 | Opponent: Lake Highland Prep YA | Judge: Eric He 4 The role of the ballot is to evaluate the consequences of the plan – anything else is self-serving, arbitrary, and begs the question of the resolution Extinction must be relevant given inevitable moral uncertainty Pummer 15 Theron, Junior Research Fellow in Philosophy at St. Anne's College, University of Oxford. “Moral Agreement on Saving the World” Practical Ethics, University of Oxford. May 18, 2015 AT There appears to be a lot of disagreement in moral philosophy. Whether these many apparent disagreements are deep and irresolvable, I believe there is at least one thing it is reasonable to agree on right now, whatever general moral view we adopt: that it is very important to reduce the risk that all intelligent beings on this planet are eliminated by an enormous catastrophe, such as a nuclear war. How we might in fact try to reduce such existential risks is discussed elsewhere. My claim here is only that we – whether we’re consequentialists, deontologists, or virtue ethicists – should all agree that we should try to save the world. According to consequentialism, we should maximize the good, where this is taken to be the goodness, from an impartial perspective, of outcomes. Clearly one thing that makes an outcome good is that the people in it are doing well. There is little disagreement here. If the happiness or well-being of possible future people is just as important as that of people who already exist, and if they would have good lives, it is not hard to see how reducing existential risk is easily the most important thing in the whole world. This is for the familiar reason that there are so many people who could exist in the future – there are trillions upon trillions… upon trillions.
There are so many possible future people that reducing existential risk is arguably the most important thing in the world, even if the well-being of these possible people were given only 0.001 as much weight as that of existing people. Even on a wholly person-affecting view – according to which there’s nothing (apart from effects on existing people) to be said in favor of creating happy people – the case for reducing existential risk is very strong. As noted in this seminal paper, this case is strengthened by the fact that there’s a good chance that many existing people will, with the aid of life-extension technology, live very long and very high quality lives. You might think what I have just argued applies to consequentialists only. There is a tendency to assume that, if an argument appeals to consequentialist considerations (the goodness of outcomes), it is irrelevant to non-consequentialists. But that is a huge mistake. Non-consequentialism is the view that there’s more that determines rightness than the goodness of consequences or outcomes; it is not the view that the latter don’t matter. Even John Rawls wrote, “All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy.” Minimally plausible versions of deontology and virtue ethics must be concerned in part with promoting the good, from an impartial point of view. They’d thus imply very strong reasons to reduce existential risk, at least when this doesn’t significantly involve doing harm to others or damaging one’s character. What’s even more surprising, perhaps, is that even if our own good (or that of those near and dear to us) has much greater weight than goodness from the impartial “point of view of the universe,” indeed even if the latter is entirely morally irrelevant, we may nonetheless have very strong reasons to reduce existential risk. 
Even egoism, the view that each agent should maximize her own good, might imply strong reasons to reduce existential risk. It will depend, among other things, on what one’s own good consists in. If well-being consisted in pleasure only, it is somewhat harder to argue that egoism would imply strong reasons to reduce existential risk – perhaps we could argue that one would maximize her expected hedonic well-being by funding life extension technology or by having herself cryogenically frozen at the time of her bodily death as well as giving money to reduce existential risk (so that there is a world for her to live in!). I am not sure, however, how strong the reasons to do this would be. But views which imply that, if I don’t care about other people, I have no or very little reason to help them are not even minimally plausible views (in addition to hedonistic egoism, I here have in mind views that imply that one has no reason to perform an act unless one actually desires to do that act). To be minimally plausible, egoism will need to be paired with a more sophisticated account of well-being. To see this, it is enough to consider, as Plato did, the possibility of a ring of invisibility – suppose that, while wearing it, Ayn could derive some pleasure by helping the poor, but instead could derive just a bit more by severely harming them. Hedonistic egoism would absurdly imply she should do the latter. To avoid this implication, egoists would need to build something like the meaningfulness of a life into well-being, in some robust way, where this would to a significant extent be a function of other-regarding concerns (see chapter 12 of this classic intro to ethics). But once these elements are included, we can (roughly, as above) argue that this sort of egoism will imply strong reasons to reduce existential risk. 
Add to all of this Samuel Scheffler’s recent intriguing arguments (quick podcast version available here) that most of what makes our lives go well would be undermined if there were no future generations of intelligent persons. On his view, my life would contain vastly less well-being if (say) a year after my death the world came to an end. So obviously if Scheffler were right I’d have very strong reason to reduce existential risk. We should also take into account moral uncertainty. What is it reasonable for one to do, when one is uncertain not (only) about the empirical facts, but also about the moral facts? I’ve just argued that there’s agreement among minimally plausible ethical views that we have strong reason to reduce existential risk – not only consequentialists, but also deontologists, virtue ethicists, and sophisticated egoists should agree. But even those (hedonistic egoists) who disagree should have a significant level of confidence that they are mistaken, and that one of the above views is correct. Even if they were 90 percent sure that their view is the correct one (and 10 percent sure that one of these other ones is correct), they would have pretty strong reason, from the standpoint of moral uncertainty, to reduce existential risk. Perhaps most disturbingly still, even if we are only 1 percent sure that the well-being of possible future people matters, it is at least arguable that, from the standpoint of moral uncertainty, reducing existential risk is the most important thing in the world. Again, this is largely for the reason that there are so many people who could exist in the future – there are trillions upon trillions… upon trillions. (For more on this and other related issues, see this excellent dissertation). Of course, it is uncertain whether these untold trillions would, in general, have good lives. It’s possible they’ll be miserable.
It is enough for my claim that there is moral agreement in the relevant sense if, at least given certain empirical claims about what future lives would most likely be like, all minimally plausible moral views would converge on the conclusion that we should try to save the world. While there are some non-crazy views that place significantly greater moral weight on avoiding suffering than on promoting happiness, for reasons others have offered (and for independent reasons I won’t get into here unless requested to), they nonetheless seem to be fairly implausible views. And even if things did not go well for our ancestors, I am optimistic that they will overall go fantastically well for our descendants, if we allow them to. I suspect that most of us alive today – at least those of us not suffering from extreme illness or poverty – have lives that are well worth living, and that things will continue to improve. Derek Parfit, whose work has emphasized future generations as well as agreement in ethics, described our situation clearly and accurately: “We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy…. Our descendants might, I believe, make the further future very good. But that good future may also depend in part on us. If our selfish recklessness ends human history, we would be acting very wrongly.” (From chapter 36 of On What Matters)
2/18/22
1 - K - Capitalism
Tournament: Jack Howe | Round: 5 | Opponent: Modernbrain AY | Judge: Amanda Ciocca 2 The 1AC represents the individualist telos of liberal feminism – the turn to gender identity colludes with neoliberal austerity and disconnects itself from material struggles for economic justice. Fraser ‘13 Nancy, critical theorist, feminist, and the Henry A. and Louise Loeb Professor of Political and Social Science and professor of philosophy at The New School in New York City. 10/14/2013. “How feminism became capitalism's handmaiden - and how to reclaim it.” https://www.theguardian.com/commentisfree/2013/oct/14/feminism-capitalist-handmaiden-neoliberal pat In a cruel twist of fate, I fear that the movement for women's liberation has become entangled in a dangerous liaison with neoliberal efforts to build a free-market society. That would explain how it came to pass that feminist ideas that once formed part of a radical worldview are increasingly expressed in individualist terms. Where feminists once criticised a society that promoted careerism, they now advise women to "lean in". A movement that once prioritised social solidarity now celebrates female entrepreneurs. A perspective that once valorised "care" and interdependence now encourages individual advancement and meritocracy. What lies behind this shift is a sea-change in the character of capitalism. The state-managed capitalism of the postwar era has given way to a new form of capitalism – "disorganised", globalising, neoliberal. Second-wave feminism emerged as a critique of the first but has become the handmaiden of the second. With the benefit of hindsight, we can now see that the movement for women's liberation pointed simultaneously to two different possible futures. 
In a first scenario, it prefigured a world in which gender emancipation went hand in hand with participatory democracy and social solidarity; in a second, it promised a new form of liberalism, able to grant women as well as men the goods of individual autonomy, increased choice, and meritocratic advancement. Second-wave feminism was in this sense ambivalent. Compatible with either of two different visions of society, it was susceptible to two different historical elaborations. As I see it, feminism's ambivalence has been resolved in recent years in favour of the second, liberal-individualist scenario – but not because we were passive victims of neoliberal seductions. On the contrary, we ourselves contributed three important ideas to this development. One contribution was our critique of the "family wage": the ideal of a male breadwinner-female homemaker family that was central to state-organised capitalism. Feminist criticism of that ideal now serves to legitimate "flexible capitalism". After all, this form of capitalism relies heavily on women's waged labour, especially low-waged work in service and manufacturing, performed not only by young single women but also by married women and women with children; not only by racialised women, but by women of virtually all nationalities and ethnicities. As women have poured into labour markets around the globe, state-organised capitalism's ideal of the family wage is being replaced by the newer, more modern norm – apparently sanctioned by feminism – of the two-earner family. Never mind that the reality that underlies the new ideal is depressed wage levels, decreased job security, declining living standards, a steep rise in the number of hours worked for wages per household, exacerbation of the double shift – now often a triple or quadruple shift – and a rise in poverty, increasingly concentrated in female-headed households. Neoliberalism turns a sow's ear into a silk purse by elaborating a narrative of female empowerment.
Invoking the feminist critique of the family wage to justify exploitation, it harnesses the dream of women's emancipation to the engine of capital accumulation. Feminism has also made a second contribution to the neoliberal ethos. In the era of state-organised capitalism, we rightly criticised a constricted political vision that was so intently focused on class inequality that it could not see such "non-economic" injustices as domestic violence, sexual assault and reproductive oppression. Rejecting "economism" and politicising "the personal", feminists broadened the political agenda to challenge status hierarchies premised on cultural constructions of gender difference. The result should have been to expand the struggle for justice to encompass both culture and economics. But the actual result was a one-sided focus on "gender identity" at the expense of bread and butter issues. Worse still, the feminist turn to identity politics dovetailed all too neatly with a rising neoliberalism that wanted nothing more than to repress all memory of social equality. In effect, we absolutised the critique of cultural sexism at precisely the moment when circumstances required redoubled attention to the critique of political economy. Finally, feminism contributed a third idea to neoliberalism: the critique of welfare-state paternalism. Undeniably progressive in the era of state-organised capitalism, that critique has since converged with neoliberalism's war on "the nanny state" and its more recent cynical embrace of NGOs. A telling example is "microcredit", the programme of small bank loans to poor women in the global south. Cast as an empowering, bottom-up alternative to the top-down, bureaucratic red tape of state projects, microcredit is touted as the feminist antidote for women's poverty and subjection. 
What has been missed, however, is a disturbing coincidence: microcredit has burgeoned just as states have abandoned macro-structural efforts to fight poverty, efforts that small-scale lending cannot possibly replace. In this case too, then, a feminist idea has been recuperated by neoliberalism. A perspective aimed originally at democratising state power in order to empower citizens is now used to legitimise marketisation and state retrenchment. In all these cases, feminism's ambivalence has been resolved in favour of (neo)liberal individualism. But the other, solidaristic scenario may still be alive. The current crisis affords the chance to pick up its thread once more, reconnecting the dream of women's liberation with the vision of a solidary society. To that end, feminists need to break off our dangerous liaison with neoliberalism and reclaim our three "contributions" for our own ends. First, we might break the spurious link between our critique of the family wage and flexible capitalism by militating for a form of life that de-centres waged work and valorises unwaged activities, including – but not only – carework. Second, we might disrupt the passage from our critique of economism to identity politics by integrating the struggle to transform a status order premised on masculinist cultural values with the struggle for economic justice. Finally, we might sever the bogus bond between our critique of bureaucracy and free-market fundamentalism by reclaiming the mantle of participatory democracy as a means of strengthening the public powers needed to constrain capital for the sake of justice.
The system’s terminally unsustainable, it’s the root cause of every impact, and attempting to save it only results in extinction and scapegoating violence. Robinson ’16 (William; 2016; professor of sociology, global studies and Latin American studies at the University of California at Santa Barbara; Truthout; “Sadistic Capitalism: Six Urgent Matters for Humanity in Global Crisis”; http://www.truth-out.org/opinion/item/35596-sadistic-capitalism-six-urgent-matters-for-humanity-in-global-crisis) In these mean streets of globalized capitalism in crisis, it has become profitable to turn poverty and inequality into a tourist attraction. The South African Emoya Luxury Hotel and Spa company has made a glamorized spectacle of it. The resort recently advertised an opportunity for tourists to stay "in our unique Shanty Town ... and experience traditional township living within a safe private game reserve environment." A cluster of simulated shanties outside of Bloemfontein that the company has constructed "is ideal for team building, braais, bachelors parties, theme parties and an experience of a lifetime," read the ad. The luxury accommodations, made to appear from the outside as shacks, featured paraffin lamps, candles, a battery-operated radio, an outside toilet, a drum and fireplace for cooking, as well as under-floor heating, air conditioning and wireless internet access. A well-dressed, young white couple is pictured embracing in a field with the corrugated tin shanties in the background. The only thing missing in this fantasy world of sanitized space and glamorized poverty was the people themselves living in poverty. The "luxury shanty town" in South Africa is a fitting metaphor for global capitalism as a whole. Faced with a stagnant global economy, elites have managed to turn war, structural violence and inequality into opportunities for capital, pleasure and entertainment.
It is hard not to conclude that unchecked capitalism has become what I term "sadistic capitalism," in which the suffering and deprivation generated by capitalism become a source of aesthetic pleasure, leisure and entertainment for others. I recently had the opportunity to travel through several countries in Latin America, the Middle East, North Africa, East Asia and throughout North America. I was on sabbatical to research what the global crisis looks like on the ground around the world. Everywhere I went, social polarization and political tensions have reached explosive dimensions. Where is the crisis headed, what are the possible outcomes and what does it tell us about global capitalism and resistance? This crisis is not like earlier structural crises of world capitalism, such as in the 1930s or 1970s. This one is fast becoming systemic. The crisis of humanity shares aspects of earlier structural crises of world capitalism, but there are six novel, interrelated dimensions to the current moment that I highlight here, in broad strokes, as the "big picture" context in which countries and peoples around the world are experiencing a descent into chaos and uncertainty. 1) The level of global social polarization and inequality is unprecedented in the face of out-of-control, over-accumulated capital. In January 2016, the development agency Oxfam published a follow-up to its report on global inequality that had been released the previous year. According to the new report, now just 62 billionaires -- down from 80 identified by the agency in its January 2015 report -- control as much wealth as one half of the world's population, and the top 1 percent owns more wealth than the other 99 percent combined. Beyond the transnational capitalist class and the upper echelons of the global power bloc, the richest 20 percent of humanity owns some 95 percent of the world's wealth, while the bottom 80 percent has to make do with just 5 percent.
This 20-80 divide of global society into haves and the have-nots is the new global social apartheid. It is evident not just between rich and poor countries, but within each country, North and South, with the rise of new affluent high-consumption sectors alongside the downward mobility, "precariatization," destabilization and expulsion of majorities. Escalating inequalities fuel capitalism's chronic problem of over-accumulation: The transnational capitalist class cannot find productive outlets to unload the enormous amounts of surplus it has accumulated, leading to stagnation in the world economy. The signs of an impending depression are everywhere. The front page of the February 20 issue of The Economist read, "The World Economy: Out of Ammo?" Extreme levels of social polarization present a challenge to dominant groups. They strive to purchase the loyalty of that 20 percent, while at the same time dividing the 80 percent, co-opting some into a hegemonic bloc and repressing the rest. Alongside the spread of frightening new systems of social control and repression is heightened dissemination through the culture industries and corporate marketing strategies that depoliticize through consumerist fantasies and the manipulation of desire. As "Trumpism" in the United States so well illustrates, another strategy of co-optation is the manipulation of fear and insecurity among the downwardly mobile so that social anxiety is channeled toward scapegoated communities. This psychosocial mechanism of displacing mass anxieties is not new, but it appears to be increasing around the world in the face of the structural destabilization of capitalist globalization. Scapegoated communities are under siege, such as the Rohingya in Myanmar, the Muslim minority in India, the Kurds in Turkey, southern African immigrants in South Africa, and Syrian and Iraqi refugees and other immigrants in Europe. 
As with its 20th century predecessor, 21st century fascism hinges on such manipulation of social anxiety at a time of acute capitalist crisis. Extreme inequality requires extreme violence and repression that lend to projects of 21st century fascism. 2) The system is fast reaching the ecological limits to its reproduction. We have reached several tipping points in what environmental scientists refer to as nine crucial "planetary boundaries." We have already exceeded these boundaries in three areas -- climate change, the nitrogen cycle and diversity loss. There have been five previous mass extinctions in earth's history. While all these were due to natural causes, for the first time ever, human conduct is intersecting with and fundamentally altering the earth system. We have entered what Paul Crutzen, the Dutch environmental scientist and Nobel Prize winner, termed the Anthropocene -- a new age in which humans have transformed up to half of the world's surface. We are altering the composition of the atmosphere and acidifying the oceans at a rate that undermines the conditions for life. The ecological dimensions of global crisis cannot be understated. "We are deciding, without quite meaning to, which evolutionary pathways will remain open and which will forever be closed," observes Elizabeth Kolbert in her best seller, The Sixth Extinction. "No other creature has ever managed this ... The Sixth Extinction will continue to determine the course of life long after everything people have written and painted and built has been ground into dust." Capitalism cannot be held solely responsible. The human-nature contradiction has deep roots in civilization itself. The ancient Sumerian empires, for example, collapsed after the population over-salinated their crop soil. The Mayan city-state network collapsed about AD 900 due to deforestation. And the former Soviet Union wreaked havoc on the environment.
However, given capital's implacable impulse to accumulate profit and its accelerated commodification of nature, it is difficult to imagine that the environmental catastrophe can be resolved within the capitalist system. "Green capitalism" appears as an oxymoron, as sadistic capitalism's attempt to turn the ecological crisis into a profit-making opportunity, along with the conversion of poverty into a tourist attraction. 3) The sheer magnitude of the means of violence is unprecedented, as is the concentrated control over the means of global communications and the production and circulation of knowledge, symbols and images. We have seen the spread of frightening new systems of social control and repression that have brought us into the panoptical surveillance society and the age of thought control. This real-life Orwellian world is in a sense more perturbing than that described by George Orwell in his iconic novel 1984. In that fictional world, people were compelled to give their obedience to the state ("Big Brother") in exchange for a quiet existence with guarantees of employment, housing and other social necessities. Now, however, the corporate and political powers that be force obedience even as the means of survival are denied to the vast majority. Global apartheid involves the creation of "green zones" that are cordoned off in each locale around the world where elites are insulated through new systems of spatial reorganization, social control and policing. "Green zone" refers to the nearly impenetrable area in central Baghdad that US occupation forces established in the wake of the 2003 invasion of Iraq. The command center of the occupation and select Iraqi elite inside that green zone were protected from the violence and chaos that engulfed the country. Urban areas around the world are now green zoned through gentrification, gated communities, surveillance systems, and state and private violence. 
Inside the world's green zones, privileged strata avail themselves of privatized social services, consumption and entertainment. They can work and communicate through internet and satellite sealed off under the protection of armies of soldiers, police and private security forces. Green zoning takes on distinct forms in each locality. In Palestine, I witnessed such zoning in the form of Israeli military checkpoints, Jewish settler-only roads and the apartheid wall. In Mexico City, the most exclusive residential areas in the upscale Santa Fe District are accessible only by helicopter and private gated roads. In Johannesburg, a surreal drive through the exclusive Sandton City area reveals rows of mansions that appear as military compounds, with private armed towers and electrical and barbed-wire fences. In Cairo, I toured satellite cities ringing the impoverished center and inner suburbs where the country's elite could live out their aspirations and fantasies. They sport gated residential complexes with spotless green lawns, private leisure and shopping centers and English-language international schools under the protection of military checkpoints and private security police. In other cities, green zoning is subtler but no less effective. In Los Angeles, where I live, the freeway system now has an express lane reserved for those that can pay an exorbitant toll. On this lane, the privileged speed by, while the rest remain one lane over, stuck in the city's notorious bumper-to-bumper traffic -- or even worse, in notoriously underfunded and underdeveloped public transportation, where it may take half a day to get to and from work. There is no barrier separating this express lane from the others. However, a near-invisible closed surveillance system monitors every movement. 
If a vehicle without authorization shifts into the exclusive lane, it is instantly recorded by this surveillance system and a heavy fine is imposed on the driver, under threat of impoundment, while freeway police patrols are ubiquitous. Outside of the global green zones, warfare and police containment have become normalized and sanitized for those not directly at the receiving end of armed aggression. "Militainment" -- portraying and even glamorizing war and violence as entertaining spectacles through Hollywood films and television police shows, computer games and corporate "news" channels -- may be the epitome of sadistic capitalism. It desensitizes, bringing about complacency and indifference. In between the green zones and outright warfare are prison industrial complexes, immigrant and refugee repression and control systems, the criminalization of outcast communities and capitalist schooling. The omnipresent media and cultural apparatuses of the corporate economy, in particular, aim to colonize the mind -- to undermine the ability to think critically and outside the dominant worldview. A neofascist culture emerges through militarism, extreme masculinization, racism and racist mobilizations against scapegoats. 4) We are reaching limits to the extensive expansion of capitalism. Capitalism is like riding a bicycle: When you stop pedaling the bicycle, you fall over. If the capitalist system stops expanding outward, it enters crisis and faces collapse. In each earlier structural crisis, the system went through a new round of extensive expansion -- from waves of colonial conquest in earlier centuries, to the integration in the late 20th and early 21st centuries of the former socialist countries, China, India and other areas that had been marginally outside the system. There are no longer any new territories to integrate into world capitalism. 
Meanwhile, the privatization of education, health care, utilities, basic services and public land is turning those spaces in global society that were outside of capital's control into "spaces of capital." Even poverty has been turned into a commodity. What is there left to commodify? Where can the system now expand? With the limits to expansion comes a turn toward militarized accumulation -- waging wars of endless destruction and reconstruction and expanding the militarization of social and political institutions so as to continue to generate new opportunities for accumulation in the face of stagnation. 5) There is the rise of a vast surplus population inhabiting a "planet of slums," alienated from the productive economy, thrown into the margins and subject to these sophisticated systems of social control and destruction. Global capitalism has no direct use for surplus humanity. But indirectly, it holds wages down everywhere and makes new systems of 21st century slavery possible. These systems include prison labor, the forced recruitment of miners at gunpoint by warlords contracted by global corporations to dig up valuable minerals in the Congo, sweatshops and exploited immigrant communities (including the rising tide of immigrant female caregivers for affluent populations). Furthermore, the global working class is experiencing accelerated "precariatization." The "new precariat" refers to the proletariat that faces capital under today's unstable and precarious labor relations -- informalization, casualization, part-time, temp, immigrant and contract labor. As communities are uprooted everywhere, there is a rising reserve army of immigrant labor. The global working class is becoming divided into citizen and immigrant workers. The latter are particularly attractive to transnational capital, as the lack of citizenship rights makes them particularly vulnerable, and therefore, exploitable. 
The challenge for dominant groups is how to contain the real and potential rebellion of surplus humanity, the immigrant workforce and the precariat. How can they contain the explosive contradictions of this system? The 21st century megacities become the battlegrounds between mass resistance movements and the new systems of mass repression. Some populations in these cities (and also in abandoned countryside) are at risk of genocide, such as those in Gaza, zones in Somalia and Congo, and swaths of Iraq and Syria. 6) There is a disjuncture between a globalizing economy and a nation-state-based system of political authority. Transnational state apparatuses are incipient and do not wield enough power and authority to organize and stabilize the system, much less to impose regulations on runaway transnational capital. In the wake of the 2008 financial collapse, for instance, the governments of the G-8 and G-20 were unable to impose transnational regulation on the global financial system, despite a series of emergency summits to discuss such regulation. Elites historically have attempted to resolve the problems of over-accumulation by state policies that can regulate the anarchy of the market. However, in recent decades, transnational capital has broken free from the constraints imposed by the nation-state. The more "enlightened" elite representatives of the transnational capitalist class are now clamoring for transnational mechanisms of regulation that would allow the global ruling class to rein in the anarchy of the system in the interests of saving global capitalism from itself and from radical challenges from below. At the same time, the division of the world into some 200 competing nation-states is not the most propitious of circumstances for the global working class. 
Victories in popular struggles from below in any one country or region can (and often do) become diverted and even undone by the structural power of transnational capital and the direct political and military domination that this structural power affords the dominant groups. In Greece, for instance, the leftist Syriza party came to power in 2015 on the heels of militant worker struggles and a mass uprising. But the party abandoned its radical program as a result of the enormous pressure exerted on it from the European Central Bank and private international creditors.
The Systemic Critique of Global Capitalism
A growing number of transnational elites themselves now recognize that any resolution to the global crisis must involve redistribution downward of income. However, from the viewpoint of those below, a neo-Keynesian redistribution within the prevailing corporate power structure is not enough. What is required is a redistribution of power downward and transformation toward a system in which social need trumps private profit. A global rebellion against the transnational capitalist class has spread since the financial collapse of 2008. Wherever one looks, there is popular, grassroots and leftist struggle, and the rise of new cultures of resistance: the Arab Spring; the resurgence of leftist politics in Greece, Spain and elsewhere in Europe; the tenacious resistance of Mexican social movements following the Ayotzinapa massacre of 2014; the favela uprising in Brazil against the government's World Cup and Olympic expulsion policies; the student strikes in Chile; the remarkable surge in the Chinese workers' movement; the shack dwellers and other poor people's campaigns in South Africa; Occupy Wall Street, the immigrant rights movement, Black Lives Matter, fast food workers' struggle and the mobilization around the Bernie Sanders presidential campaign in the United States. This global revolt is spread unevenly and faces many challenges. 
A number of these struggles, moreover, have suffered setbacks, such as the Greek working-class movement and, tragically, the Arab Spring. What type of a transformation is viable, and how do we achieve it? How we interpret the global crisis is itself a matter of vital importance as politics polarize worldwide between a neofascist and a popular response. The systemic critique of global capitalism must strive to influence, from this vantage point, the discourse and practice of movements for a more just distribution of wealth and power. Our survival may depend on it.
We call for a break from the cynicism of pragmatism in exchange for experimentation through a Benjaminian red-green process of minor utopia that refigures social relations and sparks creativity, thought and feeling
Featherstone 17 Mark, Senior Lecturer in Sociology at Keele University. “Planet Utopia: Utopia, Dystopia, and Globalisation.” Series: Routledge studies in social and political thought. February 17, 2017.
In Benjamin’s (2009) work on the trauerspiel, early modern German tragedy captures the image of tyranny in a state of collapse. The tyrant seeks to exert control, but continually fails, because fate dictates that his efforts to shore up his empire must always miss their mark. Realising his terminal situation, the tyrant falls into an abyss of melancholia, characterised by a sense of paralysis and a perception of the end: the end of his reign, the end of his meaning, the end of his-tory. Surveying his world, the sovereign sees nothing but failure, ruination, and decay everywhere. It is in this dystopian vision of collapse that Benjamin finds utopian possibility and the space of the new. In a world devoid of meaning and significance, he explains that the exposed thing, or what he calls the creaturely, offers hope for the future, because it affords the opportunity to begin again. 
As Susan Buck-Morss (1991) shows, Benjamin saw the same situation played out in the Paris arcades of the late 19th century, where the ruined objects of early consumer capitalism shone with utopian possibility. Perhaps this perspective is still appropriate, or even more appropriate, for the contemporary world, where consumer capitalism has become a global form, what Benjamin might call a global ur-landscape, a kind of natural background or fate which seems absolutely inescapable. Frozen in this natural system, there is nowhere to go, and we collapse into repetition, compulsion, and routine in order to dull the pain caused by our lack of future. However, descent into the dark underworld of the contemporary addictogenic society offers no real escape, because immersion in the compulsion to repeat simply emphasises our profane objectivity—unless, of course, it produces reflexive recognition and the determination to engineer change. There is, therefore, value in ruins. There is ruin value in the debased worker who is simply a meaningless cog in a machine, ruin value in the prostitute who is little more than a piece of meat bought and sold like any other commodity, ruin value in the addict who is a slave to junk, ruin value in the slum dweller who must struggle to survive on a daily basis. In these ruined bodies living in dystopia we confront Benjamin’s (2009) creaturely life, the blank people Catherine Malabou (2012) calls the new wounded, the waste products of late capitalism who open up the possibility of the kind of catastrophic and post-catastrophic subjectivity Wilfred Bion explored through his work. For Bion (1993) these subjects come face to face with bare life, or the thing in itself he captured through his use of the symbol O, and must find some way forward into the future. 
In other words, O represents the lived experience of dystopia, a world catastrophe for a destroyed subject which is also a blank canvas, an island of hope that points towards an infinite number of possible futures. Although Bion was centrally concerned with catastrophic subjectivity, Benjamin’s (1999) utopians were not only destroyed subjects-cum-objects—the prostitutes, the beggars, and scum of the capitalist system—but also children, who always exist on the edge of the world, because they are in the process of being socialised into normal ways of living. Benjamin (2006) found utopian hope in kids, whose naïve questions—Where did I come from? What is this, that, and the other? Why is the world the way it is? and so on—suggest distance from orthodoxy and the accepted order of things, because their way of being suggests a model of imaginative, ludic thought and practice which might enable everybody else to escape the closure of modern, capitalist society. Against the hard pragmatism of the capitalist, who is only interested in costs and benefits, Benjamin wanted to wake the capitalist subject up to the dream-world of the child who invents the future through everyday play. For Benjamin, the human future is hidden within these small utopias (Stewart, 2010). Even in the contemporary situation, where the child has become a key source of value production for capitalists, Benjamin would resist despair on the basis that children will always find the new in their play with even the most profane objects. In his work, capitalism evolves through different conceptions of value, where use value becomes exchange value becomes symbolic value becomes ruin value becomes utopian value, which results in the transition of the object from a useful object to a commodity to be bought and sold to a symbol to be exchanged and finally a ruined piece of waste that signals the closure of one way of thinking and the possibility of some other path into the future (Featherstone, 2005). 
This is how Benjamin finds utopia in dystopia, infinity in the finite and the profane, and suggests we might escape the nihilism of the always-the-same of capitalism. Žižek (2008, 2010) makes a similar point in his recent works on catastrophe and utopia. In his In Defence of Lost Causes (2008), he argues that we must exploit the current global situation in the name of the lost cause of the eternal idea. However, whereas Žižek’s eternal idea reflects a Platonic notion of justice, I would argue that this concept has little value today, simply because of its inherent authoritarianism, and must instead be taken to represent a kind of empty signifier, which we need to fill out through creative practice. Thus, my view is that what the pursuit of the eternal idea of justice calls for is less some transcendental imperative around division of resources imposed from above and more the creation of a space of immanence to enable experimentation about what it is people value in life. Although this call may appear to be based in utopian idealism, I would argue that such activity is absolutely practical and rooted in the immanent idealism of the child at play. Absorbed in play, this utopian child exemplifies the idea of fixation, which reflects deep immersion in the objective world, where profane things become magical signs of the future to come. Utopian play is purposeful, and characterised by practice organised around an imagined goal, but centrally a goal which is open to adaptation on the basis of creative interaction with changing circumstance. Thus it becomes clear why culture is so important politically—culture is the space of interaction between the subject and the objective world, where the subject simultaneously makes meaning in the world and in doing so creates his own identity. 
In my view, this is what utopian practice means today, and how we can develop a mode of concrete utopianism to oppose the global capitalist system that seems devoid of spirit, significance, and human meaning. As Žižek points out in his apocalyptic Living in the End Times (2010), the generalised crisis of late capitalism, which takes in looming ecological catastrophe and intractable social division, means that we must find a way to move beyond the neoliberal utopia-cum-dystopia in the creation of a human world. In my view, culture must play a central role in this task, because culture is communication, and the basis upon which humans form worlds. Culture is also the medium of human imagination, creativity, and fantasy, what Winnicott called our little madnesses (Kuhn, 2013). I would argue that we need more little madnesses in the contemporary world, simply because neoliberal capitalism has created a worldless world where meaning is reduced to economic equations around value. There is no humanity in this mode of thinking. Thus, my objective in this book is to consider the concepts of dystopia and utopia from the vantage point of the seashore where children play and imagine possible worlds very different from our own. Following this introductory chapter, where I have sought to read global politics through the lens of the psychoanalysis of D. W. Winnicott, in the next chapter I move on to focus on the situation of contemporary Greece. I compare and contrast the Greece of the early 21st century, wrecked by EU austerity measures caused by fantastical attempts to build a new neoliberal utopia on the back of unsustainable debt and credit, with the Greece of the original utopians, the ancient Greek philosophers Plato (1991) and Socrates, in order to try to articulate a vision of a more socially just, economically sustainable society. 
In Chapters 2, 3, and 4 of the book, which comprise the centre of the first part of the work, I move on from this exploration of Greece, which re-reads Plato’s Republic (1991) through the lens of Alain Badiou’s philosophy, in order to try to understand how the contemporary global capitalist model emerged and whether it is possible to read this history through concepts of utopia and dystopia. On the basis that there is no sustained study of the utopian vision of capitalism, which is supposed to be the realist mode of social and economic organisation par excellence, in these three chapters I track the evolution of capitalism and capitalist thought back through the work of Adam Smith (1982, 1999), John Locke (1988), and Thomas Hobbes (2008) before leaping forward into the works of Milton Friedman (2002) and finally the key theories of contemporary financialisation. In order to kick-start this history, I begin with a discussion of the difference between the capitalist vision of economy and the archaic, primitive view of economy found in Plato (1991), but also anthropologists such as Marshall Sahlins (1974) and Marcel Mauss (2000). Where the latter primitivists regard economy in terms of the need to sustain life, the capitalists, perhaps starting with Bernard Mandeville (1989) and John Locke, take economy as a means of ever-increasing productivity and profitability. Tracing the development of this history, in Chapter 3 of the book I explore the development of capitalism in America, and particularly across the post-World War II period when Milton Friedman (2002) and the neoliberal thinkers read economics through the cold war cybernetic theory of early computational thinkers such as Norbert Wiener and John Von Neumann, who, with John Nash, was instrumental in the development of game theory (Mirowski, 2002). 
In order to extend this work, which shows how economy evolved from a system for the distribution of scarce goods necessary to sustain life to a technoscientific cybernetic model concerned with the production of profit removed from any concern with human or environmental sustainability, I move on to look at the ultimate form of capitalist, economic abstraction, financialisation, where money makes money without the need for human production. Against this theory of the non- or post-human dimensions of contemporary economy, where human and world are subordinate to the needs of the financial system that abolishes the future in the name of debt repayment, in Chapter 5 of the book I take up an alternative vision of economy, organised around the irreducible sociality of people and the necessary relationship between human and world explored in the philosophy of Merleau-Ponty (1969) and Deleuze and Guattari (1994). The aim of this chapter, then, is to suggest a leftist, red-green model of what I call the minor utopia, where work and productivity are understood in terms of natural productivity and the satisfaction of need, rather than abstract profit making that harms humanity both in itself and through the destruction of its biospheric life support system. In this chapter I connect Merleau-Ponty (1969) to Marx (1988), and recall my earlier reference to Winnicott (2005) on infant creativity, in order to argue that humans are infinitely creative and imaginative and defined by the need to express themselves and that the capitalist model of economy has progressively subjugated this potential and reduced humans to profit-making machines. Inspired by Marcuse’s (1987) work, I suggest that under this utilitarian model there is no room for life or human imagination that transgresses the current order of things, which is by definition beyond utility. 
In Chapter 6, I return to the issue of utopian potential through an exploration of the dystopian dimensions of the ur-space of sociality, the city, in the cinema of the Danish director Nicolas Winding Refn, and particularly the ways in which he situates his characters within an autistic space where social relations never hold and continually break apart. The space of the city is, of course, key here because the history of the ideas of utopia, dystopia, identity, society, economy, politics, and culture can be traced back to the invention of the urban form that creates a space for the articulation of everything human. Akin to a variety of contemporary utopias, which envision the city in terms of dystopian collapse, Winding Refn’s films, but particularly his most recent works Drive (2011) and Only God Forgives (2013), imagine the globalised city (his cities span the globe, from LA in the West to Bangkok in the East, to create a nightmarish vision of the global city) saturated with asociality, suspicion, mistrust, and ultra-violence. At the heart of both films I consider, the main character is explained in terms of destroyed masculinity rooted in lost childhood and the kind of abandonment one might imagine Winnicott’s children suffer in the mechanised world where alienated work in the name of profitability is more important than human development. It is on the basis of this work on cultural expressions of global dystopia linked to ideas of the collapse of social and particularly familial relations that I turn, in Chapter 7, to a consideration of the situation of youth in contemporary Britain. In this chapter, entitled Dis-United Kingdom, I move back into straight sociological critique and take up a discussion of the riots of 2011.
9/23/21
1 - K - Cornpess
Tournament: Loyola | Round: 6 | Opponent: Harvard Westlake EJ | Judge: Neville Tom
The only ethical demand is that of corn – the end of the world itself – the system of violent antagonisms means solving for contingent violence only reifies futurism which subjects corn bodies to infinite ontological exclusion and erasure. Anti-corn violence is fundamentally irrational and progress within civil society is constructed in opposition to corn.
James 9, 11-5-2009, "Why Don’t People Like Corn?," James and the Giant Corn, http://www.jamesandthegiantcorn.com/2009/11/05/why-dont-people-like-corn/ SJCPJG
I read an interesting question on the still growing thread on the problems with CSI: Miami’s “Bad Seed” episode. “Between this episode and some other stuff I’ve heard about corn, I started wondering what all the concern is about corn lately. … Can you now help shine some light on why the corn industry has been getting such a bad reputation lately?” I’m really not sure about the answer to this question. My best guess is that the declining reputation of corn has to do with our transition from a society where our biggest food concern was not having enough to eat, to one where we worry more about eating too much rather than not enough.
Plants like corn want to be eaten – their intrinsic purpose is to attract and be eaten – bright colors and lack of toxicity prove.
Ahrens 17 Joseph Ahrens, 8-18-2017, "Do vegetables and fruits want to be eaten?," Developed handling/processing protocols for foods worldwide, https://www.quora.com/Do-vegetables-and-fruits-want-to-be-eaten SJCPJG
Yes and no! Some want to be eaten so you will spread their seeds. Some want to be left alone because they do not want you to assist in this activity. So they are poisonous to us. But other animals are immune to the poison and they do the job. Berries are very interesting. They taste delicious. But they also contain chemicals that give animals diarrhea. 
Thus causing the seeds to be expelled, along with fertilizer! Some plants have berries that are bright colors to attract animals to eat them. Other plants have bright colors to warn us to stay away and they are poisonous.
9/23/21
1 - K - Queer Pessimism
Tournament: Apple Valley | Round: 6 | Opponent: Eagan AE | Judge: Rex Evans
Desire from lack projects an identity we can never fully reach, which urges the political to determine which identities are legitimate. Thus, the role of the ballot is to vote for the debater with the best method of traversing the fantasy.
Edelman 1 (Lee Edelman, No Future: Queer Theory and the Death Drive, 2004, Duke University Press, p. 7-9) SJCPJG
Politics, to put this another way, names the space in which Imaginary relations, relations that hark back to a misrecognition of the self as enjoying some originary access to presence (a presence retroactively posited and therefore lost, one might say, from the start), compete for Symbolic fulfillment, for actualization in the realm of language to which subjectification subjects us all. Only the mediation of the signifier allows us to articulate those Imaginary relations, though always at the price of introducing the distance that precludes their realization: the distance inherent in the chain of ceaseless deferrals and substitutions to which language as a system of differences necessarily gives birth. The signifier, as alienating and meaningless token of our Symbolic constitution as subjects (as token, that is, of our subjectification through subjection to the prospect of meaning); the signifier, by means of which we always inhabit the order of the Other, the order of a social and linguistic reality articulated from somewhere; the signifier, which calls us into meaning by seeming to call us to ourselves: this signifier only bestows a sort of promissory identity, one with which we can never succeed in fully coinciding because we, as subjects of the signifier, can only be signifiers ourselves, can only ever aspire to catch up with whatever it is we might signify by closing the gap that divides us and, paradoxically, makes us subjects through that act of division alone. 
This structural inability of the subject to merge with the self for which it sees itself as a signifier in the eyes of the Other necessitates various strategies designed to suture the subject in the space of meaning where Symbolic and Imaginary overlap. Politics names the social enactment of the subject's attempt to establish the conditions for this impossible consolidation by identifying with something outside of itself in order to enter the presence, deferred perpetually, of itself. Politics, that is, names the struggle to effect a fantasmic order of reality in which the subject's alienation would vanish into the seamlessness of identity at the endpoint of the endless chain of signifiers lived as history. If politics in the Symbolic is always therefore a politics of the Symbolic, operating in the name and in the direction of a constantly anticipated futurity, then the telos that would, in fantasy, put an end to these deferrals, the presence toward which the metonymic chain of signifiers always aims, must be recognized, nonetheless, as belonging to an Imaginary past. This means not only that politics conforms to the temporality of desire, to what we might call the inevitable historicity of desire- the successive displacements forward of nodes of attachment as figures of meaning, points of intense metaphoric investment, produced in the hope, however vain, of filling the constitutive gap in the subject that the signifier necessarily installs- but also that politics is name for the temporalization of desire, for its translation into a narrative, for its teleological determination. Politics and futurism is built on the premise that any negation of the signifier of the child is essential in order to fulfill desire from lack which deems queerness out of the political. Edelman 2 (Lee Edelman, No Future: Queer Theory and the Death Drive, 2004, Duke University Press, p. 
10-13) SJCPJG Politics, then, in opposing itself to the negativity of such a drive, gives us history as the continuous staging of our dream of eventual self-realization by endlessly reconstructing, in the mirror of desire, what we take to be reality itself. And it does so without letting us acknowledge that the future, to which it persistently appeals, marks the impossible place of an Imaginary past exempt from the deferrals intrinsic to the operation of the signifying chain and projected ahead as the site at which being and meaning are joined as One. In this it enacts the formal repetition distinctive of the drive while representing itself as bringing to fulfillment the narrative sequence of history and, with it, of desire, in the realization of the subject's authentic presence in the Child imagined as enjoying unmediated access to Imaginary wholeness. Small wonder that the era of the universal subject should produce as the very figure of politics, because also as the embodiment of futurity collapsing undecidably into the past, the image of the Child as we know it: the Child who becomes, in Wordsworth's phrase, but more punitively, "father of the Man." Historically constructed, as social critics and intellectual historians including Philippe Ariès, James Kincaid, and Lawrence Stone have made clear, to serve as the repository of variously sentimentalized cultural identifications, the Child has come to embody for us the telos of the social order and come to be seen as the one for whom that order is held in perpetual trust. In its coercive universalization, however, the image of the Child, not to be confused with the lived experiences of any historical children, serves to regulate political discourse-to prescribe what will count as political discourse-by compelling such discourse to accede in advance to the reality of a collective future whose figurative status we are never permitted to acknowledge or address. 
From Delacroix's iconic image of Liberty leading us into a brave new world of revolutionary possibility- her bare breast making each spectator the unweaned Child to whom it's held out while the boy to her left, reproducing her posture, affirms the absolute logic of reproduction itself-to the revolutionary waif in the logo that miniaturizes the "politics" of Les Mis (summed up in its anthem to futurism, the "inspirational" "One Day More"), we are no more able to conceive of a politics without a fantasy of the future than we are able to conceive of a future without the figure of the Child. That figural Child alone embodies the citizen as an ideal, entitled to claim full rights to its future share in the nation's good, though always at the cost of limiting the rights "real" citizens are allowed. For the social order exists to preserve for this universalized subject, this fantasmatic Child, a notional freedom more highly valued than the actuality of freedom itself, which might, after all, put at risk the Child to whom such a freedom falls due. Hence, whatever refuses this mandate by which our political institutions compel the collective reproduction of the Child must appear as a threat not only to the organization of a given social order but also, and far more ominously, to social order as such, insofar as it threatens the logic of futurism on which meaning always depends. So, for example, when P. D. 
James, in her novel Children of Men, imagines a future in which the human race has suffered a seemingly absolute loss of the capacity to reproduce, her narrator, Theodore Faron, not only attributes this reversal of biological fortune to the putative crisis of sexual values in late twentieth-century democracies-"Pornography and sexual violence on film, on television, in books, in life had increased and became more explicit but less and less in the West we made love and bred children," he declares-but also gives voice to the ideological truism that governs our investment in the Child as the obligatory token of futurity: "Without the hope of posterity, for our race not for ourselves, without the assurance that we being dead yet live," he later observes, "all pleasures of the mind and senses sometimes seem to me no more than pathetic and crumbling defences shored up against our ruins."12 While this allusion to Eliot's "The Waste Land" may recall another of its well-known lines, one for which we apparently have Eliot's wife, Vivien, to thank-"What you get married for if you don't want children?"-it also brings out the function of the child as the prop of the secular theology on which our social reality rests: the secular theology that shapes at once the meaning of our collective narratives and our collective narratives of meaning. Charged, after all, with the task of assuring "that we being dead yet live," the Child, as if by nature (more precisely, as the promise of a natural transcendence of the limits of nature itself), exudes the very pathos from which the narrator of The Children of Men recoils when he comes upon it in nonreproductive "pleasures of the mind and senses." 
For the "pathetic" quality he projectively locates in non-generative sexual enjoyment-enjoyment that he views in the absence of futurity as empty, substitutive, pathological-exposes the fetishistic figurations of the Child that the narrator pits against it as legible in terms identical to those for which enjoyment without "hope of posterity" is peremptorily dismissed: legible, that is, as nothing more than "pathetic and crumbling defences shored up against our ruins." How better to characterize the narrative project of The Children of Men itself, which ends, as anyone not born yesterday surely expects from the start, with the renewal of our barren and dying race through the miracle of birth? After all, as Walter Wangerin Jr., reviewing the book for the New York Times, approvingly noted in a sentence delicately poised between description and performance of the novel's pro-procreative ideology: "If there is a baby, there is a future, there is redemption."13 If, however, there is no baby and, in consequence, no future, then the blame must fall on the fatal lure of sterile, narcissistic enjoyments understood as inherently destructive of meaning and therefore as responsible for the undoing of social organization, collective reality, and, inevitably, life itself. The political relies on this negation in order to sustain itself, which forces any sign of deviancy into a position of ontological damnation where queerness is condemned to overkill. Stanley 11 (Eric Stanley, Near Life, Queer Death: Overkill and Ontological Capture, 2011, p. 8-10) SJCPJG To this end, the law is made possible through the reproduction of both material and discursive formations of antiqueer violence, along with many other forms of violence. I too quickly rehearse this argument in the hope that it might foreclose the singular reliance on the law as the ground, and rights as the technology, of safety.23 “He was my son—my daughter. It didn’t matter which. 
He was a sweet kid,” Lauryn Paige’s mother, trying to reconcile at once her child’s murder and her child’s gender, stated outside an Austin, Texas, courthouse.24 Lauryn was an eighteen-year-old transwoman who was brutally stabbed to death. According to Dixie, Lauryn’s best friend, it was a “regular night.” The two women had spent the beginning of the evening “working it” as sex workers. After Dixie and Lauryn had made about $200 each they decided to call it quits and return to Dixie’s house, where both lived. On the walk home, Gamaliel Mireles Coria and Frank Santos picked them up in their white conversion van. “Before we got into the van the very first thing I told them was that we were transsexuals,” said Dixie in an interview.25 After a night of driving around, partying in the van, Dixie got dropped off at her house. She pleaded for Lauryn to come in with her, but Lauryn said, “Girl, let me finish him,” so the van took off with Lauryn still inside.26 Santos was then dropped off, leaving Lauryn and Coria alone in the van. According to the autopsy report, Travis County medical examiner Dr. Roberto Bayardo cataloged at least fourteen blows to Lauryn’s head and more than sixty knife wounds to her body. The knife wounds were so deep that they almost decapitated her—a clear sign of overkill. Overkill is a term used to indicate such excessive violence that it pushes a body beyond death. Overkill is often determined by the postmortem removal of body parts, as with the partial decapitation in the case of Lauryn Paige and the dissection of Rashawn Brazell. The temporality of violence, the biological time when the heart stops pushing and pulling blood, yet the killing is not finished, suggests the aim is not simply the end of a specific life, but the ending of all queer life. This is the time of queer death, when the utility of violence gives way to the pleasure in the other’s mortality. 
If queers, along with others, approximate nothing, then the task of ending, of killing, that which is nothing must go beyond normative times of life and death. In other words, if Lauryn was dead after the first few stab wounds to the throat, then what do the remaining fifty wounds signify? The legal theory that is offered to nullify the practice of overkill often functions under the name of the trans or gay-panic defense. Both of these defense strategies argue that the murderer became so enraged after the “discovery” of either genitalia or someone’s sexuality they were forced to protect themselves from the threat of queerness. Estanislao Martinez of Fresno, California, used the trans-panic defense and received a four-year prison sentence after admittedly stabbing J. Robles, a Latina transwoman, at least twenty times with a pair of scissors. Importantly, this defense is often used, as in the cases of Robles and Paige, after the murderer has engaged in some kind of sex with the victim. The logic of the trans-panic defense as an explanation for overkill, in its gory semiotics, offers us a way of understanding queers as the nothing of Mbembe’s query. Overkill names the technologies necessary to do away with that which is already gone. Queers then are the specters of life whose threat is so unimaginable that one is “forced,” not simply to murder, but to push them backward out of time, out of History, and into that which comes before.27 In thinking the overkill of Paige and Brazell, I return to Mbembe’s query, “But what does it mean to do violence to what is nothing?”28 This question in its elegant brutality repeats with each case I offer. By resituating this question in the positive, the “something” that is more often than not translated as the human is made to appear. Of interest here, the category of the human assumes generality, yet can only be activated through the specificity of historical and politically located intersection. 
To this end, the human, the “something” of this query, within the context of the liberal democracy, names rights-bearing subjects, or those who can stand as subjects before the law. The human, then, makes the nothing not only possible but necessary. Following this logic, the work of death, of the death that is already nothing, not quite human, binds the categorical (mis)recognition of humanity. The human, then, resides in the space of life and under the domain of rights, whereas the queer inhabits the place of compromised personhood and the zone of death. As perpetual and axiomatic threat to the human, the queer is the negated double of the subject of liberal democracy. Understanding the nothing as the unavoidable shadow of the human serves to counter the arguments that suggest overkill and antiqueer violence at large are a pathological break and that the severe nature of these killings signals something extreme. In contrast, overkill is precisely not outside of, but is that which constitutes liberal democracy as such. Overkill then is the proper expression to the riddle of the queer nothingness. Put another way, the spectacular material-semiotics of overkill should not be read as (only) individual pathology; these vicious acts must indict the very social worlds of which they are ambassadors. Overkill is what it means, what it must mean, to do violence to what is nothing. Ignore statistics regarding material progress for queerness – they’re geared at hiding the truth of the situation which means only our ontology claim explains the reality of overkill. Stanley 2 (Eric Stanley, Near Life, Queer Death: Overkill and Ontological Capture, 2011, p. 5-6) SJCPJG Can one find what was not ever there—the missing head of a black queer or the identity of an unnamed transwoman whose body is never claimed? How do we measure the pain of burying generations of those we love or even those we never knew? Brazell’s bloody end asks these questions through its calculus of trauma. 
This kind of loss orders a precarious organization, a kind of trace of that which was never there, a death that places into jeopardy the category of life itself. The numbers, degrees, locations, kinds, types, and frequency of attacks, the statistical evidence that is meant to prove that a violation really happened, are the legitimizing measures that dictate the ways we are mandated to understand harm. However, statistics as an epistemological project may be another way in which the enormity of antiqueer violence is disappeared. Thinking only, or primarily, statistically about antiqueer violence is both a theoretical and a material trap. Although statistical evidence is important to make strong knowledge claims about the severity of violence, “statistics” seem to have a way of ensuring that the head of Brazell is never found. Ironically, because his head has yet to be recovered, the “actual” cause of death cannot be officially determined. Furthermore, this indeterminate cause of death bars Brazell from being entered into hate crimes statistics. Not yet dead, Brazell has never been counted as a casualty of “hate violence.”13 Currently the FBI, through the Criminal Justice Information Services (CJIS) Division, collects the only national data on “hate violence.” These data on hate violence (or hate crimes, as they are more commonly called) contain categories for religious, racial, and disability “bias” and antihomosexual (male and female), antibisexual, and antiheterosexual incidents (in the 2008 statistics, 2 percent of reported hate crimes were antiheterosexual incidents, while 1.6 percent were antibisexual).14 This hate violence reporting is optional for local jurisdictions; the FBI collects no statistics on trans/gender variant incidents; and the 2008 statistics report that only ten “victims” experienced “multi-bias” incidents. 
The 2008 report also counted only 1,706 incidents based on “sexual orientation,” which comprised infractions ranging from vandalism to murder. It would seem misguided at best to suggest that the number 1,706 can really tell us anything about the work of antiqueer violence. Reported attacks on “out” queer folks, such as these data, can of course only work as a swinging signifier for the incalculable referent of the actualized violence. This is not simply a numerical issue; it is a larger question of the friction between measures and effect. Not unlike the structuring lack produced by any representation that offers us, the viewers, the promise of the real, statistics can leave us with only a fragmented copy of what they might index. “Reports” on antiqueer violence, such as the “Hate Crime Statistics,” reproduce the same kinds of rhetorical loss along with the actual loss of people that cannot be counted. The quantitative limits of what gets to count as antiqueer violence cannot begin to apprehend the numbers of trans and queer bodies that are collected off cold pavement and highway underpasses, nameless flesh whose stories of brutality never find their way into an official account beyond a few scant notes in a police report of a body of a “man in a dress” discovered.15 The alternative is to embrace the death drive – a full affirmation of queer negativity in which we reject the 1AC in favor of traversing the fantasy and realizing the structural positionality of queer identity. Edelman 3 (Lee Edelman, No Future: Queer Theory and the Death Drive, 2004, Duke University Press, p. 4-7) SJCPJG “Rather than rejecting, with liberal discourse, this ascription of negativity to the queer, we might, as I argue, do better to consider accepting and even embracing it. 
Not in the hope of forging thereby some more perfect social order-such a hope, after all, would only reproduce the constraining mandate of futurism, just as any such order would equally occasion the negativity of the queer-but rather to refuse the insistence of hope itself as affirmation, which is always affirmation of an order whose refusal will register as unthinkable, irresponsible, inhumane. And the trump card of affirmation? Always the question: If not this, what? Always the demand to translate the insistence, the pulsive force, of negativity into some determinate stance or "position" whose determination would thus negate it: always the imperative to immure it in some stable and positive form. When I argue, then, that we might do well to attempt what is surely impossible-to withdraw our allegiance, however compulsory, from a reality based on the Ponzi scheme of reproductive futurism-I do not intend to propose some "good" that will thereby be assured. To the contrary, I mean to insist that nothing, and certainly not what we call the "good," can ever have any assurance at all in the order of the Symbolic. Abjuring fidelity to a futurism that's always purchased at our expense, though bound, as Symbolic subjects consigned to figure the Symbolic's undoing, to the necessary contradiction of trying to turn its intelligibility against itself, we might rather, figuratively, cast our vote for "none of the above," for the primacy of a constant no in response to the law of the Symbolic, which would echo that law's foundational act, its self-constituting negation. The structuring optimism of politics to which the order of meaning commits us, installing as it does the perpetual hope of reaching meaning through signification, is always, I would argue, a negation of this primal, constitutive, and negative act. 
And the various positivities produced in its wake by the logic of political hope depend on the mathematical illusion that negated negations might somehow escape, and not redouble, such negativity. My polemic thus stakes its fortunes on a truly hopeless wager: that taking the Symbolic's negativity to the very letter of the law, that attending to the persistence of something internal to reason that reason refuses, that turning the force of queerness against all subjects, however queer, can afford an access to the jouissance that at once defines and negates us. Or better: can expose the constancy, the inescapability, of such access to jouissance in the social order itself even if that order can access its constant access to jouissance only in the process of abjecting that constancy of access onto the queer. In contrast to what Theodor Adorno describes as the "grimness with which a man clings to himself, as to the immediately sure and substantial," the queerness of which I speak would deliberately sever us from ourselves, from the assurance, that is, of knowing ourselves and hence of knowing our "good."4 Such queerness proposes, in place of the good, something I want to call "better," though it promises, in more than one sense of the phrase, absolutely nothing. I connect this something better with Lacan's characterization of what he calls "truth," where truth does not assure happiness, or even, as Lacan makes clear, the good.5 Instead, it names only the insistent particularity of the subject, impossible fully to articulate and "tending toward the real."6 Lacan, therefore, can write of this truth: The quality that best characterizes it is that of being the true Wunsch, which was at the origin of an aberrant or atypical behavior. 
We encounter this Wunsch with its particular, irreducible character as a modification that presupposes no other form of normalization than that of an experience of pleasure or of pain, but of a final experience from whence it springs and is subsequently preserved in the depths of the subject in an irreducible form. The Wunsch does not have the character of a universal law but, on the contrary, of the most particular of laws-even if it is universal that this particularity is to be found in every human being.' Truth, like queerness, irreducibly linked to the "aberrant or atypical," to what chafes against "normalization," finds its value not in a good susceptible to generalization, but only in the stubborn particularity that voids every notion of a general good. The embrace of queer negativity, then, can have no justification if justification requires it to reinforce some positive social value; its value, instead, resides in its challenge to value as defined by the social, and thus in its radical challenge to the very value of the social itself. For by figuring a refusal of the coercive belief in the paramount value of futurity, while refusing as well any backdoor hope for dialectical access to meaning, the queer dispossesses the social order of the ground on which it rests: a faith in the consistent reality of the social-and by extension, of the social subject; a faith that politics, whether of the left or of the right, implicitly affirms. Divesting such politics of its thematic trappings, bracketing the particularity of its various proposals for social organization, the queer insists that politics is always a politics of the signifier, or even of what Lacan will often refer to as "the letter." It serves to shore up a reality always unmoored by signification and lacking any guarantee. 
To say as much is not, of course, to deny the experiential violence that frequently troubles social reality or the apparent consistency with which it bears-and thereby bears down on-us all. It is, rather, to suggest that queerness exposes the obliquity of our relation to what we experience in and as social reality, alerting us to the fantasies structurally necessary in order to sustain it and engaging those fantasies through the figural logics, the linguistic structures, that shape them. If it aims effectively to intervene in the reproduction of such a reality-an intervention that may well take the form of figuring that reality's abortion-then queer theory must always insist on its connection to the vicissitudes of the sign, to the tension between the signifier's collapse into the letter's cadaverous materiality and its participation in a system of reference wherein it generates meaning itself. As a particular story, in other words, of why storytelling fails, one that takes both the value and the burden of that failure upon itself, queer theory, as I construe it, marks the "other" side of politics: the "side" where narrative realization and derealization overlap, where the energies of vitalization ceaselessly turn against themselves; the "side" outside all political sides, committed as they are, on every side, to futurism's unquestioned good.
Don’t allow AC offense weighing:
A Your aff analysis starts from the wrong point – that’s an epistemological indict; all your offense just feeds back into futurism.
B Alt solves case – we’re a better explanation of the root cause of the AC impacts specifically, which solves them back.
C Solvency deficit – your aff does nothing but allow resistance strategies to become known and coopted, which turns solvency. It’s actively bad because the ruse of solvency means we focus on the wrong part. Also, fiat is illusory because nothing happens when the judge votes aff.
They don’t get a permutation:
A Even if you can conceive of or prefer a world with both the aff and the alternative, view them as artificially distinct – holding the aff to a method distinct from ours is necessary to fully flesh out the intricacies of both methods and is the only way to create concrete and nuanced proposals.
B It’s a methods debate – hold them to the method they defended in the 1AC by itself, since anything else justifies severance and endorses bad scholarship; it should be a debate of my method versus yours.
C Perms justify infinite aff conditionality – allowing permutations lets the aff read infinite new advocacies in the 1AR, which skews 7 minutes of the 1NC and destroys neg ground.
11/6/21
1 - NC - Emotivism
Tournament: Blue Key | Round: 6 | Opponent: American Heritage Broward JA | Judge: Tajaih Robinson 6 Hijack – Deleuze justifies emotivism:
1. Affect terminates in the normative conclusion of the NC since the aff just makes an ontological claim about the world, but that claim materializes itself in expressions of desire in every aspect, including linguistic constructs like the resolutional statement. 2. The reason communal attachments increase affect is for the purpose of joy – your card literally says it, which proves expression of emotions is the reason your framework matters. 3. Affect materially manifests itself through emotions, which means the only way we can account for affect is through those expressions and exchanges of emotive responses. That negates –
Every emotive judgment is indexed to a particular individual; no emotive sentiments can ever be fully universal. This means the resolution negates, since there is no emotion that can be applied to a universal claim that x is y. And, the aff cannot prove the resolution true since statements like the aff are not truth apt but expressions. 2. Even if not, desires are predicated on pursuing an individual course of action. Ought statements do not make sense and are counter-intuitive because individuals have an overwhelming emotion against decisions out of their control. Cain 15 Cain, George. "Needs, Desires, Fears, and Freedom." The Downtown Review. Vol. 1. Iss. 1 (2015). Available at: http://engagedscholarship.csuohio.edu/tdr/vol1/iss1/7 Now, thus far we have discussed how humans are connected by common needs, desires, and fears. What people want most of all is the ability to have control of their persons, their lives, and their circumstances in order to satisfy their needs, fulfill their desires, and eliminate their fears if possible. In order to have such control, people need the freedom to do so. This freedom is commonly known as autonomy. Although what constitutes true autonomy is entirely subjective and varies from person to person, the most general definition of the term is the freedom of the individual to do whatever he wants to do without any hindrances.
10/31/21
1 - NC - Flat Earth
Tournament: Loyola | Round: 6 | Opponent: Harvard Westlake EJ | Judge: Neville Tom 7 Earth is flat – tons of warrants. Anti-Vaccine Scientific Support Arsenal 16 Anti-Vaccine Scientific Support Arsenal, 2-8-2016, "Top Ten Undeniable Proofs the Earth is Flat," FLAT EARTH SCIENCE AND THE BIBLE, https://flatearthscienceandbible.com/2016/02/08/top-ten-undeniable-flat-earth-proofs/ JS 1) The horizon always appears completely flat 360 degrees to the observer, regardless of how high you go up. Any curvature you think you see is from curved airplane windows or Go Pro cameras and fisheye lenses (which NASA loves to use). The reality is that the horizon never curves because we are on an endless plane. On a globe with 25,000 miles in circumference you would see a noticeable disappearance of objects the further they are as they would be leaning away from you and dropping below the constantly curving horizon! 2) The horizon always rises to meet your eye level no matter how high in altitude you go. Even at 20 miles up the horizon rises to meet the observer/camera. This is only physically possible if the earth is a huge "endless" flat plane. 3) The natural physics of water is to find and maintain its level. If Earth were a giant spinning sphere tilting and hurling through space then truly flat, consistently level surfaces would not exist here. There would be a massive bulge of water in the oceans because of the curvature of the earth. If earth was curved and spinning the oceans of water would be flowing down to level and covering land. Some rivers would be impossibly flowing uphill. There would be massive water chaos and flooding! What we would see and experience would be vastly different! But since Earth is in fact an extended flat plane, this fundamental physical property of fluids finding and remaining level is consistent with experience and common sense. The water remains flat because the earth is flat! 
4) If Earth were a ball 25,000 miles in circumference as NASA and modern astronomy claim, spherical trigonometry dictates the surface of all standing water must curve downward an easily measurable 8 inches per mile multiplied by the square of the distance. This means along a 6 mile channel of standing water, the Earth would dip 6 feet on either end from the central peak. Every time such experiments have been conducted, however, standing water has proven to be perfectly level. 5) The sun is much closer than we have been told. It is, in fact, in our atmosphere. You can clearly see that it is not 93 million miles away. Many times you can see the sun's rays shooting out of a cloud forming a triangle. If you follow the rays to their source it will always lead to a place above the clouds. If the sun was truly millions of miles away, all the rays would come in at a straight angle. Also the sun can be seen directly above clouds in some balloon photos, creating a hot spot on the clouds below it and in other photos you can clearly see the clouds dispersing directly underneath the close small sun. 6) If we were living on a spinning globe, airplanes would constantly have to dip their noses down every few minutes to compensate for the curvature of the earth (with a circumference of 25,000 miles the earth would be constantly curving at the speed of an airplane). In reality however, they never do this! They learn how to fly based on a level flat plane. Also if the earth was spinning the airplanes going west would get to their destination much faster since the earth is spinning in the opposite direction. If the atmosphere is spinning with the earth then airplanes flying west would have to fly faster than the earth's spin to reach its destination. In reality, the earth is flat and airplanes just fly level and reach their destination easily because the earth is not moving. 
7) The experiment known as “Airy’s Failure” proved that the stars move relative to a stationary Earth and not the other way around. By first filling a telescope with water to slow down the speed of light inside, then calculating the tilt necessary to get the starlight directly down the tube, Airy failed to prove the heliocentric theory since the starlight was already coming in the correct angle with no change necessary, and instead proved the geocentric model correct. 8) The Michelson-Morley and Sagnac experiments attempted to measure the change in speed of light due to Earth’s assumed motion through space. After measuring in every possible different direction in various locations they failed to detect any significant change whatsoever, again proving the stationary geocentric model. 9) If “gravity” is really a force strong enough to hold the world’s oceans, buildings, people and atmosphere stuck to the surface of a spinning ball, then it is impossible for “gravity” to also simultaneously be weak enough to allow little birds, bugs, and planes to take-off and travel freely unabated in any direction. If “gravity” is strong enough to curve the massive expanse of oceans around a globular Earth, it would be impossible for fish and other creatures to swim through such forcefully held water. 10) Ship captains in navigating great distances at sea never need to factor the supposed curvature of the Earth into their calculations. Both Plane Sailing and Great Circle Sailing, the most popular navigation methods, use plane, not spherical trigonometry, making all mathematical calculations on the assumption that the Earth is perfectly flat. If the Earth were in fact a sphere, such an errant assumption would lead to constant glaring inaccuracies. Plane Sailing has worked perfectly fine in both theory and practice for thousands of years, however, and plane trigonometry has time and again proven more accurate than spherical trigonometry in determining distances across the oceans. 
If the Earth were truly a globe, then every line of latitude south of the equator would have to measure a gradually smaller and smaller circumference the farther south travelled. If, however, the Earth is an extended plane, then every line of latitude south of the equator should measure a gradually larger and larger circumference the farther south travelled. The fact that many captains navigating south of the equator on the assumption of the globular theory have found themselves drastically out of reckoning, more so the farther south they travelled, testifies to the fact that the Earth is not a ball.
Flat earth flips all existing conceptions of science and society at large – this means you go neg on presumption because their presumptions are presumptive
DirtyOldAussie 17 DirtyOldAussie, 4-1-2017, "What are the true implications of a Flat Earth vs Spherical Earth? How else would our thinking change if it really was flat? • r/AskReddit," reddit, *this post was marked serious so it's legit, https://www.reddit.com/r/AskReddit/comments/670rf6/what_are_the_true_implications_of_a_flat_earth_vs/ JS
You'd have to throw away the theory of gravity, special relativity, Newtonian mechanics, conventional astronomy, celestial mechanics, cosmology, and a bunch of other fairly well established structures. Then you'd also have to deal with several worldwide conspiracies involving government, airline pilots, space agencies, astronomers, ships' captains and others.
9/23/21
1 - NC - Fuckshit
Tournament: Loyola | Round: 6 | Opponent: Harvard Westlake EJ | Judge: Neville Tom 6
The following arguments are all drop the debater to deter future abuse
1 The aff debater must say the words "praise the spaghetti monster" once every speech – anything else causes the spaghetti monster to kill us all, which is a reason to reject them
2 They must call me "Kaps" in cross-examination – anything else incentivizes psychological violence, which is a reason to reject them
3 No aff arguments – key to letting me hang out with my friends faster after round
4 No prep time for the aff – key to allow for strategic on-the-fly thinking
5 Don't evaluate the 1AC – key to stop them from reading boring policy positions
6 Reasonability on 1AR shells – 1AR theory is very aff-biased because the 2AR gets to line-by-line every 2NR standard with new answers that never get responded to
7 DTA on 1AR shells – they can blow up blippy 20-second shells in the 2AR, but I have to split my time and can't preempt 2AR spin, which necessitates judge intervention
8 RVIs on 1AR theory – the 1AR being able to spend 20 seconds on a shell and still win forces the 2N to allocate at least 2:30 on the shell, which means RVIs check back time skew
No new 1AR theory paradigm issues – A) new 1AR paradigms moot any 1NC theoretical offense B) introducing them in the aff allows them to be more rigorously tested
9 No overview answers to neg arguments – they must LBL everything – key to stop abusive grouping of arguments
10 Vote for brown people – key to empowering minority populations who are marginalized in America
11 No new 2AR responses – they're infinitely abusive because they save their best args for last and I can't respond
9/23/21
1 - NC - LogCon
Tournament: Loyola | Round: 6 | Opponent: Harvard Westlake EJ | Judge: Neville Tom 10
The burden of the aff is to prove that they are consistent with the logical consequence of the resolution. Prefer this –
1. Text – Oxford Dictionaries defines ought as "used to indicate something that is probable." https://en.oxforddictionaries.com/definition/ought Massa. Ought is "used to express logical consequence," as defined by Merriam-Webster (http://www.merriam-webster.com/dictionary/ought) Massa.
2. Debatability – a) my interp means debates focus on empirics about squo trends rather than irresolvable abstract principles that've been argued for years b) moral oughts cannot guide action due to the is/ought fallacy – we cannot derive moral obligations from what happens in the real world
3. Neg definition choice – anything else kills 1NC strategy since I premised my engagement on the absence of your definition.
Their inherency proves the aff won't happen. Either a) the aff is non-inherent and you vote neg on presumption, or b) it is inherent and it isn't going to happen.
9/23/21
1 - NC - Spaghetti Monster
Tournament: Loyola | Round: 6 | Opponent: Harvard Westlake EJ | Judge: Neville Tom 5
I am The Spaghetti Monster, an Evil Demon from the Nether, and I have one goal: this ballot. I will wreak havoc and stop at nothing until I get this dub, then I will go back to the Nether. I have taken over Kaps' body. Fear me and my threat. No rules will constrain me, as the application of rules, even when justified, is not inherent. Langseth:
This section shows that rules themselves do not determine how they are to be followed. There is nothing, for example, inherent in an arrow that shows us which way it is pointing or directing us to go.2 Similarly, as the above quote shows, there is no means by which it can be known with complete certainty that, in following the arithmetical sequence 0, n, 2n, 3n, 4n... in line with the order "+1," a person is following the intended rule, for he or she may be following an alternative rule that is compatible with the intended rule up to a certain point. There must be something in addition to the rule that directs us in a particular manner and indicates to us that we proceed accordingly. The argument Wittgenstein is making in Section 185 is dependent upon the fact that a rule, in order to be a rule, must be able to be broken. There must be correct and incorrect applications of a rule. The question that arises here is: What determines correct and incorrect application of a rule? Or, what justifies following a rule correctly? If a rule in itself does not show us how we are to follow it, then our interpretation of a rule must also not determine correct use. If interpretation were what determined correct use, there would be no incorrect application of a rule. This is the case because any interpretation can be seen to be in accordance with a rule.
Elizabeth is now under my control; I have hypnotized them during prep time and they are now my puppet. To demonstrate this, I will make them do a couple of things.
In their 1AR, they will make arguments about why you should vote me down and why you should vote them up. (I will also make them say they aren't hypnotized.) But know this: through telepathy, I have learned that their true intention was to lose this round; they planned to forfeit in the 1AR. It appears I didn't need to hypnotize them in the first place. No amount of evidence can ever prove objective knowledge. Searle:
You could have the best possible evidence about other people's behavior and still be mistaken about their mental states. You could have the best possible evidence about the past and still be mistaken about the future. You could have the best possible evidence about your own perceptual experiences and still be mistaken about the external world. This is so because you could be dreaming, having hallucinations, be a brain in a vat, or be deceived systematically by an evil demon. Strange situations, yes, but it is impossible to disprove the potentiality for any of these scenarios.
I have programmed them to think that they are not hypnotized, that they want to win the round, and that they think what I am saying is very silly. But no matter our empirical observations, their intentions are indeterminate. Kant: Immanuel Kant, The Critique of Pure Reason. Translated by J.M.D. Meiklejohn. 1781. Under the heading "Exposition of the Cosmological Idea of Freedom in Harmony with the Universal Law of Natural Necessity." Available online: http://www.gutenberg.org/dirs/etext03/cprrn10.txt SJCPJG
The real morality of actions--their merit or demerit, and even that of our own conduct--is completely unknown to us. Our estimates can relate only to their empirical character. How much is the result of the action of free will, how much is to be ascribed to nature and to blameless error, or to a happy constitution of temperament (merito fortunae), no one can discover, nor, for this reason, determine with perfect justice.
Of course, I have no intention of keeping them as my puppet (I have too many). When they say "I am forfeiting this round, yes this is serious, and this comes before all other arguments. To clarify – I am kicking every single argument I made and asking you to vote for Kaps" and then stop speaking, then they will wake up and you will know they are no longer under my command. Until then, I am the puppet-master.
9/23/21
1 - NC - Util v2
Tournament: Blue Key | Round: 4 | Opponent: Scarsdale KS | Judge: Samantha McLoughlin 3 The standard is act hedonistic util. Prefer – 1 – Pleasure and pain are intrinsic value and disvalue – everything else regresses – robust neuroscience. Blum et al. 18 Kenneth Blum, 1Department of Psychiatry, Boonshoft School of Medicine, Dayton VA Medical Center, Wright State University, Dayton, OH, USA 2Department of Psychiatry, McKnight Brain Institute, University of Florida College of Medicine, Gainesville, FL, USA 3Department of Psychiatry and Behavioral Sciences, Keck Medicine University of Southern California, Los Angeles, CA, USA 4Division of Applied Clinical Research and Education, Dominion Diagnostics, LLC, North Kingstown, RI, USA 5Department of Precision Medicine, Geneus Health LLC, San Antonio, TX, USA 6Department of Addiction Research and Therapy, Nupathways Inc., Innsbrook, MO, USA 7Department of Clinical Neurology, Path Foundation, New York, NY, USA 8Division of Neuroscience-Based Addiction Therapy, The Shores Treatment and Recovery Center, Port Saint Lucie, FL, USA 9Institute of Psychology, Eötvös Loránd University, Budapest, Hungary 10Division of Addiction Research, Dominion Diagnostics, LLC. 
North Kingston, RI, USA 11Victory Nutrition International, Lederach, PA., USA 12National Human Genome Center at Howard University, Washington, DC., USA, Marjorie Gondré-Lewis, 12National Human Genome Center at Howard University, Washington, DC., USA 13Departments of Anatomy and Psychiatry, Howard University College of Medicine, Washington, DC US, Bruce Steinberg, 4Division of Applied Clinical Research and Education, Dominion Diagnostics, LLC, North Kingstown, RI, USA, Igor Elman, 15Department Psychiatry, Cooper University School of Medicine, Camden, NJ, USA, David Baron, 3Department of Psychiatry and Behavioral Sciences, Keck Medicine University of Southern California, Los Angeles, CA, USA, Edward J Modestino, 14Department of Psychology, Curry College, Milton, MA, USA, Rajendra D Badgaiyan, 15Department Psychiatry, Cooper University School of Medicine, Camden, NJ, USA, Mark S Gold 16Department of Psychiatry, Washington University, St. Louis, MO, USA, “Our evolved unique pleasure circuit makes humans different from apes: Reconsideration of data derived from animal studies”, U.S. Department of Veterans Affairs, 28 February 2018, accessed: 19 August 2020, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6446569/, R.S. Pleasure is not only one of the three primary reward functions but it also defines reward. As homeostasis explains the functions of only a limited number of rewards, the principal reason why particular stimuli, objects, events, situations, and activities are rewarding may be due to pleasure. This applies first of all to sex and to the primary homeostatic rewards of food and liquid and extends to money, taste, beauty, social encounters and nonmaterial, internally set, and intrinsic rewards. Pleasure, as the primary effect of rewards, drives the prime reward functions of learning, approach behavior, and decision making and provides the basis for hedonic theories of reward function. 
We are attracted by most rewards and exert intense efforts to obtain them, just because they are enjoyable 10. Pleasure is a passive reaction that derives from the experience or prediction of reward and may lead to a long-lasting state of happiness. The word happiness is difficult to define. In fact, just obtaining physical pleasure may not be enough. One key to happiness involves a network of good friends. However, it is not obvious how the higher forms of satisfaction and pleasure are related to an ice cream cone, or to your team winning a sporting event. Recent multidisciplinary research, using both humans and detailed invasive brain analysis of animals has discovered some critical ways that the brain processes pleasure 14. Pleasure as a hallmark of reward is sufficient for defining a reward, but it may not be necessary. A reward may generate positive learning and approach behavior simply because it contains substances that are essential for body function. When we are hungry, we may eat bad and unpleasant meals. A monkey who receives hundreds of small drops of water every morning in the laboratory is unlikely to feel a rush of pleasure every time it gets the 0.1 ml. Nevertheless, with these precautions in mind, we may define any stimulus, object, event, activity, or situation that has the potential to produce pleasure as a reward. In the context of reward deficiency or for disorders of addiction, homeostasis pursues pharmacological treatments: drugs to treat drug addiction, obesity, and other compulsive behaviors. The theory of allostasis suggests broader approaches - such as re-expanding the range of possible pleasures and providing opportunities to expend effort in their pursuit. 15. 
It is noteworthy, the first animal studies eliciting approach behavior by electrical brain stimulation interpreted their findings as a discovery of the brain’s pleasure centers 16 which were later partly associated with midbrain dopamine neurons 17–19 despite the notorious difficulties of identifying emotions in animals. Evolutionary theories of pleasure: The love connection BO Charles Darwin and other biological scientists that have examined the biological evolution and its basic principles found various mechanisms that steer behavior and biological development. Besides their theory on natural selection, it was particularly the sexual selection process that gained significance in the latter context over the last century, especially when it comes to the question of what makes us “what we are,” i.e., human. However, the capacity to sexually select and evolve is not at all a human accomplishment alone or a sign of our uniqueness; yet, we humans, as it seems, are ingenious in fooling ourselves and others–when we are in love or desperately search for it. It is well established that modern biological theory conjectures that organisms are the result of evolutionary competition. In fact, Richard Dawkins stresses gene survival and propagation as the basic mechanism of life 20. Only genes that lead to the fittest phenotype will make it. It is noteworthy that the phenotype is selected based on behavior that maximizes gene propagation. To do so, the phenotype must survive and generate offspring, and be better at it than its competitors. Thus, the ultimate, distal function of rewards is to increase evolutionary fitness by ensuring the survival of the organism and reproduction. It is agreed that learning, approach, economic decisions, and positive emotions are the proximal functions through which phenotypes obtain other necessary nutrients for survival, mating, and care for offspring. 
Behavioral reward functions have evolved to help individuals to survive and propagate their genes. Apparently, people need to live well and long enough to reproduce. Most would agree that homo-sapiens do so by ingesting the substances that make their bodies function properly. For this reason, foods and drinks are rewards. Additional rewards, including those used for economic exchanges, ensure sufficient palatable food and drink supply. Mating and gene propagation is supported by powerful sexual attraction. Additional properties, like body form, augment the chance to mate and nourish and defend offspring and are therefore also rewards. Care for offspring until they can reproduce themselves helps gene propagation and is rewarding; otherwise, many believe mating is useless. According to David E Comings, as any small edge will ultimately result in evolutionary advantage 21, additional reward mechanisms like novelty seeking and exploration widen the spectrum of available rewards and thus enhance the chance for survival, reproduction, and ultimate gene propagation. These functions may help us to obtain the benefits of distant rewards that are determined by our own interests and not immediately available in the environment. Thus the distal reward function in gene propagation and evolutionary fitness defines the proximal reward functions that we see in everyday behavior. That is why foods, drinks, mates, and offspring are rewarding. There have been theories linking pleasure as a required component of health benefits salutogenesis, (salugenesis). In essence, under these terms, pleasure is described as a state or feeling of happiness and satisfaction resulting from an experience that one enjoys. Regarding pleasure, it is a double-edged sword, on the one hand, it promotes positive feelings (like mindfulness) and even better cognition, possibly through the release of dopamine 22. 
But on the other hand, pleasure simultaneously encourages addiction and other negative behaviors, i.e., motivational toxicity. It is a complex neurobiological phenomenon, relying on reward circuitry or limbic activity. It is important to realize that through the “Brain Reward Cascade” (BRC) endorphin and endogenous morphinergic mechanisms may play a role 23. While natural rewards are essential for survival and appetitive motivation leading to beneficial biological behaviors like eating, sex, and reproduction, crucial social interactions seem to further facilitate the positive effects exerted by pleasurable experiences. Indeed, experimentation with addictive drugs is capable of directly acting on reward pathways and causing deterioration of these systems promoting hypodopaminergia 24. Most would agree that pleasurable activities can stimulate personal growth and may help to induce healthy behavioral changes, including stress management 25. The work of Esch and Stefano 26 concerning the link between compassion and love implicate the brain reward system, and pleasure induction suggests that social contact in general, i.e., love, attachment, and compassion, can be highly effective in stress reduction, survival, and overall health. Understanding the role of neurotransmission and pleasurable states both positive and negative have been adequately studied over many decades 26–37, but comparative anatomical and neurobiological function between animals and homo sapiens appear to be required and seem to be in an infancy stage. Finding happiness is different between apes and humans As stated earlier in this expert opinion one key to happiness involves a network of good friends 38. However, it is not entirely clear exactly how the higher forms of satisfaction and pleasure are related to a sugar rush, winning a sports event or even sky diving, all of which augment dopamine release at the reward brain site. 
Recent multidisciplinary research, using both humans and detailed invasive brain analysis of animals has discovered some critical ways that the brain processes pleasure. Remarkably, there are pathways for ordinary liking and pleasure, which are limited in scope as described above in this commentary. However, there are many brain regions, often termed hot and cold spots, that significantly modulate (increase or decrease) our pleasure or even produce the opposite of pleasure— that is disgust and fear 39. One specific region of the nucleus accumbens is organized like a computer keyboard, with particular stimulus triggers in rows— producing an increase and decrease of pleasure and disgust. Moreover, the cortex has unique roles in the cognitive evaluation of our feelings of pleasure 40. Importantly, the interplay of these multiple triggers and the higher brain centers in the prefrontal cortex are very intricate and are just being uncovered. Desire and reward centers It is surprising that many different sources of pleasure activate the same circuits between the mesocorticolimbic regions (Figure 1). Reward and desire are two aspects pleasure induction and have a very widespread, large circuit. Some part of this circuit distinguishes between desire and dread. The so-called pleasure circuitry called “REWARD” involves a well-known dopamine pathway in the mesolimbic system that can influence both pleasure and motivation. In simplest terms, the well-established mesolimbic system is a dopamine circuit for reward. It starts in the ventral tegmental area (VTA) of the midbrain and travels to the nucleus accumbens (Figure 2). It is the cornerstone target to all addictions. The VTA is encompassed with neurons using glutamate, GABA, and dopamine. The nucleus accumbens (NAc) is located within the ventral striatum and is divided into two sub-regions—the motor and limbic regions associated with its core and shell, respectively. 
The NAc has spiny neurons that receive dopamine from the VTA and glutamate (a dopamine driver) from the hippocampus, amygdala and medial prefrontal cortex. Subsequently, the NAc projects GABA signals to an area termed the ventral pallidum (VP). The region is a relay station in the limbic loop of the basal ganglia, critical for motivation, behavior, emotions and the “Feel Good” response. This defined system of the brain is involved in all addictions –substance, and non –substance related. In 1995, our laboratory coined the term “Reward Deficiency Syndrome” (RDS) to describe genetic and epigenetic induced hypodopaminergia in the “Brain Reward Cascade” that contribute to addiction and compulsive behaviors 3,6,41. Furthermore, ordinary “liking” of something, or pure pleasure, is represented by small regions mainly in the limbic system (old reptilian part of the brain). These may be part of larger neural circuits. In Latin, hedus is the term for “sweet”; and in Greek, hodone is the term for “pleasure.” Thus, the word Hedonic is now referring to various subcomponents of pleasure: some associated with purely sensory and others with more complex emotions involving morals, aesthetics, and social interactions. The capacity to have pleasure is part of being healthy and may even extend life, especially if linked to optimism as a dopaminergic response 42. Psychiatric illness often includes symptoms of an abnormal inability to experience pleasure, referred to as anhedonia. A negative feeling state is called dysphoria, which can consist of many emotions such as pain, depression, anxiety, fear, and disgust. Previously many scientists used animal research to uncover the complex mechanisms of pleasure, liking, motivation and even emotions like panic and fear, as discussed above 43. 
However, as a significant amount of related research about the specific brain regions of pleasure/reward circuitry has been derived from invasive studies of animals, these cannot be directly compared with subjective states experienced by humans. In an attempt to resolve the controversy regarding the causal contributions of mesolimbic dopamine systems to reward, we have previously evaluated the three-main competing explanatory categories: “liking,” “learning,” and “wanting” 3. That is, dopamine may mediate (a) liking: the hedonic impact of reward, (b) learning: learned predictions about rewarding effects, or (c) wanting: the pursuit of rewards by attributing incentive salience to reward-related stimuli 44. We have evaluated these hypotheses, especially as they relate to the RDS, and we find that the incentive salience or “wanting” hypothesis of dopaminergic functioning is supported by a majority of the scientific evidence. Various neuroimaging studies have shown that anticipated behaviors such as sex and gaming, delicious foods and drugs of abuse all affect brain regions associated with reward networks, and may not be unidirectional. Drugs of abuse enhance dopamine signaling which sensitizes mesolimbic brain mechanisms that apparently evolved explicitly to attribute incentive salience to various rewards 45. Addictive substances are voluntarily self-administered, and they enhance (directly or indirectly) dopaminergic synaptic function in the NAc. This activation of the brain reward networks (producing the ecstatic “high” that users seek). Although these circuits were initially thought to encode a set point of hedonic tone, it is now being considered to be far more complicated in function, also encoding attention, reward expectancy, disconfirmation of reward expectancy, and incentive motivation 46. 
The argument about addiction as a disease may be confused with a predisposition to substance and nonsubstance rewards relative to the extreme effect of drugs of abuse on brain neurochemistry. The former sets up an individual to be at high risk through both genetic polymorphisms in reward genes as well as harmful epigenetic insult. Some psychologists, even with all the data, still infer that addiction is not a disease 47. Elevated stress levels, together with polymorphisms (genetic variations) of various dopaminergic genes and the genes related to other neurotransmitters (and their genetic variants), may have an additive effect on vulnerability to various addictions 48. In this regard, Vanyukov et al. 48 suggested, based on review, that whereas the gateway hypothesis does not specify mechanistic connections between "stages" and does not extend to the risks for addictions, the concept of common liability to addictions may be more parsimonious. The latter theory is grounded in genetic theory and supported by data identifying common sources of variation in the risk for specific addictions (e.g., RDS). This commonality has an identifiable neurobiological substrate and plausible evolutionary explanations. Over many years, the controversy over dopamine involvement in "pleasure" especially has led to confusion in separating motivation from actual pleasure (wanting versus liking) 49. We take the position that animal studies cannot provide real clinical information as described by self-reports in humans. As mentioned earlier and in the abstract, on November 23rd, 2017, evidence for our concerns was discovered 50. In essence, although nonhuman primate brains are similar to our own, the disparity between other primates' and humans' cognitive abilities tells us that surface similarity is not the whole story. Sousa et al.'s 50 small study found various differentially expressed genes associated with pleasure-related systems.
Furthermore, the dopaminergic interneurons located in the human neocortex were absent from the neocortex of nonhuman African apes. Such differences in neuronal transcriptional programs may underlie a variety of neurodevelopmental disorders. In simpler terms, the system controls the production of dopamine, a chemical messenger that plays a significant role in pleasure and rewards. The senior author, Dr. Nenad Sestan from Yale, stated: “Humans have evolved a dopamine system that is different than the one in chimpanzees.” This may explain why the behavior of humans is so unique from that of non-human primates, even though our brains are so surprisingly similar, Sestan said: “It might also shed light on why people are vulnerable to mental disorders such as autism (possibly even addiction).” Remarkably, this research finding emerged from an extensive, multicenter collaboration to compare the brains across several species. These researchers examined 247 specimens of neural tissue from six humans, five chimpanzees, and five macaque monkeys. Moreover, these investigators analyzed which genes were turned on or off in 16 regions of the brain. While the differences among species were subtle, there was a remarkable contrast in the neocortices, specifically in an area of the brain that is much more developed in humans than in chimpanzees. In fact, these researchers found that a gene called tyrosine hydroxylase (TH) for the enzyme, responsible for the production of dopamine, was expressed in the neocortex of humans, but not chimpanzees. As discussed earlier, dopamine is best known for its essential role within the brain’s reward system; the very system that responds to everything from sex, to gambling, to food, and to addictive drugs. However, dopamine also assists in regulating emotional responses, memory, and movement. Notably, abnormal dopamine levels have been linked to disorders including Parkinson’s, schizophrenia and spectrum disorders such as autism and addiction or RDS. 
Nora Volkow, the director of NIDA, pointed out that one alluring possibility is that the neurotransmitter dopamine plays a substantial role in humans' ability to pursue various rewards that are perhaps months or even years away in the future. This same idea has been suggested by Dr. Robert Sapolsky, a professor of biology and neurology at Stanford University. Dr. Sapolsky cited evidence that dopamine levels rise dramatically in humans when we anticipate potential rewards that are uncertain and even far off in our futures, such as retirement or even the possible afterlife. This may explain what often motivates people to work for things that have no apparent short-term benefit 51. In similar work, Volkow and Bale 52 proposed a model in which dopamine can favor NOW processes through phasic signaling in reward circuits or LATER processes through tonic signaling in control circuits. Specifically, they suggest that through its modulation of the orbitofrontal cortex, which processes salience attribution, dopamine also enables shifting from NOW to LATER, while its modulation of the insula, which processes interoceptive information, influences the probability of selecting NOW versus LATER actions based on an individual's physiological state. This hypothesis further supports the concept that disruptions along these circuits contribute to diverse pathologies, including obesity and addiction or RDS.
2 – No intent-foresight distinction – if I foresee a consequence, then it becomes part of my deliberation since it's intrinsic to my action. No intent-foresight distinction for states. Enoch 07 Enoch, D., The Faculty of Law, The Hebrew University, Mount Scopus Campus, Jerusalem. (2007). INTENDING, FORESEEING, AND THE STATE. Legal Theory, 13(02).
doi:10.1017/s1352325207070048 https://www.cambridge.org/core/journals/legal-theory/article/intending-foreseeing-and-the-state/76B18896B94D5490ED0512D8E8DC54B2
The general difficulty of the intending-foreseeing distinction here stemmed, you will recall, from the feeling that attempting to pick and choose among the foreseen consequences of one's actions those one is more and those one is less responsible for looks more like the preparation of a defense than like a genuine attempt to determine what is to be done. Hiding behind the intending-foreseeing distinction seems like an attempt to evade responsibility, and so thinking about the distinction in terms of responsibility serves to reduce even further the plausibility of attributing to it intrinsic moral significance. This consideration—however weighty in general—seems to me very weighty when applied to state action and to the decisions of state officials.
For perhaps it may be argued that individuals are not required to undertake a global perspective, one that equally takes into account all foreseen consequences of their actions. Perhaps, in other words, individuals are entitled to (roughly) settle for having a good will, and beyond that let chips fall where they may. But this is precisely what stateswomen and statesmen—and certainly states—are not entitled to settle for. In making policy decisions, it is precisely the global (or at least statewide, or nationwide, or something of this sort) perspective that must be undertaken. Perhaps, for instance, an individual doctor is entitled to give her patient a scarce drug without thinking about tomorrow’s patients (I say “perhaps” because I am genuinely not sure about this), but surely when a state committee tries to formulate rules for the allocation of scarce medical drugs and treatments, it cannot hide behind the intending-foreseeing distinction, arguing that if it allows the doctor to give the drug to today’s patient, the death of tomorrow’s patient is merely foreseen and not intended. When making a policy decision, this is clearly unacceptable. Or think about it this way (I follow Daryl Levinson here): perhaps restrictions on the responsibility of individuals are justified because individuals are autonomous, because much of the value in their lives comes from personal pursuits and relationships that are possible only if their responsibility for what goes on in the (more impersonal) world is restricted. But none of this is true of states and governments. They have no special relationships and pursuits, no personal interests, no autonomous lives to lead in anything like the sense in which these ideas are plausible when applied to individual persons. 
So there is no reason to restrict the responsibility of states in anything like the way the responsibility of individuals is arguably restricted. States and state officials have much more comprehensive responsibilities than individuals do. Hiding behind the intending-foreseeing distinction thus more clearly constitutes an evasion of responsibility in the case of the former. So the evading-responsibility worry has much more force against the intending-foreseeing distinction when applied to state action than elsewhere. 3 – Actor spec – governments lack wills or intentions and inevitably deal with tradeoffs – outweighs because agents have differing obligations. 4 – No act-omission distinction – a) choosing not to act is an action in and of itself since you had to make an active decision to omit; walking past a drowning baby and choosing not to save it is a cognitive decision you were faced with and you actively decided to keep walking b) warranting a distinction gives agents the permissible choice of omitting from any ethical action since omissions lack culpability. 
5 No calc indicts – a) no philosophy actually says that consequences don’t matter at all, since otherwise it would indict every theory – they all use causal events to understand how their ethics have worked in the past and to justify their premises b) we don’t need consequences – winning hedonism proves we’re the only one with impacts to it, which means risk of offense framing is sufficient c) they’re blippy nibs that give the aff an unfair advantage since they only have to win one while we have to beat them all – voting issue for fairness 6 Extinction first – a) Forecloses future improvement – we can never improve society because our impact is irreversible, which proves moral uncertainty b) Turns suffering – mass death causes suffering because people can’t get access to resources and basic necessities c) Objectivity – body count is the most objective way to calculate impacts because comparing suffering is unethical 7 Not wearing a Halloween costume is a voting issue – prevents us from celebrating Halloween, which outweighs on fun, which is the only portable impact.
10/30/21
1 - Paradoxes
Tournament: Loyola | Round: 2 | Opponent: Dulles TY | Judge: Joseph Georges Permissibility and presumption negate 1 Obligations – the resolution indicates the affirmative has to prove an obligation, and permissibility would deny the existence of an obligation 2 Falsity – statements are more often false than true because proving one part of the statement false disproves the entire statement. Presuming all statements are true creates contradictions, which would be ethically bankrupt. 3 Negation Theory – “to negate” means “to deny the truth of,” which means any argument that renders the resolution false is sufficient to negate. 4 Trichotomy – there is a trichotomy between obligation, prohibition, and permissibility; proving one disproves the other two because they are three intertwined moral terms which coexist with each other. Outweighs because it interacts with each term of moral obligation. 5 Status Quo Bias – you should default to a world where you don’t make change because making change assumes that world will be better than the current world
1 The holographic principle is the most reasonable conclusion Stromberg 15 Joseph Stromberg – “Some physicists believe we're living in a giant hologram — and it's not that far-fetched” https://www.vox.com/2015/6/29/8847863/holographic-principle-universe-theory-physics Vox. June 29th, 2015. War Room Debate AI Some physicists actually believe that the universe we live in might be a hologram. The idea isn't that the universe is some sort of fake simulation out of The Matrix, but rather that even though we appear to live in a three-dimensional universe, it might only have two dimensions. It's called the holographic principle. The thinking goes like this: Some distant two-dimensional surface contains all the data needed to fully describe our world — and much like in a hologram, this data is projected to appear in three dimensions. Like the characters on a TV screen, we live on a flat surface that happens to look like it has depth. It might sound absurd. But when physicists assume it's true in their calculations, all sorts of big physics problems — such as the nature of black holes and the reconciling of gravity and quantum mechanics — become much simpler to solve. In short, the laws of physics seem to make more sense when written in two dimensions than in three. "It's not considered some wild speculation among most theoretical physicists," says Leonard Susskind, the Stanford physicist who first formally defined the idea decades ago. "It's become a working, everyday tool to solve problems in physics." But there's an important distinction to be made here. There's no direct evidence that our universe actually is a two-dimensional hologram. These calculations aren't the same as a mathematical proof. Rather, they're intriguing suggestions that our universe could be a hologram. And as of yet, not all physicists believe we have a good way of testing the idea experimentally. 
2 Paradox of tolerance- to be completely open to the aff we must exclude perspectives that wouldn’t be open to the aff which means it’s impossible to have complete tolerance for an idea since that tolerance relies on excluding a perspective. 3 Decision Making Paradox- in order to decide to do the affirmative we need a decision-making procedure to enact it, vote for it, and to determine it is a good decision. But to choose a decision-making procedure requires another meta-level decision-making procedure, leading to infinite regress since every decision requires another decision to choose how to make a decision. 4 The Place Paradox- if everything exists in a place in space-time, that place must also have a place that it exists, and that larger place needs a larger location, to infinity. Therefore, identifying ought statements is impossible since those statements assume acting on objects in the space-time continuum. 5 Grain Paradox- A single grain of millet makes no sound upon falling, but a thousand grains make a sound. But a thousand nothings cannot make something, which means the physical world is paradoxical. 6 Arrow’s Paradox- If we divide time into discrete 0-duration slices, no motion is happening in each of them, so taking them all as a whole, motion is impossible. 7 Bonini’s Paradox- As a model of a complex system becomes more complete, it becomes less understandable; for it to be more understandable it must be less complete and therefore less accurate. Therefore no philosophical or political model can be useful. 10 Good Samaritan Paradox -- affirming negates because in order to say you want to fix x problem, that assumes x problem exists in the first place; thus eliminating nukes presupposes nukes exist, which means negation is a prior question 11 Zeno’s Paradox – motion is impossible, because moving halfway leaves half more and half more, which is infinitely regressive and means elimination of arsenals is logically impossible
9/4/21
1 - Paradoxes v2
Tournament: Loyola | Round: 3 | Opponent: Bishops AC | Judge: Abhishek Rao 6 Permissibility and presumption flow neg: A) Probability – there is one way for a statement to be true and an infinite number of ways for it to be false B) If I knew nothing about P I would presume both P and not-P true, a contradiction C) if every action is permissible then ought-not statements like the resolution are incoherent D) All moral truths require absolute certainty – 1) absent certainty we can always ask why should I, making our obligation non-constitutive.
1 The holographic principle is the most reasonable conclusion Stromberg 15 Joseph Stromberg – “Some physicists believe we're living in a giant hologram — and it's not that far-fetched” https://www.vox.com/2015/6/29/8847863/holographic-principle-universe-theory-physics Vox. June 29th, 2015. War Room Debate AI Some physicists actually believe that the universe we live in might be a hologram. The idea isn't that the universe is some sort of fake simulation out of The Matrix, but rather that even though we appear to live in a three-dimensional universe, it might only have two dimensions. It's called the holographic principle. The thinking goes like this: Some distant two-dimensional surface contains all the data needed to fully describe our world — and much like in a hologram, this data is projected to appear in three dimensions. Like the characters on a TV screen, we live on a flat surface that happens to look like it has depth. It might sound absurd. But when physicists assume it's true in their calculations, all sorts of big physics problems — such as the nature of black holes and the reconciling of gravity and quantum mechanics — become much simpler to solve. In short, the laws of physics seem to make more sense when written in two dimensions than in three. "It's not considered some wild speculation among most theoretical physicists," says Leonard Susskind, the Stanford physicist who first formally defined the idea decades ago. "It's become a working, everyday tool to solve problems in physics." But there's an important distinction to be made here. There's no direct evidence that our universe actually is a two-dimensional hologram. These calculations aren't the same as a mathematical proof. Rather, they're intriguing suggestions that our universe could be a hologram. And as of yet, not all physicists believe we have a good way of testing the idea experimentally. 
2 Paradox of tolerance- to be completely open to the aff we must exclude perspectives that wouldn’t be open to the aff which means it’s impossible to have complete tolerance for an idea since that tolerance relies on excluding a perspective. 3 Decision Making Paradox- in order to decide to do the affirmative we need a decision-making procedure to enact it, vote for it, and to determine it is a good decision. But to choose a decision-making procedure requires another meta-level decision-making procedure, leading to infinite regress since every decision requires another decision to choose how to make a decision. 4 The Place Paradox- if everything exists in a place in space-time, that place must also have a place that it exists, and that larger place needs a larger location, to infinity. Therefore, identifying ought statements is impossible since those statements assume acting on objects in the space-time continuum. 5 Grain Paradox- A single grain of millet makes no sound upon falling, but a thousand grains make a sound. But a thousand nothings cannot make something, which means the physical world is paradoxical. 6 Arrow’s Paradox- If we divide time into discrete 0-duration slices, no motion is happening in each of them, so taking them all as a whole, motion is impossible.
9/25/21
1 - Reps - Deleuze
Tournament: Blue Key | Round: 6 | Opponent: American Heritage Broward JA | Judge: Tajaih Robinson 5 Their scholarship is hateful and a reason to lose the round—their author endorsed pedophilia and actively advocated against the age of consent law. Doezema 18, Marie Doezema (Parisian Journalist). “France, Where Age of Consent Is Up for Debate.” The Atlantic, 10 March 2018. https://www.theatlantic.com/international/archive/2018/03/frances-existential-crisis-over-sexual-harassment-laws/550700/ WWDH After May 1968, French intellectuals would challenge the state’s authority to protect minors from sexual abuse. In one prominent example, on January 26, 1977, Le Monde, a French newspaper, published a petition signed by the era’s most prominent intellectuals—including Jean-Paul Sartre, Simone de Beauvoir, Gilles Deleuze, Roland Barthes, Philippe Sollers, André Glucksmann and Louis Aragon—in defense of three men on trial for engaging in sexual acts with minors. “French law recognizes in 13- and 14-year-olds a capacity for discernment that it can judge and punish,” the petition stated, “But it rejects such a capacity when the child's emotional and sexual life is concerned.” Furthermore, the signatories argued, children and adolescents have the right to a sexual life: “If a 13-year-old girl has the right to take the pill, what is it for?” It’s unclear what impact, if any, the petition had. The defendants were sentenced to five years in prison, but did not serve their full sentences. Drop the debater—academic spaces have way too many sympathizers who ignore violence against children, and every act must be challenged in the most unflinching terms because anything else reinforces the epistemic bias in favor of rationalizing disgusting behavior. Grant 18, Alec Grant (Independent Scholar, retired from the University of Brighton where he was a Reader in Narrative Mental Health). “Sanitizing Academics and Damaged Lives” Mad In The UK, 12 April 2018. 
https://www.madintheuk.com/2018/12/sanitizing-academics-and-damaged-lives/WWDH Academics who sympathize with paedophilia constitute its intellectual public relations arm. Their role is to make child-adult sex presentable, more acceptable to the public, fit for polite society, sugar-coated, glossed with a scholarly veneer, sanitized. Snapshots of sanitizing academic activity from the last 40 years show how this seeps into and contaminates public policy, education and practice in insidious ways. This is done via the workings of power, privilege, perverse cronyism, and, as Pilgrim (2018) argues, as a result of widespread moral stupor and denial. It’s astonishing that this happens in the face of the psychological and development features of complex post-trauma which are often a consequence of child sexual abuse. By pathologizing adult survivors, often with the ‘Borderline Personality Disorder’ (BPD) tag, mainstream psychiatric business-as-usual plays out its role in suppressing the truth about the consequences of paedophilia among adult survivors. Pilgrim (2018) reminds us that care and mutuality are core ethical features of all sexual practices. As someone who was for many years associated with cognitive therapy, I’m interested in ‘cognitive, or thought distortions’, which are used by people in rationalising their behaviour in self-serving ways. We know from Pilgrim and many other writers, researchers and practitioners about the rationalisations of perpetrators of child sexual abuse and exploitation. They include: Children are not victims but willing participants; They want it; They enjoy it; It’s about friendship; It’s about love; It helps children develop and mature. According to Pilgrim (2018), the ‘heyday’ period of academic versions of such rationalisations was the 1970s. 1977 was the year of an unsuccessful lobby by French intellectuals to defend intergenerational sex. 
Included among these were the otherwise well-respected philosophers Jean-Paul Sartre, Simone de Beauvoir, Jacques Derrida, Roland Barthes and Michel Foucault. These figures were at the forefront of the use of academic authority to lobby governments to liberalise and decriminalise adult-child sexual contact. In 1978, Foucault took part in a France-Culture broadcast with two other gay theorists, Hocquengham and Danet, to discuss the legal aspects of sex between adults and children. They wanted a repeal of the law preventing this because they took the view that in a liberal (they really meant libertarian) society, sexual preferences generally should not be the business of the law. Foucault, Hocquengham and Danet made the following assertions: that children can, and have the capacity to, consent to such relations without being coerced into doing so; that abuse and post-abuse trauma isn’t real; that the law is part of an oppressive and repressive heteronormative social control discourse which unfairly targets sexual minorities; that children don’t constitute a vulnerable population; that children can and are capable of making the first move in seducing adults (they introduced here the category of ‘the seducing child’); that the laws against sexual relations between children and adults actually function to protect children from their own desires, making them an oppressed and repressed group; that – in the language of the sociologist Stanley Cohen – international public horror about sexual relations between adults and children is a form of moral panic which feeds into constructing the ‘paedophile’ as a folk devil, in turn provoking public vigilantism; that sex between adults and children is actually a trivial matter when compared with ‘real crimes’ such as the murder of old ladies; that many members of the judiciary and other authority figures and groups don’t actually believe paedophilia to be a crime; and that consent should be a private contractual matter between the 
adult and the child. Fast forward to 1981. The Paedophile Information Exchange (PIE) has been active for seven years. This was a pro-paedophile activist group, founded in the UK in 1974 and officially disbanded in 1984. The group, an international organisation of people who traded in obscene material, campaigned for the abolition of the age of consent. Dr Brian Taylor, the research director and member of PIE, and sociology lecturer at the University of Sussex, produced the controversial book Perspectives on Paedophilia, which had the aim of enlightening social workers and youth workers about the benefits of paedophilia. Taylor, who identified as gay, advocated ‘guilt-free pederasty’ (sexual relations between two males, one of whom is a minor). He argued that people generally are hostile to paedophilia only because they don’t understand it, and if they did they wouldn’t be so against it. So it was simply a matter of clearing up prejudice and ignorance.
10/31/21
1 - Reps - Pan
Tournament: Palm Classic | Round: 1 | Opponent: Harker SY | Judge: Felicity Park 5 1 The aff’s portrayal of China as a threat legitimizes otherizing power politics which transforms our conceptions of China into a social reality Pan (Chengxin, Lecturer in International Relations, School of International and Political Studies at Deakin University, “The "China Threat" in American Self-Imagination”, Alternatives 29 (2004), 305-331, Galileo). 2004. China and its relationship with the United States has long been a fascinating subject of study in the mainstream U.S. international relations community. This is reflected, for example, in the current heated debates over whether China is primarily a strategic threat to or a market bonanza for the United States and whether containment or engagement is the best way to deal with it. While U.S. China scholars argue fiercely over "what China precisely is," their debates have been underpinned by some common ground, especially in terms of a positivist epistemology. Firstly, they believe that China is ultimately a knowable object, whose reality can be, and ought to be, empirically revealed by scientific means. For example, after expressing his dissatisfaction with often conflicting Western perceptions of China, David M. Lampton, former president of the National Committee on U.S.-China Relations, suggests that "it is time to step back and look at where China is today, where it might be going, and what consequences that direction will hold for the rest of the world." Like many other China scholars, Lampton views his object of study as essentially "something we can stand back from and observe with clinical detachment." Secondly, associated with the first assumption, it is commonly believed that China scholars merely serve as "disinterested observers" and that their studies of China are neutral, passive descriptions of reality. 
And thirdly, in pondering whether China poses a threat or offers an opportunity to the United States, they rarely raise the question of "what the United States is." That is, the meaning of the United States is believed to be certain and beyond doubt. I do not dismiss altogether the conventional ways of debating China. It is not the purpose of this article to venture my own "observation" of "where China is today," nor to join the "containment" versus "engagement" debate per se. Rather, I want to contribute to a novel dimension of the China debate by questioning the seemingly unproblematic assumptions shared by most China scholars in the mainstream IR community in the United States. To perform this task, I will focus attention on a particularly significant component of the China debate; namely, the "China threat" literature. More specifically, I want to argue that U.S. conceptions of China as a threatening other are always intrinsically linked to how U.S. policymakers/mainstream China specialists see themselves (as representatives of the indispensable, security-conscious nation, for example). As such, they are not value-free, objective descriptions of an independent, preexisting Chinese reality out there, but are better understood as a kind of normative, meaning-giving practice that often legitimates power politics in U.S.-China relations and helps transform the "China threat" into social reality. In other words, it is self-fulfilling in practice, and is always part of the "China threat" problem it purports merely to describe. In doing so, I seek to bring to the fore two interconnected themes of self/other constructions and of theory as practice inherent in the "China threat" literature—themes that have been overridden and rendered largely invisible by those common positivist assumptions. These themes are of course nothing new nor peculiar to the "China threat" literature. 
They have been identified elsewhere by critics of some conventional fields of study such as ethnography, anthropology, oriental studies, political science, and international relations. Yet, so far, the China field in the West in general and the U.S. "China threat" literature in particular have shown remarkable resistance to systematic critical reflection on both their normative status as discursive practice and their enormous practical implications for international politics. It is in this context that this article seeks to make a contribution.
2 The discursive construction of China as a threat makes violence inevitable – turns case Pan: (Chengxin, Lecturer in International Relations, School of International and Political Studies at Deakin University, “The "China Threat" in American Self-Imagination”, Alternatives 29 (2004), 305-331, Galileo). 2004.
I have argued above that the "China threat" argument in mainstream U.S. IR literature is derived, primarily, from a discursive construction of otherness. This construction is predicated on a particular narcissistic understanding of the U.S. self and on a positivist-based realism, concerned with absolute certainty and security, a concern central to the dominant U.S. self-imaginary. Within these frameworks, it seems imperative that China be treated as a threatening, absolute other since it is unable to fit neatly into the U.S.-led evolutionary scheme or guarantee absolute security for the United States, so that U.S. power preponderance in the post-Cold War world can still be legitimated. Not only does this reductionist representation come at the expense of understanding China as a dynamic, multifaceted country but it leads inevitably to a policy of containment that, in turn, tends to enhance the influence of realpolitik thinking, nationalist extremism, and hard-line stance in today's China. Even a small dose of the containment strategy is likely to have a highly dramatic impact on U.S.-China relations, as the 1995-1996 missile crisis and the 2001 spy-plane incident have vividly attested. In this respect, Chalmers Johnson is right when he suggests that "a policy of containment toward China implies the possibility of war, just as it did during the Cold War vis-a-vis the former Soviet Union. The balance of terror prevented war between the United States and the Soviet Union, but this may not work in the case of China." For instance, as the United States presses ahead with a missile defence shield to "guarantee" its invulnerability from rather unlikely sources of missile attacks, it would be almost certain to intensify China's sense of vulnerability and compel it to expand its current small nuclear arsenal so as to maintain the efficiency of its limited deterrence. 
In consequence, it is not impossible that the two countries, and possibly the whole region, might be dragged into an escalating arms race that would eventually make war more likely. Neither the United States nor China is likely to be keen on fighting the other. But as has been demonstrated, the "China threat" argument, for all its alleged desire for peace and security, tends to make war preparedness the most "realistic" option for both sides. At this juncture, worthy of note is an interesting comment made by Charlie Neuhauser, a leading CIA China specialist, on the Vietnam War, a war fought by the United States to contain the then-Communist "other." Neuhauser says, "Nobody wants it. We don't want it, Ho Chi Minh doesn't want it; it's simply a question of annoying the other side." And, as we know, in an unwanted war some fifty-eight thousand young people from the United States and an estimated two million Vietnamese men, women, and children lost their lives. Therefore, to call for a halt to the vicious circle of theory as practice associated with the "China threat" literature, tinkering with the current positivist-dominated U.S. IR scholarship on China is no longer adequate. Rather, what is needed is to question this un-self-reflective scholarship itself, particularly its connections with the dominant way in which the United States and the West in general represent themselves and others via their positivist epistemology, so that alternative, more nuanced, and less dangerous ways of interpreting and debating China might become possible. That’s a voting issue – question their reps before the passage of the plan – they do not get to weigh the case
2/13/22
1 - TT v1
Tournament: Loyola | Round: 3 | Opponent: Bishops AC | Judge: Abhishek Rao The role of the ballot is to determine whether the resolution is a true or false statement – anything else moots 7 minutes of the nc – their framing collapses since you must say it is true that a world is better than another before you adopt it. They justify substantive skews since there will always be a more correct side of the issue, but we compensate for flaws in the lit. Scalar methods like comparison increase intervention – the persuasion of certain DAs or advantages sways decisions – the T/F binary is descriptive and technical. Negate because either the aff is true, meaning it’s bad for us to clash with it because it turns us into Fake News people, OR it’s not, meaning it’s a lie that you can’t vote on. Ethics a prioris: 1st – even worlds framing requires ethics that begin from a priori principles like reason or pleasure, so we control the internal link to functional debates. The ballot says vote aff or neg based on a topic – five dictionaries define to negate as to deny the truth of and affirm as to prove true, so it's constitutive and jurisdictional.
9/25/21
1 - TT v2
Tournament: Loyola | Round: 6 | Opponent: Harvard Westlake EJ | Judge: Neville Tom 2 The role of the ballot is to determine whether the resolution is a true or false statement – anything else moots 7 minutes of the nc – their framing collapses since you must say it is true that a world is better than another before you adopt it. They justify substantive skews since there will always be a more correct side of the issue, but we compensate for flaws in the lit. Scalar methods like comparison increase intervention – the persuasion of certain DAs or advantages sways decisions – the T/F binary is descriptive and technical. Negate because either the aff is true, meaning it’s bad for us to clash with it because it turns us into Fake News people, OR it’s not, meaning it’s a lie that you can’t vote on. Ethics a prioris: 1st – even worlds framing requires ethics that begin from a priori principles like reason or pleasure, so we control the internal link to functional debates. The ballot says vote aff or neg based on a topic – five dictionaries define to negate as to deny the truth of and affirm as to prove true, so it's constitutive and jurisdictional. I denied the truth of the resolution by disagreeing with the aff, which means I've met my burden.
9/23/21
1 - TT v3
Tournament: Lex | Round: 2 | Opponent: Bridgeland PT | Judge: Brett Cryan The ROB is to determine the truth or falsity of the resolution – 1 Textuality – five dictionaries define to negate as to deny the truth of and affirm as to prove true. That OW – a) Jurisdiction – judges are constrained through their constitutive purpose, which proves it’s a side constraint on what arguments they can vote on. b) Predictability – people base prep off the pregiven terms in the resolution. 2 Isomorphism – alternative ROBs aren’t binary true/false because of topic lit biases, which increases intervention and takes the debate out of the hands of debaters. 3 Inclusion – any offense functions under it as long as debaters implicate their positions to prove the truth or falsity of the resolution, which maximizes substantive clash through ground and is a sequencing question for engaging in debate. 4 Logic – any statement relies on a conception of truth to function – for example, “I’m hungry” is the same as “it’s true that I’m hungry” – logic is a litmus test for any argument and proves your ROB collapses since it relies on truth.
1/18/22
1 - Theory - CSA
Tournament: Loyola | Round: 3 | Opponent: Bishops AC | Judge: Abhishek Rao 2 Interpretation: If the affirmative defends anything other than the topic then they must provide a counter-solvency advocate for their specific advocacy in the 1AC. (To clarify, you must have an author that states we should not do your aff, insofar as the aff is not a whole res phil aff) Violation: Standards:
1. Fairness – This is a litmus test for determining whether your aff is fair – a) Limits – there are infinite things you could defend outside the exact text of the resolution, which pushes you to the limits of contestable arguments; even if your interp of the topic is better, the only way to verify if it’s substantively fair is proof of counter-arguments. Nobody knows your aff better than you, so if you can’t find an answer, I can’t be expected to. Our interp narrows out trivially true advocacies since counter-solvency advocates ensure equal division of ground for both sides. b) Shiftiness – Having a counter-solvency advocate helps us conceptualize what their advocacy is and how it’s implemented. Intentionally ambiguous affirmatives we don’t know much about can’t spike out of DAs and CPs if they have an advocate that delineates these things. 2. Research – Forces the aff to go to the other side of the library and contest their own viewpoints, as well as encouraging in-depth research about their own position. Having one also encourages more in-depth answers since I can find responses. Key to education since we definitionally learn more about positions when we contest our own.
9/25/21
1 - Theory - CX Checks Bad
Tournament: Lex | Round: 2 | Opponent: Bridgeland PT | Judge: Brett Cryan 1 Interpretation – The Aff must defend theory interpretations and arguments unconditionally as presented in the 1AC. In other words, the aff may not run CX checks Violation – they said the neg should check interps in CX under the advocacy text The standard is Theory recourse – CX checks 1 Cause sidestepping, encouraging you to have hidden abusive args since I either call you out on it in CX and you kick it or I concede it and you win, which makes debates innocuous 2 Cause ambiguity – what constitutes a sufficient "check" is unclear. Even if we isolate the abusive practice in CX, the aff can still go for the arg and establish new parameters for checking 3 Prep skew – even if you don’t kick the abuse, you get extra time to prep my interp since you know what I’ll indict. That gives you nearly double the time to prep and creates irreciprocal burdens. Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate DTD – it’s key to norm set and deter future abuse Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm No RVIs – A – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly B – it’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments
1/18/22
1 - Theory - Cite Box
Tournament: Valley RR | Round: 5 | Opponent: Murphy Independent AW | Judge: Jared Burke, Conal Thomas-McGinnis 3 A Interpretation: Debaters must disclose all constructive positions in cite boxes on the 20-21 NDCA LD wiki. To clarify, they can’t say check open source. Debatecoaches no date Violation: SS Standards: 1 Pre-round prep: prep becomes atrocious when you make people sift through 20 word docs to figure out which links you’re reading and which impacts to prep. Discourages tricks—otherwise you can just hide a bunch of blippy arguments. Also key for inclusion since disadvantaged people have computers more prone to lag, and even 3 or 4 docs can crash the program for them—outweighs since accessibility is a multiplier for their impacts. Disclosing in cite boxes solves—people can quickly get a summary of your position and go to open source if they need more information 2 Wiki rules—the wiki tells you to disclose like everyone else. Freeloading is bad and o/w—it cultivates passive citizenship and turns any hope of actually solving their impacts, which is a voter for education. 1NC theory first - 1 Abuse was self-inflicted - they started the chain of abuse and forced me down this strategy 2 Norming - we have more speeches to norm over whether it’s a good idea 3 It was introduced first so it comes lexically prior.
9/25/21
1 - Theory - Coastline Spec
Tournament: Blue Key | Round: 6 | Opponent: American Heritage Broward JA | Judge: Tajaih Robinson 2 Interpretation: The affirmative must specify a measurement unit to measure the coastline of States and what territories are included. Weiner 18: Sophie Weiner, “Why it's Impossible to Accurately Measure a Coastline” march 3, 2018. https://www.popularmechanics.com/science/environment/a19068718/why-its-impossible-to-accurately-measure-a-coastline/.LHP AV Try measuring the coastline of the United States, and it's almost guaranteed you'll find a different answer than anyone before you. Even official sources like the Congressional Research Institute, the CIA, and NOAA came up with wildly different answers (29,093 miles, 19,924 miles, and 95,471 miles, respectively). How could their measurements be so different? Meet the Coastline Paradox. As explained in this video from RealLifeLore, the Coastline Paradox has been vexing researchers and cartographers since its discovery by mathematician Lewis Fry Richardson in 1951. The explanation for the paradox is surprisingly simple: unlike human-drawn geometrical shapes, a coastline is full of nooks and crannies made by nature. The more one zooms in on the coastline, the more these inconsistencies multiply. This means that the length of a coastline is completely dependent on what size of measurement unit you use to study it. For example, the coastline of the UK is only 2,800 kilometers long when measured in lengths of 100 kilometers. Shrink that to 50 kilometer measurements and suddenly the coastline is 3,400 kilometers. Coastlines are like fractals--the further you zoom in, the more complex it gets (famed fractal researcher Benoit Mandelbrot expanded Richardson's work on the paradox). If you were to try to measure a coastline on an atomic level, the length would approach infinity. 
Violation: they didn’t Vote Neg: 1 Resolvability – there’s no way to determine whether arguments apply because there’s no basis for determining whether it’s part of a State’s territory or under their jurisdiction – that’s an impact – every round needs a winner, or else the judge makes an arbitrary decision 2 Engagement – a) the neg can never clash with case because we don’t know whether our args will apply – this is especially true with stuff close to borders – they’ll just shift in the 1AR, pigeonholing us into stale generics that destroy innovative education and quality neg ground
10/31/21
1 - Theory - Combo Shell
Tournament: Valley | Round: 6 | Opponent: Walt Whitman EY | Judge: Breigh Plat 1 Interp: can’t say aff theory is DTD, no RVI, CI, and aff fairness issues come before NC arguments. Violation – Their UV 1 Infinite Abuse - They can just READ a theory shell that’s DTD/no RVI/CI – that means their standard automatically comes before any 1NC standard since aff fairness comes first, and it comes as the highest layer because I can’t weigh between other shells since the aff has the highest fairness advantage. So as long as they read a shell I violate in the 1AR, I will lose Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate DTD – it’s key to norm set and deter future abuse Neg theory is DTD - 1ARs control the direction of the debate because it determines what the 2NR has to go for – DTD allows us some leeway in the round by having some control in the direction Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm No RVIs – A – Going all in on theory kills substance education which outweighs on timeframe B - Discourages checking real abuse which outweighs on norm-setting C – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly D – it’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments E - Kills norm setting since debaters can never admit they’re wrong – outweighs since norm setting is the constitutive purpose of theory F – They are the logic of criminalization that over-punishes people of color for trying to create productive discourse
9/26/21
1 - Theory - Combo Shell v2
Tournament: Blue Key | Round: 6 | Opponent: American Heritage Broward JA | Judge: Tajaih Robinson 1 Interpretation: Debaters may not justify 1AR theory is DTD, no RVI, competing interps, no 2N theory paradigm issues, and it’s the highest layer Violation: it’s all in the underview Standard: Infinite Abuse - their norm justifies the affirmative auto-winning every round since they can read a risk-free 1AR shell with DTD and competing interps which I cannot answer, since they make paradigm issues like “evaluate the theory debate after the 1AR” in the 1AR. And since I don’t have 2N paradigm issues I can’t contest it. Even if I try to uplayer the shell and read meta-theory to get an out in the 2NR I can’t since your shell is the highest layer, nor can I go for paradigm issues like reasonability to gut-check the shell since you denied that as well. Norming is an independent voter since justifying the value of debate necessarily justifies the norms of the activity being good in order for debate to be valuable.
Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate Neg theory is DTD - 1ARs control the direction of the debate because it determines what the 2NR has to go for – DTD allows us some leeway in the round by having some control in the direction Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm No RVIs – A – Going all in on theory kills substance education which outweighs on timeframe B - Discourages checking real abuse which outweighs on norm-setting C – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly D – it’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments E - Kills norm setting since debaters can never admit they’re wrong – outweighs since norm setting is the constitutive purpose of theory F – They are the logic of criminalization that over-punishes people of color for trying to create productive discourse I get 2NR theory – anything else allows them to sever, which kills me.
10/31/21
1 - Theory - Community Spec
Tournament: Voices | Round: Doubles | Opponent: Archbishop Mitty AS | Judge: Quentin Clark, Gordon Krauss, Samantha McLoughlin 7 Interpretation – Debaters must spec their favorite Community character in their first speech – mine is the GOAT Abed Violation – They don’t Standard – Relatability – Community is a show about real-life problems that people can relate to – outweighs on portability because we can discuss solutions to problems such as getting good grades, choosing a career, etc. This isn’t infinitely regressive – it’s an expectation to be able to discuss Community
10/10/21
1 - Theory - Contact Info
Tournament: Lex | Round: 4 | Opponent: Ridge SN | Judge: Daniel Shahab Diaz 1 Interpretation: Debaters must, on the page with their name and the school they attend, disclose their contact information Violation: They didn’t
Prefer 1 Inclusion – Novices would have a way to contact you about your positions and learn from them, and debaters could tell you before round about triggering positions that you’ve read before. Independent voter because inclusion is a gateway issue for debate to occur in the first place 2 Prep Skew - Pre-round disclosure can’t happen if you don’t have a reliable means of contact because I would never know the aff. Proven by this round, where I was unable to contact you until you joined the room. Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate DTD – it’s key to norm set and deter future abuse Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm No RVIs – A – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly B – it’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments
1/15/22
1 - Theory - Disclose Before Flip
Tournament: Palm Classic | Round: Triples | Opponent: Catonsville AT | Judge: James Stuckert, Truman Le, John Boals 1 Interp: The affirmative must correctly tell the negative which aff they will be reading, including any and all changes, within ten minutes of pairings being released. Violation: screenshots –
1 Prep and clash – they force us to spend pre-round prep prepping the wrong aff, which means I’m unprepared to engage - that decks clash and fairness 2 Strat skew - forces us to make a flip decision in the dark since we don't know if the aff is new or one of the 6 on the wiki, and leaves us guessing at whether we'll have prep vs the aff you choose Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate DTD – it’s key to norm set and deter future abuse Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm No RVIs – A – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly B – it’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments
2/13/22
1 - Theory - Disclose Spikes
Tournament: Loyola | Round: 2 | Opponent: Dulles TY | Judge: Joseph Georges 2 Interpretation—Tommy must disclose all preemptive interpretations and offensive theory spikes 30 minutes before the round Violation—they didn’t
Vote neg for clash – they skirt meaningful discussion and debate by surprising us with a bunch of blippy spikes and 1ac shells which kills substantive education and clash – outweighs because it’s the only unique benefit of debate
9/4/21
1 - Theory - Disclose Tournament Names
Tournament: Apple Valley | Round: 6 | Opponent: Eagan AE | Judge: Rex Evans 3 Interp: Debaters must disclose tournaments on the 2021-2022 NDCA LD wiki under the actual name of the tournament on tabroom for every round at said tournament. Violation: The name is Mid America Cup on tab https://www.tabroom.com/index/tourn/index.mhtml?tourn_id=20873
1 The standard is inclusion - they make debate inaccessible to novices or small schools who compete on the circuit but don’t have access to resources or have knowledge of debate lingo to know the shorthand nicknames for tournaments. Two internal links to accessibility - a) lets debaters see on tab whether you won or lost going for specific strategies or hitting specific strategies, letting debaters adapt around that and b) lets debaters see what speaks judges gave to help them see how good you were at going for x argument. Independently links into reciprocity since if I disclosed one way and you didn’t, you had the advantage in this round. Outweighs - none of their standards matter if debaters can’t access them, and means reasonability is uniquely wrong since even a 1% risk of exclusion is bad – you obviously don’t say some level of exclusion is justified
11/6/21
1 - Theory - Hedge
Tournament: Loyola | Round: 2 | Opponent: Dulles TY | Judge: Joseph Georges 3 NC theory first - 1 They started the chain of abuse and forced me down this strategy 2 We have more speeches to norm over it 3 It was introduced first so it comes lexically prior. Neg abuse outweighs Aff abuse – 1 Infinite prep time before round to frontline 2 2AR judge psychology 3 1st and last speech 4 Infinite perms and uplayering in the 1AR. Reasonability on 1AR shells – 1AR theory is very aff-biased because the 2AR gets to line-by-line every 2NR standard with new answers that never get responded to DTA on 1AR shells - They can blow up blippy 20 second shells in the 2AR but I have to split my time and can’t preempt 2AR spin which necessitates judge intervention RVIs on 1AR theory – 1AR being able to spend 20 seconds on a shell and still win forces the 2N to allocate at least 2:30 on the shell which means RVIs check back time skew No new 1ar theory paradigm issues- A New 1ar paradigms moot any 1NC theoretical offense B introducing them in the aff allows for them to be more rigorously tested
9/4/21
1 - Theory - Hedge v2
Tournament: Jack Howe | Round: 2 | Opponent: Brentwood BB | Judge: Vanessa Ngywen 7 NC theory first - 1 They started the chain of abuse and forced me down this strategy 2 We have more speeches to norm over it 3 It was introduced first so it comes lexically prior. Neg abuse outweighs Aff abuse – 1 Infinite prep time before round to frontline 2 2AR judge psychology 3 1st and last speech 4 Infinite perms and uplayering in the 1AR. Reasonability on 1AR shells – 1AR theory is very aff-biased because the 2AR gets to line-by-line every 2NR standard with new answers that never get responded to DTA on 1AR shells - They can blow up blippy 20 second shells in the 2AR but I have to split my time and can’t preempt 2AR spin which necessitates judge intervention RVIs on 1AR theory – 1AR being able to spend 20 seconds on a shell and still win forces the 2N to allocate at least 2:30 on the shell which means RVIs check back time skew No new 1ar theory paradigm issues- A New 1ar paradigms moot any 1NC theoretical offense B introducing them in the aff allows for them to be more rigorously tested Yes 2NR Theory – anything else allows them to sever with no check which is awful for negs
9/18/21
1 - Theory - Hedge v3
Tournament: Valley | Round: 6 | Opponent: Walt Whitman EY | Judge: Breigh Plat 4 NC theory first - 1 Abuse was self-inflicted - They started the chain of abuse and forced me down this strategy 2 Norming - We have more speeches to norm over whether it’s a good idea 3 It was introduced first so it comes lexically prior. Neg abuse outweighs Aff abuse – 1 Infinite prep time before round to frontline 2 2AR judge psychology and 1st and last speech 3 Infinite perms and uplayering in the 1AR. Reasonability on 1AR shells – 1AR theory is very aff-biased because the 2AR gets to line-by-line every 2NR standard with new answers that never get responded to – reasonability checks 2AR sandbagging by preventing really abusive 1NCs while still giving the 2N a chance. DTA on 1AR shells - They can blow up blippy 20-second shells in the 2AR while I have to split my time and can’t preempt 2AR spin, which necessitates judge intervention and means 1AR theory is irresolvable, so you shouldn’t stake the round on it. RVIs on 1AR theory – the 1AR being able to spend 20 seconds on a shell and still win forces the 2N to allocate at least 2:30 on the shell, which means RVIs check back time skew – o/ws on quantifiability No new 1AR theory paradigm issues - A the 1NC has already occurred with current paradigm issues in mind, so new 1AR paradigms moot any theoretical offense B introducing them in the aff allows for them to be more rigorously tested, which o/w’s on time frame since we can set higher quality norms.
Reject 1AR theory and voting issues A 7-6 time skew B No 3NR, so the 2AR gets to weigh however they want C Judges are more likely to buy 2AR arguments since they come in the last speech D Too many theory flows make it impossible to test the aff E You get a 2-1 speech advantage F We only get 2 speeches of new arguments to deliberate over your shell, which isn’t enough time G There’s no such thing as infinite abuse since the NC only has 7 minutes H 1AR theory is used as a strategic advantage Reject aff fairness concerns 1 The aff can speak last – a strategic 2AR can make 1 response to each 2NR argument to automatically get rid of them 2 The 2NR has to multi-point each argument, since if they only have 1 response, a new 2AR answer defeats it 3 Judge psychology A The aff gets the first and last speech, which the judge remembers more clearly, so the judge is more likely to vote for aff offense or forget to evaluate neg offense B Most people think the aff is harder, even if it’s not, so they already give leeway on aff arguments to make affirming easier – outweighs since it’s an implicit bias the neg can’t do anything to overcome 4 The aff has infinite prep time to frontline their aff and find the perfect strategy, whereas the neg has to assemble one in 4 minutes – means they can determine the most efficient strategy and write blocks with perfect efficiency, which solves the time skew and makes affirming easier
9/26/21
1 - Theory - Hedge v4
Tournament: Blue Key | Round: 4 | Opponent: Scarsdale KS | Judge: Samantha McLoughlin 5 Reject 1AR theory A No 3NR, so the 2AR gets to weigh however they want B Time skew C They can spam no-risk shells that make a 2N impossible since we have to adequately answer all of them and win substance. 1NC theory first - 1 Abuse was self-inflicted - they started the chain of abuse and forced me down this strategy 2 1NC t/theory is more common, so resolving it first sets better norms 3 Norming - we have more speeches to norm over whether it’s a good idea 4 It definitionally comes prior since it was introduced first. Negating is harder, so you grant me an RVI on all 1AR shells - they get a 2AR judge psychology advantage. 1AR theory is drop the arg. Neg abuse outweighs Aff abuse – 1 Infinite prep time before round to frontline 2 2AR judge psychology and 1st and last speech 3 Infinite perms and uplayering in the 1AR.
10/30/21
1 - Theory - Hidden A Prioris Bad
Tournament: Loyola | Round: 2 | Opponent: Dulles TY | Judge: Joseph Georges 1 Interpretation: Tommy must explicitly number all a prioris, label them as their own off, or have a line break between the arguments before and after it. To clarify, hidden a prioris bad. Violation: there are hidden a prioris in _ Inclusion - People who can’t flow as well, process fast blips, or have a hard time reading huge blocks of text due to disabilities get crowded out of the debate because they always lose to autowin arguments – that outweighs since inclusion is an impact filter Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate Neg theory is DTD - 1ARs control the direction of the debate because it determines what the 2NR has to go for – DTD allows us some leeway in the round by having some control in the direction Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm No RVIs – A – Going all in on theory kills substance education which outweighs on timeframe B - Discourages checking real abuse which outweighs on norm-setting C – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly D – it’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments E - Kills norm setting since debaters can never admit they’re wrong – outweighs since norm setting is the constitutive purpose of theory
9/4/21
1 - Theory - Multiple Framework Warrants Bad
Tournament: Apple Valley | Round: 6 | Opponent: Eagan AE | Judge: Rex Evans 2 Interpretation: Debaters may only read one independent framework justification. To clarify, your framework can contain at most a syllogism that leads to your framework’s conclusion and one independent reason to prefer that is not dependent on the syllogism. Violation: There are multiple independent reasons to prefer the util framework e.g. actor spec, TJFs, reductionism. 1 Strat Skew and Clash- Each of these arguments is functionally a NIB on the framework because if I drop even one of these, it’s game over. Reading one solves since it ensures aff framework flexibility while also ensuring legitimate clash. That outweighs on probability because util frameworks spam 15 blippy framework justifications and always go for one that doesn’t get responded to because of time.
11/6/21
1 - Theory - Must Open Source
Tournament: Harvard | Round: 5 | Opponent: Hunter AH | Judge: Henry Eberhart 1 Interpretation: Debaters must disclose all constructive positions on open source with highlighting on the 2021-2022 NDCA LD wiki after the round in which they read them. Violation – screenshots
1 Debate resource inequities—you’ll say people will steal cards, but that’s good—it’s the only way to truly level the playing field for students such as novices in under-privileged programs. Antonucci 05 Michael (Debate coach for Georgetown; former coach for Lexington High School); “eDebate open source? resp to Morris”; December 8; http://www.ndtceda.com/pipermail/edebate/2005-December/064806.html a. Open source systems are preferable to the various punishment proposals in circulation. It's better to share the wealth than limit production or participation. Various flavors of argument communism appeal to different people, but banning interesting or useful research(ers) seems like the most destructive solution possible. Indeed, open systems may be the only structural, rule-based answer to resource inequities. Every other proposal I've seen obviously fails at the level of enforcement. Revenue sharing (illegal), salary caps (unenforceable and possibly illegal) and personnel restrictions (circumvented faster than you can say 'information is fungible') don't work. This would - for better or worse. b. With the help of a middling competent archivist, an open source system would reduce entry barriers. This is especially true on the novice or JV level. Young teams could plausibly subsist entirely on a diet of scavenged arguments. A novice team might not wish to do so, but the option can't hurt. c. An open source system would fundamentally change the evidence economy without targeting anyone or putting anyone out of a job. It seems much smarter (and less bilious) to change the value of a professional card-cutter's work than send the KGB after specific counter-revolutionary teams. 2 Depth of clash – it allows debaters to have nuanced researched objections to their opponents evidence before the round at a much faster rate, which leads to higher quality ev comparison – outweighs cause thinking on your feet is NUQ but the best quality responses come from full access to a case. 
Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate DTD – it’s key to norm set and deter future abuse Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm No RVIs – A – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly B – it’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments
2/20/22
1 - Theory - Must Read ROB
Tournament: Valley | Round: 1 | Opponent: Harker PG | Judge: TJ Maher 2 Interpretation: The affirmative debater must articulate a distinct ROB in the form of a delineated text in the first affirmative speech. Violation: Prefer - 1 Strat Skew – They can read multiple pieces of offense under different ROBs and then read a new one in the 1AR so they never lose under the ROB. It just becomes a 2NR debate about whether the ROB is better than the 1NC’s, which moots engagement. That means infinite abuse – all you have to do is dump on the 1N ROB and marginally extend your warrants in the 2AR, and the neg can’t do anything about it since there is no 3NR to answer the 2AR weighing or extrapolations 2 Reciprocity – (a) restarting the ROB debate in the 1AR puts you at a 7-6 advantage – putting it in the aff makes it 13-13 (b) you have one more speech to contest my ROB and weigh (c) I can only read a ROB in the 1N so you should read it in your first speech – that’s definitionally an equal burden.
9/25/21
1 - Theory - Must Record Speeches
Tournament: Harvard | Round: 4 | Opponent: Scarsdale KS | Judge: David Herrera 3 Interp – Debaters must have recordings of their speeches and send them if requested Violation – They didn't Prefer 1 Cheating – debaters can fake internet drop-offs and then steal prep, which decks reciprocity. O/Ws since it destroys competitive incentives and educational value because they are structurally ahead 2 Accidents are possible – external conditions like the power going out, wifi dropping off, or excessive background noise make it impossible to hear in real time; recordings ensure that a speech isn’t given twice, which would let them remodify and change their strat or incite judge intervention, the worst violation of procedural fairness 3 Key to check clipping cards and make cheaters lose with literal proof
2/19/22
1 - Theory - New Affs Bad
Tournament: Jack Howe | Round: 3 | Opponent: Ayala AM | Judge: Srinidhi Yerraguntala 1 Interpretation—the aff must disclose the plan text, framework, and advantage area 30 minutes before the round. To clarify, disclosure can occur on the wiki or over message. Violation—they didn’t
Vote neg for prep and clash—two internal links—a) neg prep—4 minutes of prep is not enough to put together a coherent 1NC or update generics—30 minutes is necessary to learn a little about the affirmative, piece together which 1NC positions apply, and cut and research their applications to the affirmative b) aff quality—plan text disclosure discourages cheap shot affs. If the aff isn’t inherent or is easily defeated by 20 minutes of research, it should lose—this answers the 1AR’s claim about innovation—with 30 minutes of prep, there’s still an incentive to find a new strategic, well-justified aff, but no incentive to cut a horrible, incoherent aff that the neg can’t check against the broader literature. Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate Neg theory is DTD - 1ARs control the direction of the debate because it determines what the 2NR has to go for – DTD allows us some leeway in the round by having some control in the direction Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm No RVIs – A – Going all in on theory kills substance education which outweighs on timeframe B - Discourages checking real abuse which outweighs on norm-setting C – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly D – it’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments E - Kills norm setting since debaters can never admit they’re wrong – outweighs since norm setting is the constitutive purpose of theory F – They are the logic of criminalization that over-punishes people of color for trying to create productive discourse
9/19/21
1 - Theory - New Affs Bad v2
Tournament: Harvard | Round: 1 | Opponent: Lake Highland Prep YA | Judge: Eric He 1 Interpretation: Debaters must disclose affirmative frameworks, advocacy texts, and advantage areas thirty minutes before round if they haven’t read the affirmative before Violation: They didn’t
Standards: 1 Clash- Not disclosing incentivizes surprise tactics and poorly refined positions that rely on artificial and vague negative engagement to win debates. Their interpretation discourages third- and fourth-line testing by limiting the amount of time we have to prepare and forcing us to enter the debate with zero idea of what the affirmative is. Negatives are forced to rely on generics instead of smart contextual strategies, destroying nuanced argumentation. 2 Shiftiness- Not knowing enough about the affirmative coming into round incentivizes 1ar shiftiness about what the aff is and what their framework/advocacy entails. That means even if we could read generics or find prep, they’d just find ways to recontextualize their obscure advocacy in the 1ar. 3 Independently drop the debater for lying – this aff is not new insofar as it was read by another debater at the same school which means they straight up misdisclosed – outweighs for prep skew because I spent all my preround prep on a new aff even though the aff wasn’t truly new and independently sets terrible norms where people lie for a strategic advantage.
2/18/22
1 - Theory - No General Principle
Tournament: Valley | Round: 6 | Opponent: Walt Whitman EY | Judge: Breigh Plat 2 Interpretation - the affirmative can only garner offense from the hypothetical implementation of their plan text Resolved means a legislative policy Words and Phrases 64 Words and Phrases Permanent Edition. “Resolved”. 1964. Definition of the word “resolve,” given by Webster is “to express an opinion or determination by resolution or vote; as ‘it was resolved by the legislature;” It is of similar force to the word “enact,” which is defined by Bouvier as meaning “to establish by law”. "Resolved" requires a policy. Merriam Webster '18 (Merriam Webster; 2018 Edition; Online dictionary and legal resource; Merriam Webster, "resolve," https://www.merriam-webster.com/dictionary/resolve; RP)
a legal or official determination especially: a legislative declaration Violation- They defend the resolution as a general principle and refuse to defend impacts under implementation A Clash, the resolution serves as a predictable stasis point to enhance accessible research and equitable ground, but obfuscating that limit makes negative preparation impossible because any ground we receive is self-serving, concessionary, and from distorted literature bases---defining a role for negation is essential to sustaining competition and comes before any affirmative offense---the impact is debatability
B Limits—re-contextualizing the resolution lets them defend any method, exploding limits, which erases neg ground and renders research burdens untenable; points of difference like third- and fourth-line testing, DAs, PICs, and CPs, all intuitive avenues of research, are null and void. Our interp link-turns creativity by allowing both sides to predict arguments, research deficits, and clash---we access a stronger internal link because of equitable burdens
TVA – Read the affirmative’s advocacy and offense while defending implementation, which preserves the negative’s ability to read and weigh offense against it
9/26/21
1 - Theory - No General Principle v2
Tournament: Blue Key | Round: 4 | Opponent: Scarsdale KS | Judge: Samantha McLoughlin 2 Interpretation: The affirmative must not defend general principle. Violation: They do – that was on the contention. Standards: 1 – Topic Education – moots topic ed because it allows debaters to recycle generic arguments. 2 – Reciprocal burdens – proving a deductive argument is false only requires you win defense against one premise and proving an inductive argument is false is more difficult because of status quo bias. Our model solves because it eschews the idea that either side unilaterally carries the burden of proof, and requires both debaters to give an account of why their world is more desirable, not a principle. 3 – Ground: It gives them the ability to shift out of all CPs by saying they don’t disprove the general principle of the AFF which is bad – Good policymaking requires making comparisons between similar courses of action – saying that CPs are bad doesn’t answer this because we should have the opportunity to argue that in round. CPs teach us to find the best policy possible – debate should teach us to be better decisionmakers because it’s the only transferable skill to the rest of our lives, also controls the I/L to ground because they get infinite advocacies but I only get one.
10/30/21
1 - Theory - Oceans
Tournament: Valley | Round: 4 | Opponent: Mountain House ES | Judge: Rohit Lakshman 3 Interp: Debaters must talk about the oceans in their 1AC Violation: they don’t 1 Ocean education is key to real-world education because most of our life comes from the ocean, which means learning about the oceans provides us with the most relevant information we can use Dilevics 16 Andrew Dilevics is a writer for Planet Aid, https://www.planetaid.org/blog/how-ocean-pollution-affects-humans, wlhsDT The ocean plays an essential role for life on earth. It provides over 70 percent of the oxygen we breathe and over 97 percent of the world’s water supply. Everyday, the ocean is under attack from natural sources and manmade pollution. Pollution does not only affect marine life and their environment, it also affects mankind. 2 Exploring the ocean makes people more literate and better informed about the real world Ocean Research Advisory Panel (ORAP) 2002 ORAP was formed to advise the National Ocean Research Leadership Council (NORLC) and is called on to provide independent recommendations to the Federal Government. From 2006-2011, the ORAP operated as the Ocean Research and Resources Advisory Panel (ORRAP), https://www.nopp.org/about-nopp/committees/orap/, wlhsDT Quality of life, economic health, and security for people of our nation and the world are increasingly dependent upon the areas of science, technology, engineering, and mathematics. A well-informed, scientifically literate populace, capable of making judicious decisions, serves as the vanguard of our society. Yet many recent studies suggest that the general American public is not as knowledgeable about scientific and technical concepts as modern society requires, indicating the need for improved methods to address public education. 
Too few public education campaigns, severe shortages of well-trained teachers in scientific and technical subjects, and failure to substantially increase the numbers of underrepresented and underserved groups working in the fields of science, technology, engineering, and mathematics, pose significant obstacles to achieving broad science literacy. Coordinated national efforts are needed to address these obstacles and ensure the health of the education and research enterprises that fuel the prosperity of our nation. The oceans and coasts are naturally fascinating to humans and have a vast impact on their lives. Ocean related concepts and technologies offer captivating methods for educating the public about aspects of science, technology, engineering, and mathematics, and can serve as powerful tools for strengthening scientific literacy. There is, however, an equally important, intrinsic need for ocean literacy itself. Within the realm of the oceans and coastal environment, the interdependence among public need, policy decisions, and scientific and technical knowledge is particularly compelling. It is essential that the public be made aware of the many ways in which the systems of Earth, in particular, the oceans, affect everyday life, as well as the significant influence that people have on the health of the oceans and their coasts. The public must: • Understand the role of the coupled ocean-atmosphere-cryosphere system that drives our weather and climate; • Appreciate that environmental pressures introduced on land have consequences that extend through the coasts to the ocean; • Comprehend how oceanic conditions nurture the continued existence of marine ecosystems and maintenance of sustainable fish stocks; and, • Encourage exploration into promising new biotechnologies and other yet-to-be-discovered societal benefits uniquely existing within the oceans. 
Increasingly, scientific research in the oceans is focused on efforts to deploy observing systems that can monitor those processes of greatest impact on mankind. The use of such systems will require a better public understanding of ocean processes so that the public may use the information effectively, as well as ensure the availability of the technically trained workforce needed to operate these systems. Public education must be used to achieve the complementary goals of improving ocean literacy and strengthening scientific literacy across every facet of the socio-economic spectrum. Museums, aquariums, science centers, and public/cable television programming offer enriching opportunities for reaching large audiences; and, promoting lifelong learning about science and technology, and communicating the relevance of each to daily life. Existing initiatives promoting systemic reform and further implementation of the National Science Education Standards (NSES) offer promising opportunities for increased public knowledge of the oceans and coasts. The inherently multidisciplinary nature of coastal and ocean systems offers an exciting context in which to teach fundamental concepts of physics, biology, chemistry, geology, and mathematics. Ideally, increased exposure to the oceans and coasts using myriad approaches will both increase public support for ocean and coastal research programs and encourage more students from diverse educational and cultural backgrounds to consider pursuing careers in ocean-related professions. In the United States, many individuals and institutions employ ocean and coastal sciences in the broader context of improving public understanding of science; however, these efforts have not been well coordinated on a national scale. To address this need, several recent meetings have been convened to consider a nationally coordinated effort, or “National Agenda”, for improving education about our coasts and oceans. 
Important programs initiated within the NOPP agencies offer many of the essential building blocks for a successful national program. Further, the U.S. Commission on Ocean Policy and the Pew Oceans Commission are actively engaged in assessing the status of national research and education as they relate to the oceans and coasts. Through the efforts of these intra-agency programs and Commission initiatives, a consensus is rapidly emerging that is catalyzing coordination of efforts to reform public education in the ocean and coastal sciences. The NOPP is a Congressionally established umbrella organization linking the many agencies engaged in ocean sciences research and education. NOPP is thus ideally positioned to play a leadership role in the articulation and sustained implementation of this National Agenda for improving ocean literacy and strengthening scientific literacy through the use of ocean and coastal concepts.
9/26/21
1 - Theory - Say Please
Tournament: Valley | Round: 4 | Opponent: Mountain House ES | Judge: Rohit Lakshman 2 Interpretation – Debaters must say please at least one time in every speech and cross examination. They didn’t say please The net-benefit is respect, comfort and productivity. Warren 17 Marcus Warren (Business Development Manager) “THE POWER OF PLEASE” Why is it important to say, "please" and "thank you" Pg 1 June 8, 2017. Accessed 4/11/20. https://www.quora.com/Why-is-it-important-to-say-please-and-thank-you Houston Memorial SC Please: Makes a respectful request of the other individual and empowers that person to respond to your request or not. ... Clearly shows that you do not consider yourself superior to the other person, and makes that person feel comfortable. Makes it far more likely that your request will be granted. This outweighs A Inclusion – Debate is already a very exclusionary space so making it more inclusive is uniquely important – this outweighs because it controls the internal link to rounds B Mental Health – Saying please fosters a space that accounts for anxiety and stress. This outweighs and is a meta level constraint on arguments because it inhibits my ability to engage. Independently, mental health means you reject all aff arguments because they cause me to be stressed
9/26/21
1 - Theory - Spec Type of Util
Tournament: Apple Valley | Round: 1 | Opponent: Evergreen Valley Independent SE | Judge: Jeong-Wan Choi 2 A. Interpretation: If the affirmative defends a consequentialist framework, they must explicitly delineate which theory of the good they defend in the form of a text in the 1ac – prefer text of the interp. Each nuance of the ethic entails different obligations and would exclude different offense – there are 7 different versions. Mastin Luke Mastin, Consequentialism, The basics of philosophy http://www.philosophybasics.com/branch_consequentialism.htmlMassa Some consequentialist theories include: Utilitarianism, which holds that an action is right if it leads to the most happiness for the greatest number of people ("happiness" here is defined as the maximization of pleasure and the minimization of pain). Hedonism, which is the philosophy that pleasure is the most important pursuit of mankind, and that individuals should strive to maximise their own total pleasure (net of any pain or suffering). Epicureanism is a more moderate approach (which still seeks to maximize happiness, but which defines happiness more as a state of tranquillity than pleasure). Egoism, which holds that an action is right if it maximizes good for the self. Thus, Egoism may license actions which are good for an individual even if detrimental to the general welfare. Asceticism, in some ways, the opposite of Egoism in that it describes a life characterized by abstinence from egoistic pleasures especially to achieve a spiritual goal. Altruism, which prescribes that an individual take actions that have the best consequences for everyone except for himself, according to Auguste Comte's dictum, "Live for others". Thus, individuals have a moral obligation to help, serve or benefit others, if necessary at the sacrifice of self-interest. 
Rule Consequentialism, which is a theory (sometimes seen as an attempt to reconcile Consequentialism and Deontology), that moral behaviour involves following certain rules, but that those rules should be chosen based on the consequences that the selection of those rules have. Some theorists holds that a certain set of minimal rules are necessary to ensure appropriate actions, while some hold that the rules are not absolute and may be violated if strict adherence to the rule would lead to much more undesirable consequences. Negative Consequentialism, which focuses on minimizing bad consequences rather than promoting good consequences. This may actually require active intervention (to prevent harm from being done), or may only require passive avoidance of bad outcomes. B. Violation: They don’t and maximizing well-being doesn’t cut it. Crisp, Roger, "Well-Being", The Stanford Encyclopedia of Philosophy (Fall 2017 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/fall2017/entries/well-being/.Massa Well-being is most commonly used in philosophy to describe what is non-instrumentally or ultimately good for a person. The question of what well-being consists in is of independent interest, but it is of great importance in moral philosophy, especially in the case of utilitarianism, according to which the only moral requirement is that well-being be maximized. Significant challenges to the very notion have been mounted, in particular by G.E. Moore and T.M. Scanlon. It has become standard to distinguish theories of well-being as either hedonist theories, desire theories, or objective list theories. According to the view known as welfarism, well-being is the only value. Also important in ethics is the question of how a person’s moral character and actions relate to their well-being. C. Standards:
1. Shiftiness – They can shift out of my turns based on whatever theory of the good they operate under due to the nature of a vague standard. Especially true because the warrants for their standard could justify different versions of consequentialism as coming first and I wouldn’t know until the 1ar which gives them access to multiple contingent standards. 2. Strat – I lose 6 minutes of time during the AC to generate a strategy because I don’t know what turns or strategy I can go for during the 1N, which proves CX doesn’t check since it would occur after the skew. 3. Resolvability – Makes the round irresolvable since we can’t weigh different mechanisms for the good – Benatar would probably link harder under a hedonistic conception of util – weighing ground is key since it ensures we can compare arguments that clash to access the ballot.
11/6/21
1 - Theory - Standard Spec
Tournament: Lex | Round: 4 | Opponent: Ridge SN | Judge: Daniel Shahab Diaz 2 Interpretation: Affirmatives must specify and separately delineate a standard text in the 1AC. Violation: they didn’t Standards 1 Shiftiness- They can shift out of my turns based on whatever theory of the good they operate under due to the nature of a vague standard. Especially true because the warrants for their standard could justify different versions of Structural Violence coming first and I wouldn’t know until the 1AR which gives them access to multiple contingent standards. 2 Real World- Philosophers need to be as specific as possible when delineating their theory since there are so many nuances and contextual applications of philosophy that require us to understand the core differences within the philosophy. This spec shell isn’t regressive- it literally determines what framework the affirmative defends and how to link offense back to it
1/15/22
1 - Theory - USA
Tournament: Apple Valley | Round: 3 | Opponent: Los Altos BF | Judge: David Robinson 2 Interpretation: If the affirmative delineates specific functions of its advocacy as normal means, i.e. enforcement, actor, definitions of compulsory voting, exceptions, etc., then it must have a unified solvency advocate that agrees with all those specifications. Violation: They don’t; they use separate authors to justify their method and don’t have an author for the plan Negate- 1 Limits- Not having a unified solvency advocate that agrees with all your “normal means” specifications allows you to choose any permutation of specifications which explodes neg prep burden. Unified solvency advocates grant sufficient aff flexibility while still ensuring a reasonable case list since specification all comes from one source. 2 Ground- They can choose any permutation of the best definition of compulsory voting that suits them, the best enforcement mechanism, the best punishment contextualized to their country, the best targeted group of people, all with any exceptions they want in conjunction with each other which makes it really easy for them to delink core negative ground like incarceration DAs, specific voting DAs, first-time voter PICs, etc., which is supercharged by no normal means on the topic. NC theory first - 1 They started the chain of abuse and forced me down this strategy 2 We have more speeches to norm over it 3 It was introduced first so it comes lexically prior. Neg abuse outweighs Aff abuse – 1 Infinite prep time before round to frontline 2 2AR judge psychology 3 1st and last speech 4 Infinite perms and uplayering in the 1AR. 
Reasonability on 1AR shells – 1AR theory is very aff-biased because the 2AR gets to line-by-line every 2NR standard with new answers that never get responded to. DTA on 1AR shells - They can blow up blippy 20-second shells in the 2AR but I have to split my time and can’t preempt 2AR spin, which necessitates judge intervention. RVIs on 1AR theory – the 1AR being able to spend 20 seconds on a shell and still win forces the 2N to allocate at least 2:30 on the shell which means RVIs check back time skew. No new 1AR theory paradigm issues- A) New 1AR paradigms moot any 1NC theoretical offense B) introducing them in the aff allows them to be more rigorously tested
11/26/21
JF - CP - ADR
Tournament: Palm Classic | Round: 5 | Opponent: Harker NA | Judge: Lena Mizrahi 5 Text – The People’s Republic of China should implement cooperative active debris removal measures. That solves ESA 17 (April 14, 2017 “Active Debris Removal” https://www.esa.int/Our_Activities/Space_Safety/Space_Debris/Active_debris_removal) ESA, as a space technology and operations agency, has identified active removal technologies as a strategic goal. Active Debris Removal (ADR) is necessary to stabilise the growth of space debris, but even more important is that any newly launched objects comply with post-mission disposal guidelines – especially orbital decay in less than 25 years. If this were not the case, most of the required ADR effort would go to compensate for the non-compliance of new objects. Studies performed with long-term evolution models like DELTA have shown that a ‘business as usual’ scenario will lead to a progressive, uncontrolled increase of object numbers in LEO, with collisions becoming the primary debris source. The IADC mitigation measures will reduce the growth, but long-term proliferation is still expected, even with full mitigation compliance, and even if all launch activities are halted. This is an indication that the population of large and massive objects has reached a critical concentration in LEO. But even in a future scenario in which no further objects are added to the space environment (no launches, no debris release, no explosions), the results of simulations by ESA and NASA show that the number of debris objects would continue to grow even under these idealised conditions – under which a collision rate of once every 10 years can be assumed. Furthermore, an IADC study with six different models from 2013 show that in an almost perfect scenario with 90% compliance with the mitigation guidelines and with no explosions on orbit, the population suffers a steady increase, and a collision could be expected every 5–9 years. 
All these studies are a clear indicator that the population of large and massive objects has reached a critical density in LEO, and that mitigation alone is not sufficient. It is necessary to introduce a programme of remediation measures as well: active debris removal, in order to reduce the number of large and massive (mostly physically intact) objects. The current LEO environment contains about 3200 intact objects. An ESA analysis shows that the (lower) level of around 2500 intact objects (the status in the mid-1990s) would have a 50% probability of decreasing the overall debris population. If this is considered to be a desirable goal for remediation, the number of intact objects has to be reduced even while the world’s spaceflight activities continue. Averaged over the eight years 2004–12, about 72 objects were placed into LEO per year. However, since 2012, there has been a steep increase in the number of satellites placed in LEO, with the count now running at 125 objects per year (average over the four years 2013–16), mainly due to the increased use of small satellites. In addition, in 2015, several companies announced their intention to deploy large constellations of more than around 1000 satellites in LEO to provide fast Internet around the world. Limiting launch rates is neither feasible nor helpful. Therefore, limiting the launch rate or a further reduction of the allowed lifetime in orbit after the end of the mission (which would be two options to reduce the overall number of intact objects in space) do not seem feasible, because they cannot be mandated. For all new objects, strong compliance with post-mission mitigation measures would allow maintaining the number of intact objects at a level similar to the current one, and avoid having to deal with more objects in addition to those already in orbit. 
Therefore, in order to reduce the number of big objects in LEO, the only option is to actively remove large objects now in orbit and having a long remaining lifetime in space. This would provide several benefits: The most critical objects (those that would generate the most fragments in case of any collision, and that have a higher collision risk) could be removed from the environment first; Decommissioned objects could also be removed; A controlled deorbit could be performed (as large removal targets typically are also most critical in terms of on-ground risk). Studies at ESA and NASA show that with a removal sequence planned according to a target selection based on mass, area, or cumulative collision risk, the environment can be stabilised when on the order of 5–10 objects are removed from LEO per year (although the effectiveness of each removal decreases as more objects are removed). Active removal is efficient Active removal can be more efficient in terms of the number of collisions prevented versus objects removed when the following principles are applied for the selection of removal targets, which can be used to generate a criticality index and the according list: The selected objects should have a high mass (they have the largest environmental impact in case of collision); Should have high collision probabilities (e.g. they should be in densely populated regions and have a large cross-sectional area); Should be in high altitudes (where the orbital lifetime of the resulting fragments is long). Long-term environment simulations can be used to analyse orbital regions that are hotspots for collisions. The most densely populated region in LEO is around 800–1000 km altitude at high inclinations. The collision hotspots can be ranked by the number of collisions predicted to occur under a business as usual scenario. Polar Hotspots High-ranking hotspot regions are at around: 1000 km and 82º inclination; 800 km and 98º inclination; 850 km and 71º inclination. 
The concentration of critical-size objects in these narrow orbital bands could allow multi-target removal missions. Such missions could be specifically designed for one orbit type where a number of objects of the same type are contained. While removal targets should be selected from a global perspective, legal constraints dealing with the ownership of space debris objects, and the validation thereof, cannot be neglected. Also, it should be kept in mind that legal responsibility for a coupled remover/target stack (i.e. when a removal spacecraft attaches itself to an inoperative body for deorbiting) is shared. While removal technology should be generic, i.e. applicable to a wide range of removal targets, which may also include non-ESA objects, special emphasis on firm agreements with the owners of the object is required.
2/13/22
JF - CP - China Asteroid Mining
Tournament: Palm Classic | Round: 1 | Opponent: Harker SY | Judge: Felicity Park 3 CP Text: The People’s Republic of China should - end all private appropriation of outer space except for Asteroid Mining. - de-militarize its civilian, military, and commercial space industry. - dismantle and remove ASAT weapons. - dismantle the People’s Liberation Army. - end China-Russian cooperation in Outer Space. The Counterplan solves the Case since it’s about Space Militarization which the CP explicitly gets rid of. Concede Space Key to Heg – means the CP accesses all of the Spill-over Offense to American leadership. China’s Asteroid Mining efforts are light-years ahead of everyone else – now is key for Asteroid Mining. Successful Mining solves Warming through Green Transition. Cohen 21 Ariel Cohen 10-26-2021 "China’s Space Mining Industry Is Prepping For Launch – But What About The US?" https://www.forbes.com/sites/arielcohen/2021/10/26/chinas-space-mining-industry-is-prepping-for-launch~-~-but-what-about-the-us/?sh=6b8bea862ae0 (I am a Senior Fellow at the Atlantic Council and the Founding Principal of International Market Analysis, a Washington, D.C.-based global risk advisory boutique.)Elmer Exploration of space-based natural resources are on the Chinese policy makers’ mind. The question is, what Joe Biden thinks? In April of this year, China’s Shenzen Origin Space Technology Co. Ltd. launched the NEO-1, the first commercial spacecraft dedicated to the mining of space resources – from asteroids to the lunar surface. Falling costs of space launches and spacecraft technology alongside existing infrastructure provides a unique opportunity to explore extraterrestrial resource extraction. Current technologies are equipped to analyze and categorize asteroids within our solar system with a limited degree of certainty. One of the accompanying payloads to the NEO-1 was the Yuanwang-1, or “little hubble” satellite, which searches the stars for possible asteroid mining targets. 
The NEO-1 launch marks another milestone in private satellite development, adding a new player to space based companies which include Japan’s Astroscale. Private asteroid identification via the Sentinel Space Telescope was supported by NASA until 2015. As private investment in space grows, the end goal is to be capable of harvesting resources to bring to Earth. “Through the development and launch of the spacecraft, Origin Space is able to carry out low-Earth orbit space junk cleanup and prototype technology verification for space resource acquisition, and at the same time demonstrate future asteroid defense related technologies.” In the end, it will come down to progressively lowering the cost of launched unit of weight and booster rocket reliability – before fundamentally new engines may drive the launch costs even further down. The April launch demonstrates that China is already succeeding while the West is spinning its wheels. The much touted Planetary Resources and Deep Space Industries (DSI) were supposed to be the vanguard of extra-terrestrial resource acquisition with major backers including Google’s Larry Page. But both have since been acquired, the former by block chain company ConsenSys and the latter by Bradford Space, neither of which are prioritizing asteroid mining. This is too bad, given that supply chain crunches here on Earth – coupled with the global green energy transition – are spiking demand for strategic minerals that are increasingly hard to come by on our environmentally stressed planet. And here China currently holds a monopoly on rare earth element (REE) extraction and processing to the tune of 90%. REEs are 17 minerals essential for modern computing and manufacturing technologies for everything from solar panels to semi-conductors. Resource-hungry China also has major involvement in global critical mineral supply chains, which include cobalt, tungsten, and lithium. 
As I’ve written before, the Chinese hold of upstream and downstream markets is staggering. Possessing 30% of the global mined ore, 80% of the global processing facilities, and an ever increasing list of high dollar investments around the world, China boasts over $36 billion invested in mining projects in Africa alone. Beijing’s space program clearly indicates that the Chinese would also like to tighten their grip on space-based resources as well. According to research, it is estimated that a small asteroid roughly 200 meters in length that is rich in platinum could be worth up to $300 million. Merrill Lynch predicts the space industry — including extraterrestrial mining industry – to value $2.7 trillion in the next three decades. REEs are fairly common in the solar system, but to what degree remains unknown. The most sought after are M-type asteroids which are mostly metal and hundreds of cubic meters. While these are not the most common, the 27,115 Near Earth asteroids are bound to contain a few. This – and military applications – are no doubt a driving factor of China’s ever increasing space ambitions. Warming causes Extinction Kareiva 18, Peter, and Valerie Carranza. "Existential risk due to ecosystem collapse: Nature strikes back." Futures 102 (2018): 39-50. (Ph.D. in ecology and applied mathematics from Cornell University, director of the Institute of the Environment and Sustainability at UCLA, Pritzker Distinguished Professor in Environment and Sustainability at UCLA)Re-cut by Elmer In summary, six of the nine proposed planetary boundaries (phosphorous, nitrogen, biodiversity, land use, atmospheric aerosol loading, and chemical pollution) are unlikely to be associated with existential risks. They all correspond to a degraded environment, but in our assessment do not represent existential risks. However, the three remaining boundaries (climate change, global freshwater cycle, and ocean acidification) do pose existential risks. 
This is because of intrinsic positive feedback loops, substantial lag times between system change and experiencing the consequences of that change, and the fact these different boundaries interact with one another in ways that yield surprises. In addition, climate, freshwater, and ocean acidification are all directly connected to the provision of food and water, and shortages of food and water can create conflict and social unrest. Climate change has a long history of disrupting civilizations and sometimes precipitating the collapse of cultures or mass emigrations (McMichael, 2017). For example, the 12th century drought in the North American Southwest is held responsible for the collapse of the Anasazi pueblo culture. More recently, the infamous potato famine of 1846–1849 and the large migration of Irish to the U.S. can be traced to a combination of factors, one of which was climate. Specifically, 1846 was an unusually warm and moist year in Ireland, providing the climatic conditions favorable to the fungus that caused the potato blight. As is so often the case, poor government had a role as well—as the British government forbade the import of grains from outside Britain (imports that could have helped to redress the ravaged potato yields). Climate change intersects with freshwater resources because it is expected to exacerbate drought and water scarcity, as well as flooding. Climate change can even impair water quality because it is associated with heavy rains that overwhelm sewage treatment facilities, or because it results in higher concentrations of pollutants in groundwater as a result of enhanced evaporation and reduced groundwater recharge. Ample clean water is not a luxury—it is essential for human survival. Consequently, cities, regions and nations that lack clean freshwater are vulnerable to social disruption and disease. Finally, ocean acidification is linked to climate change because it is driven by CO2 emissions just as global warming is. 
With close to 20% of the world’s protein coming from oceans (FAO, 2016), the potential for severe impacts due to acidification is obvious. Less obvious, but perhaps more insidious, is the interaction between climate change and the loss of oyster and coral reefs due to acidification. Acidification is known to interfere with oyster reef building and coral reefs. Climate change also increases storm frequency and severity. Coral reefs and oyster reefs provide protection from storm surge because they reduce wave energy (Spalding et al., 2014). If these reefs are lost due to acidification at the same time as storms become more severe and sea level rises, coastal communities will be exposed to unprecedented storm surge—and may be ravaged by recurrent storms. A key feature of the risk associated with climate change is that mean annual temperature and mean annual rainfall are not the variables of interest. Rather it is extreme episodic events that place nations and entire regions of the world at risk. These extreme events are by definition “rare” (once every hundred years), and changes in their likelihood are challenging to detect because of their rarity, but are exactly the manifestations of climate change that we must get better at anticipating (Diffenbaugh et al., 2017). Society will have a hard time responding to shorter intervals between rare extreme events because in the lifespan of an individual human, a person might experience as few as two or three extreme events. How likely is it that you would notice a change in the interval between events that are separated by decades, especially given that the interval is not regular but varies stochastically? A concrete example of this dilemma can be found in the past and expected future changes in storm-related flooding of New York City. 
The highly disruptive flooding of New York City associated with Hurricane Sandy represented a flood height that occurred once every 500 years in the 18th century, and that occurs now once every 25 years, but is expected to occur once every 5 years by 2050 (Garner et al., 2017). This change in frequency of extreme floods has profound implications for the measures New York City should take to protect its infrastructure and its population, yet because of the stochastic nature of such events, this shift in flood frequency is an elevated risk that will go unnoticed by most people. 4. The combination of positive feedback loops and societal inertia is fertile ground for global environmental catastrophes Humans are remarkably ingenious, and have adapted to crises throughout their history. Our doom has been repeatedly predicted, only to be averted by innovation (Ridley, 2011). However, the many stories of human ingenuity successfully addressing existential risks such as global famine or extreme air pollution represent environmental challenges that are largely linear, have immediate consequences, and operate without positive feedbacks. For example, the fact that food is in short supply does not increase the rate at which humans consume food—thereby increasing the shortage. Similarly, massive air pollution episodes such as the London fog of 1952 that killed 12,000 people did not make future air pollution events more likely. In fact it was just the opposite—the London fog sent such a clear message that Britain quickly enacted pollution control measures (Stradling, 2016). Food shortages, air pollution, water pollution, etc. send immediate signals to society of harm, which then trigger a negative feedback of society seeking to reduce the harm. In contrast, today’s great environmental crisis of climate change may cause some harm but there are generally long time delays between rising CO2 concentrations and damage to humans. 
The consequence of these delays is an absence of urgency; thus although 70% of Americans believe global warming is happening, only 40% think it will harm them (http://climatecommunication.yale.edu/visualizations-data/ycom-us-2016/). Secondly, unlike past environmental challenges, the Earth’s climate system is rife with positive feedback loops. In particular, as CO2 increases and the climate warms, that very warming can cause more CO2 release which further increases global warming, and then more CO2, and so on. Table 2 summarizes the best documented positive feedback loops for the Earth’s climate system. These feedbacks can be neatly categorized into carbon cycle, biogeochemical, biogeophysical, cloud, ice-albedo, and water vapor feedbacks. As important as it is to understand these feedbacks individually, it is even more essential to study the interactive nature of these feedbacks. Modeling studies show that when interactions among feedback loops are included, uncertainty increases dramatically and there is a heightened potential for perturbations to be magnified (e.g., Cox, Betts, Jones, Spall, and Totterdell, 2000; Hajima, Tachiiri, Ito, and Kawamiya, 2014; Knutti and Rugenstein, 2015; Rosenfeld, Sherwood, Wood, and Donner, 2014). This produces a wide range of future scenarios. Positive feedbacks in the carbon cycle involve the enhancement of future carbon contributions to the atmosphere due to some initial increase in atmospheric CO2. This happens because as CO2 accumulates, it reduces the efficiency with which oceans and terrestrial ecosystems sequester carbon, which in turn feeds back to exacerbate climate change (Friedlingstein et al., 2001). Warming can also increase the rate at which organic matter decays and carbon is released into the atmosphere, thereby causing more warming (Melillo et al., 2017). Increases in food shortages and lack of water are also of major concern when biogeophysical feedback mechanisms perpetuate drought conditions. 
The underlying mechanism here is that losses in vegetation increase the surface albedo, which suppresses rainfall, and thus enhances future vegetation loss and more suppression of rainfall—thereby initiating or prolonging a drought (Charney, Stone, and Quirk, 1975). To top it off, overgrazing depletes the soil, leading to augmented vegetation loss (Anderies, Janssen, and Walker, 2002). Climate change often also increases the risk of forest fires, as a result of higher temperatures and persistent drought conditions. The expectation is that forest fires will become more frequent and severe with climate warming and drought (Scholze, Knorr, Arnell, and Prentice, 2006), a trend for which we have already seen evidence (Allen et al., 2010). Tragically, the increased severity and risk of Southern California wildfires recently predicted by climate scientists (Jin et al., 2015), was realized in December 2017, with the largest fire in the history of California (the “Thomas fire” that burned 282,000 acres, https://www.vox.com/2017/12/27/16822180/thomas-fire-california-largest-wildfire). This catastrophic fire embodies the sorts of positive feedbacks and interacting factors that could catch humanity off-guard and produce a true apocalyptic event. Record-breaking rains produced an extraordinary flush of new vegetation, that then dried out as record heat waves and dry conditions took hold, coupled with stronger than normal winds, and ignition. Of course the record fire released CO2 into the atmosphere, thereby contributing to future warming. Out of all types of feedbacks, water vapor and the ice-albedo feedbacks are the most clearly understood mechanisms. Losses in reflective snow and ice cover drive up surface temperatures, leading to even more melting of snow and ice cover—this is known as the ice-albedo feedback (Curry, Schramm, and Ebert, 1995). 
As snow and ice continue to melt at a more rapid pace, millions of people may be displaced by flooding risks as a consequence of sea level rise near coastal communities (Biermann and Boas, 2010; Myers, 2002; Nicholls et al., 2011). The water vapor feedback operates when warmer atmospheric conditions strengthen the saturation vapor pressure, which creates a warming effect given water vapor’s strong greenhouse gas properties (Manabe and Wetherald, 1967). Global warming tends to increase cloud formation because warmer temperatures lead to more evaporation of water into the atmosphere, and warmer temperature also allows the atmosphere to hold more water. The key question is whether this increase in clouds associated with global warming will result in a positive feedback loop (more warming) or a negative feedback loop (less warming). For decades, scientists have sought to answer this question and understand the net role clouds play in future climate projections (Schneider et al., 2017). Clouds are complex because they both have a cooling (reflecting incoming solar radiation) and warming (absorbing incoming solar radiation) effect (Lashof, DeAngelo, Saleska, and Harte, 1997). The type of cloud, altitude, and optical properties combine to determine how these countervailing effects balance out. Although still under debate, it appears that in most circumstances the cloud feedback is likely positive (Boucher et al., 2013). For example, models and observations show that increasing greenhouse gas concentrations reduces the low-level cloud fraction in the Northeast Pacific at decadal time scales. This then has a positive feedback effect and enhances climate warming since less solar radiation is reflected by the atmosphere (Clement, Burgman, and Norris, 2009). The key lesson from the long list of potentially positive feedbacks and their interactions is that runaway climate change, and runaway perturbations have to be taken as a serious possibility. 
Table 2 is just a snapshot of the type of feedbacks that have been identified (see Supplementary material for a more thorough explanation of positive feedback loops). However, this list is not exhaustive and the possibility of undiscovered positive feedbacks portends even greater existential risks. The many environmental crises humankind has previously averted (famine, ozone depletion, London fog, water pollution, etc.) were averted because of political will based on solid scientific understanding. We cannot count on complete scientific understanding when it comes to positive feedback loops and climate change.
2/13/22
JF - CP - Global ConCon
Tournament: Harvard | Round: 1 | Opponent: Lake Highland Prep YA | Judge: Eric He 3 Counterplan Text -- States ought to call a global constitutional convention and establish a constitution reflecting intergenerational concern with exclusive authority to ban appropriation of outer space by private entities and bind participating bodies to its result via the Madrid Protocol The CP applies intergenerational equity to future generations – that’s better than trying to decide now whether the plan is beneficial across deep time – every country would say yes. Tan 2k David Tan, LL.M., Harvard Law School; LL.B. (Hons), B.Com., University of Melbourne. Former Tutor in Law, Trinity College, University of Melbourne, “Towards a New Regime for the Protection of Outer Space as the "Province of All Mankind",” 2000, The Yale Journal of International Law, Vol. 25, https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1114&context=yjil Edith Brown Weiss has advanced the theory of “intergenerational equity,” which provides for generational rights and obligations.158 Her thesis consists of a normative framework of intersecting theories of intergenerational and intragenerational equity that are derived from an underlying planetary trust, embodying the notion that generations act as stewards to sustain the welfare and well-being of all generations. This planetary trust obliges “each generation to preserve the diversity of the resource base and to pass the planet to future generations in no worse condition than it receives it.”159 The principle of the conservation of options requires each generation “to conserve the diversity of the natural and cultural resource base, so that it does not unduly restrict the options available to future generations in solving their problems and satisfying their own values, and should be entitled to diversity comparable to that enjoyed by previous generations.”160 The theory of intergenerational equity is an appealing one. 
Unfortunately, Weiss’s model generally rests upon an intertemporal human rights model for preserving the global environment. This presents many problems, ranging from the questionable existence of the right to a decent environment to the issue of remedies in respect of claims made by future generations against present generations.161 Whether the global awareness of the harm to our sense of intergenerational identity, as evidenced by the various U.N. General Assembly resolutions and numerous international conventions, will be sufficient to mobilize the implementation and enforcement of effective legal measures on behalf of future generations is doubtful. But more importantly, the notions of intergenerational identity and sustainable development will prove to be invaluable concepts in framing the discussion in Part VI. Current literature has concentrated on the notion of sustainable development as involving the integration of economic and environmental considerations at all levels of decision-making.162 But the outer-space environment has been largely ignored, as if it were simply economic development on Earth that must be environmentally sound. There is no reason, however, why the precautionary principles that emerge from the concept of sustainable development in the Stockholm Declaration, the Rio Declaration, and the World Charter for Nature should not apply equally to the outer-space environment. Few states, if any, will take issue with the proposition that the exploration and use of outer space should be sustainable. It is in the common interest of all states, whether spacefaring or otherwise, to subscribe to a regime that allows for the development of space activities in a manner that leaves the space environment in a substantially unimpaired condition for future generations. 
One might even ultimately find that the uniqueness and vulnerability of the outer-space environment demand that the international community as a whole recognize sustainable development as a “global ethic”163 that transcends terrestrial boundaries, as a peremptory norm that prohibits “policies and practices that support current living standards by depleting the productive base, including natural resources, and that leaves future generations with poorer prospects and greater risks than our own.”164 We should not confine our actions to those we are now able to determine as directly or indirectly benefiting ourselves or our descendants. On the contrary, we should “cultivate our natural sense of obligation not to act wastefully or wantonly even when we cannot calculate how such acts would make any present or future persons worse off.”165 It seems impossible to find universally agreed-upon limits on the freedom of exploration and use of outer space. Rather than focus on indeterminate rules of custom-formation, we should concentrate on establishing fair and workable arrangements and institutions that can successfully accommodate the competing interests of all nations. With these guidelines in mind, we will now examine new methods of treaty-making that will enhance the willingness of states to participate in an environmental program that seeks to achieve an acceptable balance between pollution control and freedom of space exploration. That solves the aff – it addresses shared anxieties while building political consensus. Gardiner 14 1 Stephen M. Gardiner, Professor of Philosophy and Ben Rabinowitz Endowed Professor of Human Dimensions of the Environment at the University of Washington, Seattle, “A Call for a Global Constitutional Convention Focused on Future Generations,” 2014, Ethics and International Affairs, Vol. 28, Issue 3, pp. 
299-315, https://doi.org/10.1017/S0892679414000379, EA A Constitutional Convention In my view, the above line of reasoning leads naturally to a more specific proposal: that we—concerned individuals, interested community groups, national governments, and transnational organizations—should initiate a call for a global constitutional convention focused on future generations. This proposal has two components. The first component is procedural. The proposal takes the form of a “call to action.” It is explicitly an attempt to engage a range of actors, based on a claim that they have or should take on a set of responsibilities, and a view about how to go about discharging those responsibilities. The second component is substantive. The main focus for action is a push for the creation of a constitutional convention at the global level, whose role is to pave the way for an overall constitutional system that appropriately embodies intergenerational concern. The substantive idea rests on several key ideas. Still, for the purposes of a basic proposal, I suggest that these be understood in a relatively open way that, as far as is practicable, does not prejudge the outcome of the convention, and especially its main recommendations. First, the convention itself should be understood as “a representative body called together for some occasional or temporary purpose” and “constituted by statute to represent the people in their primary relations.”14 Second, a constitutional system should be thought of in a minimalist sense as “a set of norms (rules, principles or values) creating, structuring, and possibly defining the limits of government power or authority.”15 Third, the “instigating” role of the convention should be to discuss, develop, make recommendations toward, and set in motion a process for the establishment of a constitution. 
Fourth, its primary subject matter should be the need to adequately reflect and embody intergenerational concern, where this would include at least the protection of future generations, the promotion of their interests (where “interests” is to be broadly conceived so as to include rights, claims, welfare, and so on), and the discharging of duties with respect to them. It may also (and in my view should) include some way of reflecting concern for past generations, including responsiveness to at least certain of their interests and views. However, I will leave that issue aside in what follows. The proposal to initiate a call for a global constitutional convention has at least two attractive features. First, it is based in a deep political reality, and does not underplay the challenge. It acknowledges the problem as it is, both specific and general, and calls attention to the heart of that problem, including to the failures of the current system, the need for an alternative, and the background issue of responsibility. Moreover, though the proposal is dramatic and rhetorically eye-catching, it is so in a way that is appropriately responsive to the seriousness of the issue at hand, the persistent political inertia surrounding more modest initiatives, and the fact that (grave though concerns about it are) climate change is only one instance of the tyranny of the contemporary (and the wider perfect moral storm), and we should expect others to arise over the coming decades and centuries. The second attractive feature of the proposal is that, though ambitious, it is not alienating. While it does not succumb to despair in the face of the challenge, neither does it needlessly polarize and divide from the outset (for example, by leaping to specific recommendations about how to fill the institutional gap). Instead, it acknowledges that there are fundamental difficulties and anxieties, but uses them to start the right kind of debate, rather than to foreclose it. 
As a result, the proposal is a promising candidate to serve as the subject of a wide and overlapping political consensus, at least among those who share intergenerational concern. Selective Mirroring To quell some initial anxieties, it is perhaps worth clarifying the open-ended and non-alienating character of the proposal. One temptation would be to view the call for a global constitutional convention as a fairly naked plea for world government, a prospect that would be deeply alienating—indeed anathema—to many. However, that is not my intention. Though it is possible that a global constitutional convention would lead in this direction, it is by no means certain. At a minimum, no such body could plausibly recommend any form of “world government” without simultaneously advancing detailed suggestions about how to avoid the standard threats such an institution might pose. Moreover, it seems perfectly conceivable, even likely under current ways of thinking, that a global constitutional convention would pursue what we might call a selective mirroring strategy. Specifically, a convention would seek to develop a broader system of institutions and practices that reflected the desirable features of a powerful and highly centralized global authority but neutralized the standing threats posed by it (for example, it might employ familiar strategies such as the separation of powers). In all likelihood, one feature of a selective mirroring approach would be the significant preservation of existing institutions to serve as a bulwark against the excesses of any newly created ones. Whether and how such a strategy might be made effective against the perfect moral storm, and whether something closer to a “world government” would do better, would be a central issue for discussion by the convention. It spills over to foster broader intergenerational representation, but independence is key Gardiner 14 2 Stephen M. 
Gardiner, Professor of Philosophy and Ben Rabinowitz Endowed Professor of Human Dimensions of the Environment at the University of Washington, Seattle, “A Call for a Global Constitutional Convention Focused on Future Generations,” 2014, Ethics and International Affairs, Vol. 28, Issue 3, pp. 299-315, https://doi.org/10.1017/S0892679414000379, EA One set of guidelines concerns how the global constitutional convention relates to other institutions. The first guideline concerns relative independence: (1) Autonomy: Any global constitutional convention should have considerable autonomy from other institutions, and especially from those dominated by factors that generate or facilitate the tyranny of the contemporary (and the perfect moral storm, more generally). Thus, for example, attempts should be made to insulate the global constitutional convention from too much influence from short-term and narrowly economic forces. The second guideline concerns limits to that independence: (2) Mutual Accountability: Any global constitutional convention should be to some extent accountable to other major institutions, and they should be accountable to it. Thus, for example, though the global constitutional convention should not be able to decide unilaterally that national institutions should be radically supplanted, nevertheless such institutions should not have a simple veto on the recommendations of the convention, including those that would result in sharp limits to their powers. A third guideline concerns adequacy: (3) Functional Adequacy: The global constitutional convention should be constructed in such a way that it is highly likely to produce recommendations that are functionally adequate to the task. Thus, for example, the tasks of the global constitutional convention should not be assigned to any currently existing body whose design and authority is clearly unsuitable. 
In my view, this guideline rules out proposals such as the Royal Society’s suggestion that governance of geoengineering should be taken up by the United Nations’ Commission on Sustainable Development,20 or the Secretary-General’s recommendation of a new United Nations’ High Commissioner for Future Generations.21 Though such proposals may have merit for some purposes (for example, as pragmatic, incremental suggestions to highlight the importance of intergenerational issues), they are too modest, in my opinion, to reflect the gravity of the threats posed by climate change in particular, and the perfect moral storm more generally. Aims A second set of guidelines concerns the aims of the global constitutional convention. Here, the perfect moral storm analysis would suggest: (4) Comprehensiveness: The convention should be under a mandate to consider a very broad range of global, intergenerational issues, to focus on such issues at a foundational level, and to recommend institutional reform accordingly. (5) Standing Authority: Though the convention may recommend the establishment of some temporary and issue-specific bodies, its focus should be on the establishment of institutions with standing authority over the long term. These guidelines are significant in that they stand against existing issue-specific approaches to global and intergenerational problems, and encourage not only a less ad hoc but also a more proactive approach. In particular, the global constitutional convention might be expected to recommend institutions that would be charged with identifying, monitoring, and taking charge of intergenerational issues as such. For example, such institutions should address not only specific policy issues (such as climate change, large asteroid detection, and long-term nuclear waste) but also the need to identify similar threats before they arise. Proactive measures mitigate a laundry list of emerging catastrophic risks – extinction. 
Beckstead 14 Nick Beckstead, Nick Bostrom, Niel Bowerman, Owen Cotton-Barratt, William MacAskill, Seán Ó hÉigeartaigh, Toby Ord, * Future of Humanity Institute, University of Oxford, Director, Future of Humanity Institute, University of Oxford, * Global Priorities Project, Centre for Effective Altruism; Department of Physics, University of Oxford, Global Priorities Project, Centre for Effective Altruism; Future of Humanity Institute, University of Oxford, * Uehiro Centre for Practical Ethics, University of Oxford, Cambridge Centre for the Study of Existential Risk; Future of Humanity Institute, University of Oxford, * Programme on the Impacts of Future Technology, Oxford Martin School, University of Oxford, “Policy Brief: Unprecedented Technological Risks,” 2014, The Global Priorities Project, The Future of Humanity Institute, The Oxford Martin Programme on the Impacts of Future Technology, and The Centre for the Study of Existential Risk, https://www.fhi.ox.ac.uk/wp-content/uploads/Unprecedented-Technological-Risks.pdf, Accessed: 03/13/21, EA In the near future, major technological developments will give rise to new unprecedented risks. In particular, like nuclear technology, developments in synthetic biology, geoengineering, distributed manufacturing and artificial intelligence create risks of catastrophe on a global scale. These new technologies will have very large benefits to humankind. But, without proper regulation, they risk the creation of new weapons of mass destruction, the start of a new arms race, or catastrophe through accidental misuse. Some experts have suggested that these technologies are even more worrying than nuclear weapons, because they are more difficult to control. Whereas nuclear weapons require the rare and controllable resources of uranium-235 or plutonium-239, once these new technologies are developed, they will be very difficult to regulate and easily accessible to small countries or even terrorist groups. 
Moreover, these risks are currently underregulated, for a number of reasons. Protection against such risks is a global public good and thus undersupplied by the market. Implementation often requires cooperation among many governments, which adds political complexity. Due to the unprecedented nature of the risks, there is little or no previous experience from which to draw lessons and form policy. And the beneficiaries of preventative policy include people who have no sway over current political processes — our children and grandchildren. Given the unpredictable nature of technological progress, development of these technologies may be unexpectedly rapid. A political reaction to these technologies only when they are already on the brink of development may therefore be too late. We need to implement prudent and proactive policy measures in the near future, even if no such breakthroughs currently appear imminent.
2/18/22
JF - CP - International Law Regulation
Tournament: CPS | Round: 2 | Opponent: Ayala AM | Judge: Ruchir Rastogi Cites are broken - check open source
12/18/21
JF - CP - International Law Regulation v2
Tournament: CPS | Round: 4 | Opponent: Harker SY | Judge: Parth Shah 2 Counterplan text: The Committee on the Peaceful Uses of Outer Space ought to establish an application system for property rights on celestial bodies for every country. Applications and approval of property rights should be granted upon the following conditions:
- Open disclosure of data gathered in the exploration of a celestial body
- Applications must be publicly announced
- Property rights will be made tradeable between private entities
- Property rights will be set to expire on the conclusion of a successful extraction mission
- Private entities will only be allowed one property right grant per celestial body and cannot have more than one grant at a time
The counterplan establishes international norms for safe extraction of resources on celestial bodies while increasing R&D in outer space. Steffen 21 Olaf Steffen, Olaf is a scientist at the Institute of Composite Structures and Adaptive Systems at the German Aerospace Center. 12-2-2021, "Explore to Exploit: A Data-Centred Approach to Space Mining Regulation," Institute of Composite Structures and Adaptive Systems, German Aerospace Center, https://www.sciencedirect.com/science/article/pii/S0265964621000515 accessed 12/12/21 Adam 4. The data-centred approach to space mining regulation 4.1. Core description of the regulatory regime and mining rights acquisition process The data gathered in the exploration of a celestial body is not only of value for space mining companies for informing them whether, where and how to exploit resources from the body in question, but also for science. The irretrievability of information relating to the solar system contained in the body that will be lost during resource exploitation carries a value for humanity and future generations and can thus be assigned the characteristic of a common heritage for all mankind as invoked in the Moon Agreement. 
This characteristic makes exploration data an exceptional and unique candidate for use in a mechanism for acquiring mining rights because its preservation is of public interest and its disclosure in exchange for exclusive mining rights does not place any additional burden on the mining company. The following principles would form the cornerstones of the proposed regulatory regime and rights acquisition mechanism based on exploration data: Without preconditions, no entity has a right to mine the resources of a celestial body. An international regulatory body administers the existing rights of companies for mining a specific celestial body. Mining rights to such bodies can be applied for from this international regulatory body, with applications made public. The application expires after a pre-set period. Mining rights are granted on the provision and disclosure of exploration data on the celestial body within the pre-set period, proposedly gathered in situ, characterising this body and its resources in a pre-defined manner. The explorer's mining right to the resources of the celestial body is published by the regulatory body in a mining rights grant. The data concerning the celestial body are made public as part of the rights grant within the domain of all participating members of the regulatory regime. The exclusive mining rights to any specific body are tradeable. The scope of the regulatory body with respect to the granting of mining rights is not revenue-oriented. The international regulatory body would thus act as a curator of a rights register and an attached database of exploration data. The concept is superficially comparable to patent law, where exclusive rights are granted following the disclosure of an invention to incentivise the efforts made in the development process. 
In the following section, the characteristics of such a regulatory regime are further discussed with respect to the formation of monopolies, market dynamics, conflict avoidance, inclusivity towards less developed countries and the viability of implementation.
4.2. Discussion and means of implementation
The proposed regulatory mechanism has advantages both from a business/investor and society perspective. First, it prevents already highly capitalised companies from acquiring exploitation rights in bulk to deny competitors those objects that are easiest to exploit or most valuable, which would otherwise be possible in any kind of pay-for-right mechanism and could result in preventing market access to smaller, emerging companies. Thus, early monopoly formation can be avoided. The use of data disclosure for the granting of mining rights ensures the scientific community has access to this invaluable source of information. In this way, space mining prospecting missions can lead to a boost in research on small celestial bodies at a speed unmatchable by pure government/agency funded science probes. This usefulness to the scientific community could lead to sustained partnerships between prospecting companies and scientific institutions and could even provide a source of funding for the companies through R&D grants and public-private partnerships. The results of the exploration efforts contribute to research on the formation of planets and the history of the solar system and provide valuable insight for space defence against asteroids. The transition of exploration from a tailored mission profile with a purpose-built spacecraft to a standard task in space flight would also lead to a cost reduction of the respective exploration spacecraft through economies of scale. This describes the very benefits Elvis 24 and Crawford 25 imagined as possible effects of a space economy. Thus, there is an immediate return for society from the exploitation rights grant. 
It also reconciles the adverse interests of space development and space science as laid out by Schwartz 26. It ensures that, by exploitation, information contained in celestial bodies is not lost for future generations. The application period should not be set in a manner that creates a situation that can be abused through the potential for stockpiling inventory rights. Rather, it is intended to prevent conflict in the phase before exploration data gathered by a mission, as a prerequisite to the mining rights grant, is available. In other words, only one exploration effort at a time can be permitted for a specific body. The time frame between the application and the granting of mining rights (meaning: availability of the required exploration data set) should be tight and should only consider necessary exploration time on site, transit time and possibly a reasonable launch preparation and data processing markup. These contributors to the application period make it clear that the time frame could be dynamic and individualistic, depending on the exploration target (transit time and duration of exploration) and the technology of the exploration probe (transit time). After the expiration of the application period, applications for the exploration target would again be permissible. To prevent the previously mentioned stockpiling of inventory rights, credible proof of an imminent exploration intention would need to be part of the application process, for example, a fixed launch contract or the advanced build status of the exploration probe. Such a mechanism would not contradict the statement in the OST that outer space shall be free for both exploration and scientific investigation. Applications would not apply to purely scientific exploration. An application would only be necessary as a prerequisite for mining. Even resource prospecting could take place without an application (for whatever reason), with a subsequent application comprising in situ data already gathered. 
For such cases, the application process would need to provide a short period for objections to enable the secretive explorer to make their efforts public. The publication of the application for the mining rights, which is nothing more than a statement of intention to explore, thus provides a strong measure for avoiding conflict. The transparency of where exploration spacecraft are located and, at a later stage, where mining activities take place, provides additional benefits for the sustainable use of space, trust building and deterrence against malign misuse of mining technology. Involuntary spacecraft collisions of competitors in deep space are prevented by the reduction of exploration efforts at the same destination through the application for mining rights by one applicant at a time. As pointed out by Newman and Williamson 20, this is relevant because space debris does not de-orbit in deep space as in the case of LEO. Deep space may be vast, but the velocities involved mean that small debris particles are no less dangerous. Considering NEO mining with fleets of small spacecraft, malfunctions and/or destructive events could create debris clouds crossing Earth's orbit around the sun on a regular basis, presenting another danger to satellites in Earth's own orbit. Thus, by effectively preventing the collision of two spacecraft, one source of debris creation can be mitigated through this regulation mechanism. With respect to Deudney's 11 scepticism of asteroid mining and the dual-use character of technology to manipulate orbits of celestial bodies, it has to be stated that this potential is truly inherent to asteroid mining. An asteroid redirect mission for scientific purposes was pursued by NASA 49 before reorientation towards a manned lunar mission. 
In one way or another, each type of asteroid mining will require the delivery of the targeted resource to a destination via a comparable technology as formerly envisioned by NASA, be it as a raw material or a useable resource processed in situ, even if this is not necessarily done through redirecting the whole asteroid and placing it in a lunar orbit. However, to be misused as a weapon, space mined resources would have to surpass a certain mass threshold to survive atmospheric entry at the target. This seems unfeasible for currently discussed mining concepts using small-scale spacecraft as described in this article. Redirecting larger masses or whole asteroids would require far more powerful mining vessels or small amounts of thrust over long periods of time. The continuous, (for a mining activity) untypical change in the orbit of an asteroid would make a redirect attempt with hostile intent easily identifiable, effectively deterring such an activity in the first place by ensuring the identification of the aggressor long before the projectile hits its target. The proposed database would provide a catalogue of asteroids with exploration and mining activities in place that should be tracked more closely because of their interaction with spacecraft. This would, in fact, be necessary per se as a precaution to avoid catastrophic mishaps, such as the accidental change of a NEO's orbit to intercept Earth by changing its mass through mining.
Space mining fails now due to profitability and unsafe tech, which only the CP solves.
Steffen 21 Olaf Steffen, Olaf is a scientist at the Institute of Composite Structures and Adaptive Systems at the German Aerospace Center. 
12-2-2021, "Explore to Exploit: A Data-Centred Approach to Space Mining Regulation," Institute of Composite Structures and Adaptive Systems, German Aerospace Center, https://www.sciencedirect.com/science/article/pii/S0265964621000515 accessed 12/12/21 Adam The data-driven mechanism also addresses another potential risk of an emerging space-based resource economy: the reinforcing of the incontestable market positions of the market leaders based on an advantage in knowledge unattainable by new competitors. Explorations of celestial bodies will have a likelihood of failing from the perspective of the actual value of the explored object vs. the expected value. In this case, the costs of exploration would be a loss for the company, which could be significant and possibly ruinous considering the budgets needed for contemporary space agency-led exploration missions. Sanchez and McInnes 5 explicitly mention the uncertainties in object distribution models used in their asteroid distribution study and for the conclusions drawn concerning reachable object masses with certain delta-v capabilities of spacecraft. With an increasing number of exploration missions led by a company, the data collected may lead to better in-house models and a higher probability of exploring the ‘right’ body for the value/resources aimed at. This may even provide information on the best spacecraft designs for matching the targeted objects’ orbit distribution. This risk is known from the digital platform economy, where the companies that are now leading have an uncatchable advantage in user data compared with market newcomers, translatable to a more refined and comfortable user experience, attracting additional users and thus offering superior services to business customers. This also holds true for space mining companies. 
Through their lack of legacy mission data, market newcomers would have a higher risk of misallocating exploration missions, making investments in those companies riskier than in established companies. To avoid the preferred investment in a single or a few companies, the risk of the investment in emerging companies is reduced by the proposed mechanism by ensuring the equal access to data for market newcomers and established companies alike. From a prospecting risk perspective, the market entrance of a new company becomes progressively less risky for investors with increasing amounts of publicly available exploration data, promoting progressive and dynamic development. The long lead times of asteroid mining ventures coincide with a long time frame for an ROI. The exclusive mining rights granted after the exploration phase give investors security half-way into their space mining endeavours. The proposed tradability of the rights offers an early chance of gaining investment proceeds. It also offers the possibility of new business models: the classical asteroid mining system concept, as shown by Andrews et al. 43, for example, covers exploration, exploitation and resource transfer. This maximises the investment needed to develop the technologies required for the entire process chain. Giving exploration a value could lead to a division of labour. Dedicated prospecting companies could emerge, providing mining companies with the data and mining rights to a body with the specific resource profile they are seeking. In this way, the investment needed for a successful mining endeavour is divided between different specialised companies. This considerably reduces the risk for investors as well as the investment needed for a company to meet their business goals, which are now aimed at just a particular part of the overall space mining endeavour. Third-party applications for mining rights should be possible to allow a mining company to subcontract to exploration companies. 
Such a regulatory mechanism design would also be more easily inclusive of less developed countries. They could simply contract exploration missions made affordable through economies of scale to become part of the emerging space mining economy as holders of tradeable mining rights. Through a wise selection of such missions’ targets, they could gain powerful positions of influence.
12/20/21
JF - CP - Regulation
Tournament: Palm Classic | Round: Doubles | Opponent: Sequoia AS | Judge: Annabelle Long, Spencer Paul, Jonathan Meza
Counterplan Text - States ought to:
- ban anti-satellite weapons
- mandate a transition to zero-emissions modes of transportation
- ban the use of environmentally harmful housecleaning products
- ban ozone-depleting pesticides
- ban nitrous oxide
That solves
GreenDiary n/d Environmental News and Blog, “How to prevent Ozone depletion (and what would happen if we don’t)” https://greendiary.com/5-ways-prevent-ozone-depletion.html
One of the easiest ways to reduce damages caused to the ozone layer is by limiting the use of vehicles. This is because vehicular emissions eventually result in the release of smog. This in turn also damages the ozone layer causing it to deteriorate. If you are looking for ways on how to prevent ozone depletion, then you do have certain effective option. You can choose to take the public transport or use a bicycle. Another great way to restrict the use of car is by opting for Car Pooling. If you do want to use a vehicle, then it is recommended to switch to an electric or hybrid vehicle. Even better, you can opt for vehicles that run solely on solar power. Scroll to the end of the article for a list of the same.
2. Use eco-friendly household cleaning products
Usage of eco-friendly and natural cleaning products for household chores is a great way to prevent ozone depletion. This is because many of these cleaning agents contain toxic chemicals that interfere with the ozone layer. A lot of supermarkets and health stores sell cleaning products that are toxic-free and made out of natural ingredients.
3. Avoid using pesticides and prevent ozone depletion
Pesticides may be an easy solution for getting rid of weed, but are harmful for the ozone layer. The best solution for this would be to try using natural remedies, rather than heading out for pesticides. 
You can perhaps try to weed manually or mow your garden consistently so as to avoid weed-growth. Or else, try Urban Aerofarming, which requires less water, less space and little to no amount of pesticides. To know more about Urban Aerofarms, scroll down. You can check out the different DIY ideas to make your own eco-friendly pesticides at home to prevent ozone depletion.
4. Developing stringent regulations for rocket launches
The world is progressing at a drastic pace. As we progress on various scientific discoveries, the need of the hour also requires people to travel out of space. The number of rocket launches has increased drastically. This in turn is equally damaging the ozone layer in many ways. A study shows that the harm caused by rocket launches would outpace the harm caused due to CFCs. At present, the global rocket launches do not contribute hugely to ozone layer depletion. Due to the advancement of the space industry, it will become a major contributor to ozone depletion. All types of rocket engines result in combustion by products that are ozone-destroying compounds that are expelled directly in the middle and upper stratosphere layer – near the ozone layer.
5. Banning the use of dangerous nitrous oxide
In the late 70’s the world was taken by surprise with a study that triggered a red alert pertaining to the destruction caused to the ozone layer. It had all the necessary information that helped us to understand what exactly was going on. Even the facts and figures mentioned in the study clearly pointed out towards the alarming rate of how the ozone layer was being depleted. Nations around the globe got together in 1989 and formed the Montreal Protocol. The main aim behind this was to stop the usage of CFCs. However, the protocol did not include nitrous oxide which is the most fatal chemical that can destroy the ozone layer and is still in use. 
Governments across the world should take a strong stand for banning the use of this harmful compound to save the ozone layer.
6. Avoiding Ozonolysis Purifiers
Are we risking our health and environment with the development of new technology? We believe that air purifiers are an effective way to fight air pollutants but they can actually have the harmful effects, which we are not aware of. New technology has allowed companies to make products which can “freshen” air by producing ozone which is not healthy to humans in large quantities. These ozone layers can actually react with existing particles in the air and make them more dangerous.
CP solves by implementing arms control measures – their 1AC article
Blatt 20 Talia, joint concentration in Social Studies and Integrative Biology at Harvard, specialization in East Asian geopolitics and security issues “Anti-Satellite Weapons and the Emerging Space Arms Race,” Harvard International Review, May 26, 2020, https://hir.harvard.edu/anti-satellite-weapons-and-the-emerging-space-arms-race/ TG
The second viewpoint calls for an end to the arms race not by winning it but by calling it off entirely, through comprehensive space arms control. Such regulations are complicated and have a long history, but could be a more sustainable solution than an endless proliferation of weapons. The first iteration of arms control in space came in the 1960s. The 1963 Partial Test Ban Treaty (PTBT) banned nuclear weapons tests in outer space, and the more comprehensive 1967 Outer Space Treaty (OST), considered the cornerstone of peaceful space development, prohibited any military activity on celestial bodies including stationing weapons of mass destruction (WMD) in space. Both treaties are still in effect today, but despite additional treaties in recent decades, there are still no international regulations banning weapons other than WMD in space. The most recent attempt at an ASAT ban was proposed by Russia and China in 2014. 
A revision of a draft from 2008, the Treaty on Prevention of the Placement of Weapons in Outer Space and of the Threat or Use of Force Against Outer Space Objects (PPWT) was rejected by the United States because it lacked verification and permitted the stockpiling of terrestrial-based ASAT systems. It only banned space-based ASATs, which would enable China and Russia to continue developing ground-launched systems known as direct-ascent ASATs. The PPWT was an empty solution for an arms race, clearly designed to benefit Russia and China rather than prevent additional weapons development. But a comprehensive agreement that the US, Russia, and China all find satisfactory seems unlikely. The Proposed Prevention of an Arms Race in Space Treaty (PAROS) has been discussed since the 1980s without much progress. Perhaps a more feasible solution is a limited test ban treaty: an agreement to stop testing debris-producing ASATs. It has precedent—the PTBT successfully prevented the testing of nuclear weapons in space—and could stave off the worst effects of debris accumulation by eliminating debris-producing tests. Additionally, in the long term, a test ban could reduce countries’ confidence in their ASATs; capabilities atrophy without regular testing, meaning countries would be less likely to base their military strategies on ASATs in the event of a conflict. By banning specific systems, a test ban treaty is not too vague as to be unenforceable like the PPWT, but it could be limited enough to not affect broader space development. Russia and China might find the terms acceptable; after all, debris threatens their satellites too, and they have a reciprocal interest in reining in US weapons development. It’s hard to conceive of a future for humanity that does not feature space in some capacity. Big businesses are already pursuing space commerce more aggressively, with visions of space colonies and large scale resource extraction. 
But the continued, unchecked proliferation of ASATs could close off space entirely—and help induce a nuclear war. Now, more than ever, it remains urgent and imperative that international negotiations reach an arms control treaty.
2/18/22
JF - CP - SBSP
Tournament: Harvard | Round: 1 | Opponent: Lake Highland Prep YA | Judge: Eric He
CP Text: The People’s Republic of China should:
- end all private appropriation of outer space except for Space-Based Solar Power.
- increase its space-based solar power cooperation with the United States.
- de-militarize its civilian, military, and commercial space industry.
- dismantle and remove ASAT weapons.
- dismantle the People’s Liberation Army.
- end China-Russian cooperation in Outer Space.
Space-Based Solar Power constitutes Appropriation.
Matignon 19 Louis De Gouyon Matignon 4-15-2019 "THE LEGAL STATUS OF CHINESE SPACE-BASED SOLAR POWER STATIONS" https://www.spacelegalissues.com/the-legal-status-of-chinese-space-based-solar-power-stations/ (PhD in space law)Elmer
Near-Earth space is formed of different orbital layers. Terrestrial orbits are limited common resources and inherently repugnant to any appropriation: they are not property in the sense of law. Orbits and frequencies are res communis (a Latin term derived from Roman law that preceded today’s concepts of the commons and common heritage of mankind; it has relevance in international law and common law). It’s the first-come, first-served principle that applies to orbital positioning, which without any formal acquisition of sovereignty, records a promptness behaviour to which it grants an exclusive grabbing effect of the space concerned. Geostationary orbit is a limited but permanent resource: this de facto appropriation by the first-comers – the developed countries – of the orbit and the frequencies is protected by Space Law and the International Telecommunications Law. The challenge by developing countries of grabbing these resources is therefore unjustified on the basis of existing law. Denying new entrants geostationary-access or making access more difficult does not constitute appropriation; it simply results from the traditional system of distribution of access rights. 
The practice of developed States is based on free access and priority given to the first satellites placed in geostationary orbit. The geostationary orbit is part of outer space and, as such, the customary principle of non-appropriation and the 1967 Space Treaty apply to it. The equatorial countries have claimed sovereignty, then preferential rights over this space. These claims are contrary to the 1967 Treaty and customary law. However, they testify to the concern of the equatorial countries, shared by developing countries, in the face of saturation and seizure of geostationary positions by developed countries. The regime of res communis of outer space in Space Law (free access and non-appropriation) does not meet the demand of the developing countries that their possibilities of future access to the geostationary orbit and associated radio frequencies are guaranteed. New rules appear necessary and have been envisaged to ensure the access of all States to these positions and frequencies. As a conclusion, we may say that those Chinese space-based solar power stations would be considered space objects, the solar energy they would be exploiting would be free of use, and the orbital position they would occupy would have to obey the first-come, first-served principle that applies to orbital positioning. Concerning Article I of the 1967 Outer Space Treaty, which imposes that “The exploration and use of outer space, including the Moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all mankind”, “the benefit and in the interests of all countries” doesn’t prohibit private exploitation, as it is the case with satellite navigation, satellite television and commercial satellite imagery for example. Chinese Private Companies are pursuing Space-Based Solar Power. 
McKirdy and Fang 19 Euan McKirdy and Nanlin Fang 3-3-2019 "Space power plant and a mission to Mars: China’s new plans to conquer the final frontier" https://www.cnn.com/2019/03/03/asia/china-plans-solar-power-in-space-intl/index.html (Journalists at CNN)Elmer
China Aerospace Science and Technology Corporation plans to launch small solar satellites that can harness energy in space as soon as 2021. Then it will test larger plants capable of advanced functions, such as beaming energy back to Earth via lasers. A receiving station will be built in Xian, around 500 miles northeast of the Chinese city of Chongqing. The city is a regional space hub where a facility to develop the solar power farms has been founded. By 2050, the company plans that a full-sized space-based solar plant would be ready for commercial use, the Chinese media report said.
China’s key – they’ve been working on this for decades longer than anyone else.
Rosenbaum and Russo 19 Eric Rosenbaum and Donovan Russo 3-17-2019 "China plans a solar power play in space that NASA abandoned decades ago" https://www.cnbc.com/2019/03/15/china-plans-a-solar-power-play-in-space-that-nasa-abandoned-long-ago.html (Senior Editors at CNBC)Elmer
The space race heats up China’s ambitions in space rival that of the United States. Its two main objectives were originally human spaceflight (accomplished in 2003) and a permanent Chinese space station, which is coming closer to reality — it announced in early March that a manned space station similar to ISS is now on schedule for 2022, earlier than expected. As the two geopolitical foes increasingly turn their attention to a technological and military race beyond the earth’s atmosphere, space-based solar power projects are an overlooked, often criticized idea. 
But with China recently announcing that within the next decade it expects to finish the high voltage power transmission and wireless energy tests that would be needed for a space-based solar power system, the concept is likely to get renewed attention. All of the plans in the space race have potential implications for a new military build-out in space of increasing relevance to the world’s powers. The Trump administration formalized plans in February for a branch of U.S. military known as the Space Force. The solar power station plans being contemplated by China include the launch of small- to medium-sized solar power projects in the stratosphere to generate electricity between 2021 and 2025, followed by a space-based solar power station that can generate at least a megawatt of electricity in 2030, and a commercial-scale solar power plant in space by 2050. “The dramatically stated interest on the part of the Chinese will do a lot to engender interest,” Mankins said. “Around a decade ago the Chinese started working seriously on this, and about five years ago they started coming to international meetings. Before that, they were in the dark. Now they are coming out of the shadows and talking much more openly about this.” He added, “There is absolutely progress from the Chinese at this point. This is not posturing; this is a real plan from serious organizations with revered scientists in China. They have a perfectly good technical plan, and they can do it by 2030,” Mankins said, describing a small-scale solar power project producing megawatts of electricity, but not a commercial-scale project able to produce gigawatts needed to compete with utilities. A space-based solar power station would capture the sun’s energy that never makes it to the planet and use laser beams to send the energy back to Earth to meet energy demand needs. 
China said in a recent announcement about the project that a big advantage of space-based solar power is its ability to offer energy supply on a constant basis and with greater intensity than terrestrial solar farms. One of the issues with renewable-energy projects like solar and wind power plants are their intermittency — that refers to the fact that the sun isn’t shining and the wind is not blowing 24-hours a day, limiting the periods of time during which these projects can be a source of power generation. Space-based solar would not only offer a solution to intermittency, but also delivery. Today, utility power generation is regional, if not local, but electricity generated in space and near the equator could be beamed almost anywhere across the globe, except for the poles. “You could beam electricity from Canada to the Tierra del Fuego at the southern tip of South America from a satellite at equator,” Mankins said. Roughly one billion people live in the Americas. Hopkins said the current Chinese view is, “We want to be major dominant power in space solar power by 2050. This has the potential to really turn the geopolitics in our favor if we are a leader, so let’s look at it seriously.” Meanwhile, the U.S. says, “Are you kidding? Let’s worry about something else.” New life for a ‘losing proposition’ The idea of collecting solar power in space was popularized by science fiction author Isaac Asimov in 1941 in a short story that envisioned space stations that could transport energy from the sun to other planets with microwave beams. In 1968, Asimov’s vision was brought closer to reality when an American aerospace engineer named Peter Glaser wrote the first formal proposal for a solar-based system in space. After experimenting in the 1970s with transporting solar power, Glaser was able to land a contract with NASA to fund research. 
However, the project suffered with changes in federal administrations and it was not until 1999 that NASA’s solar power exploratory research and technology program jumped back in to study the issue. In the end “NASA didn’t want to do it,” Mankins said. But a lot has changed, especially relating to the cost equation and rapid advances in technologies like robotics. A NASA spokeswoman said it is not currently studying space-based solar power for use on Earth. It is exploring several advanced power and energy technologies to enable long-duration human exploration of the Moon and Mars, such as its Kilopower project, a small, lightweight nuclear fission system that could power future outposts on the Moon to support astronauts, rovers and surface operations. Next year, this project is expected to transition from ground-based testing to an in-space demonstration mission. Space-Based Solar Power solves Paris Goals that checks back Warming. Ravisetti 21 Monisha Ravisetti 11-8-2021 "Harvesting energy with space solar panels could power the Earth 24/7" https://www.cnet.com/news/harvesting-energy-with-space-solar-panels-could-power-the-earth-247/ (Science Writer at CNet)Elmer Solar power has been a key part of humanity's clean energy repertoire. We spread masses of sunlight-harvesting panels on solar fields, and many people power their homes by decorating their roofs with the rectangles. But there's a caveat to this wonderful power source. Solar panels can't collect energy at night. To work at peak efficiency, they need as much sunlight as possible. So to maximize these sun catchers' performance, researchers are toying with a plan to send them to a place where the sun never sets: outer space. Theoretically, if a bunch of solar panels were blasted into orbit, they'd soak up the sun even on the foggiest days and the darkest nights, storing an enormous amount of power. If that power were wirelessly beamed down to Earth, our planet could breathe in renewable clean energy, 24/7. 
That would significantly reduce our carbon footprint. Against the backdrop of a worsening climate crisis, the success of space-based solar power could be more important than ever. The state of the climate is in the spotlight right now as world leaders gather in Glasgow, Scotland, for the COP26 summit, which has been called the "world's best last chance" to get the crisis under control. CNET Science is highlighting a few futuristic strategies intended to aid countries in cutting back on human-generated carbon emissions. Next-generation tech like space-based solar power can't solve our climate problems -- we still need to rapidly decarbonize our energy systems -- but green innovation could help achieve the goals of the Paris Agreement: Limit global warming to well below 2 degrees Celsius (3.6 degrees Fahrenheit) by the end of the century. An unlimited supply of renewable energy from the sun might help us do that. Warming causes Extinction Kareiva 18, Peter, and Valerie Carranza. "Existential risk due to ecosystem collapse: Nature strikes back." Futures 102 (2018): 39-50. (Ph.D. in ecology and applied mathematics from Cornell University, director of the Institute of the Environment and Sustainability at UCLA, Pritzker Distinguished Professor in Environment and Sustainability at UCLA)Re-cut by Elmer In summary, six of the nine proposed planetary boundaries (phosphorous, nitrogen, biodiversity, land use, atmospheric aerosol loading, and chemical pollution) are unlikely to be associated with existential risks. They all correspond to a degraded environment, but in our assessment do not represent existential risks. However, the three remaining boundaries (climate change, global freshwater cycle, and ocean acidification) do pose existential risks. 
This is because of intrinsic positive feedback loops, substantial lag times between system change and experiencing the consequences of that change, and the fact these different boundaries interact with one another in ways that yield surprises. In addition, climate, freshwater, and ocean acidification are all directly connected to the provision of food and water, and shortages of food and water can create conflict and social unrest. Climate change has a long history of disrupting civilizations and sometimes precipitating the collapse of cultures or mass emigrations (McMichael, 2017). For example, the 12th century drought in the North American Southwest is held responsible for the collapse of the Anasazi pueblo culture. More recently, the infamous potato famine of 1846–1849 and the large migration of Irish to the U.S. can be traced to a combination of factors, one of which was climate. Specifically, 1846 was an unusually warm and moist year in Ireland, providing the climatic conditions favorable to the fungus that caused the potato blight. As is so often the case, poor government had a role as well—as the British government forbade the import of grains from outside Britain (imports that could have helped to redress the ravaged potato yields). Climate change intersects with freshwater resources because it is expected to exacerbate drought and water scarcity, as well as flooding. Climate change can even impair water quality because it is associated with heavy rains that overwhelm sewage treatment facilities, or because it results in higher concentrations of pollutants in groundwater as a result of enhanced evaporation and reduced groundwater recharge. Ample clean water is not a luxury—it is essential for human survival. Consequently, cities, regions and nations that lack clean freshwater are vulnerable to social disruption and disease. Finally, ocean acidification is linked to climate change because it is driven by CO2 emissions just as global warming is. 
With close to 20% of the world’s protein coming from oceans (FAO, 2016), the potential for severe impacts due to acidification is obvious. Less obvious, but perhaps more insidious, is the interaction between climate change and the loss of oyster and coral reefs due to acidification. Acidification is known to interfere with oyster reef building and coral reefs. Climate change also increases storm frequency and severity. Coral reefs and oyster reefs provide protection from storm surge because they reduce wave energy (Spalding et al., 2014). If these reefs are lost due to acidification at the same time as storms become more severe and sea level rises, coastal communities will be exposed to unprecedented storm surge—and may be ravaged by recurrent storms. A key feature of the risk associated with climate change is that mean annual temperature and mean annual rainfall are not the variables of interest. Rather it is extreme episodic events that place nations and entire regions of the world at risk. These extreme events are by definition “rare” (once every hundred years), and changes in their likelihood are challenging to detect because of their rarity, but are exactly the manifestations of climate change that we must get better at anticipating (Diffenbaugh et al., 2017). Society will have a hard time responding to shorter intervals between rare extreme events because in the lifespan of an individual human, a person might experience as few as two or three extreme events. How likely is it that you would notice a change in the interval between events that are separated by decades, especially given that the interval is not regular but varies stochastically? A concrete example of this dilemma can be found in the past and expected future changes in storm-related flooding of New York City. 
The highly disruptive flooding of New York City associated with Hurricane Sandy represented a flood height that occurred once every 500 years in the 18th century, and that occurs now once every 25 years, but is expected to occur once every 5 years by 2050 (Garner et al., 2017). This change in frequency of extreme floods has profound implications for the measures New York City should take to protect its infrastructure and its population, yet because of the stochastic nature of such events, this shift in flood frequency is an elevated risk that will go unnoticed by most people. 4. The combination of positive feedback loops and societal inertia is fertile ground for global environmental catastrophes Humans are remarkably ingenious, and have adapted to crises throughout their history. Our doom has been repeatedly predicted, only to be averted by innovation (Ridley, 2011). However, the many stories of human ingenuity successfully addressing existential risks such as global famine or extreme air pollution represent environmental challenges that are largely linear, have immediate consequences, and operate without positive feedbacks. For example, the fact that food is in short supply does not increase the rate at which humans consume food—thereby increasing the shortage. Similarly, massive air pollution episodes such as the London fog of 1952 that killed 12,000 people did not make future air pollution events more likely. In fact it was just the opposite—the London fog sent such a clear message that Britain quickly enacted pollution control measures (Stradling, 2016). Food shortages, air pollution, water pollution, etc. send immediate signals to society of harm, which then trigger a negative feedback of society seeking to reduce the harm. In contrast, today’s great environmental crisis of climate change may cause some harm but there are generally long time delays between rising CO2 concentrations and damage to humans. 
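The Garner et al. return-period figures above translate directly into lifetime odds, which is what makes the shift so stark. A minimal sketch (Python; it assumes each year is independent with exceedance probability 1/T — a simplification the card itself flags by noting that the interval varies stochastically):

```python
def prob_at_least_one(return_period_years, horizon_years=30):
    """P(at least one exceedance over the horizon), independent years, p = 1/T."""
    p_annual = 1.0 / return_period_years
    return 1.0 - (1.0 - p_annual) ** horizon_years

# Odds of at least one Sandy-scale flood over a 30-year horizon, at the
# 18th-century, present-day, and projected-2050 frequencies cited above
for t in (500, 25, 5):
    print(t, round(prob_at_least_one(t), 3))
```

Under the 18th-century frequency the 30-year risk is about 6%; at the present-day frequency it is about 70%; at the projected 2050 frequency it is essentially certain — which is the "elevated risk that will go unnoticed" point in numerical form.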
The consequence of these delays is an absence of urgency; thus although 70% of Americans believe global warming is happening, only 40% think it will harm them (http://climatecommunication.yale.edu/visualizations-data/ycom-us-2016/). Secondly, unlike past environmental challenges, the Earth’s climate system is rife with positive feedback loops. In particular, as CO2 increases and the climate warms, that very warming can cause more CO2 release which further increases global warming, and then more CO2, and so on. Table 2 summarizes the best documented positive feedback loops for the Earth’s climate system. These feedbacks can be neatly categorized into carbon cycle, biogeochemical, biogeophysical, cloud, ice-albedo, and water vapor feedbacks. As important as it is to understand these feedbacks individually, it is even more essential to study the interactive nature of these feedbacks. Modeling studies show that when interactions among feedback loops are included, uncertainty increases dramatically and there is a heightened potential for perturbations to be magnified (e.g., Cox, Betts, Jones, Spall, and Totterdell, 2000; Hajima, Tachiiri, Ito, and Kawamiya, 2014; Knutti and Rugenstein, 2015; Rosenfeld, Sherwood, Wood, and Donner, 2014). This produces a wide range of future scenarios. Positive feedbacks in the carbon cycle involve the enhancement of future carbon contributions to the atmosphere due to some initial increase in atmospheric CO2. This happens because as CO2 accumulates, it reduces the efficiency with which oceans and terrestrial ecosystems sequester carbon, which in turn feeds back to exacerbate climate change (Friedlingstein et al., 2001). Warming can also increase the rate at which organic matter decays and carbon is released into the atmosphere, thereby causing more warming (Melillo et al., 2017). Increases in food shortages and lack of water are also of major concern when biogeophysical feedback mechanisms perpetuate drought conditions. 
The underlying mechanism here is that losses in vegetation increase the surface albedo, which suppresses rainfall, and thus enhances future vegetation loss and more suppression of rainfall—thereby initiating or prolonging a drought (Charney, Stone, and Quirk, 1975). To top it off, overgrazing depletes the soil, leading to augmented vegetation loss (Anderies, Janssen, and Walker, 2002). Climate change often also increases the risk of forest fires, as a result of higher temperatures and persistent drought conditions. The expectation is that forest fires will become more frequent and severe with climate warming and drought (Scholze, Knorr, Arnell, and Prentice, 2006), a trend for which we have already seen evidence (Allen et al., 2010). Tragically, the increased severity and risk of Southern California wildfires recently predicted by climate scientists (Jin et al., 2015) were realized in December 2017, with the largest fire in the history of California (the “Thomas fire” that burned 282,000 acres, https://www.vox.com/2017/12/27/16822180/thomas-fire-california-largest-wildfire). This catastrophic fire embodies the sorts of positive feedbacks and interacting factors that could catch humanity off-guard and produce a true apocalyptic event. Record-breaking rains produced an extraordinary flush of new vegetation, which then dried out as record heat waves and dry conditions took hold, coupled with stronger than normal winds, and ignition. Of course the record-fire released CO2 into the atmosphere, thereby contributing to future warming. Out of all types of feedbacks, water vapor and the ice-albedo feedbacks are the most clearly understood mechanisms. Losses in reflective snow and ice cover drive up surface temperatures, leading to even more melting of snow and ice cover—this is known as the ice-albedo feedback (Curry, Schramm, and Ebert, 1995). 
As snow and ice continue to melt at a more rapid pace, millions of people may be displaced by flooding risks as a consequence of sea level rise near coastal communities (Biermann and Boas, 2010; Myers, 2002; Nicholls et al., 2011). The water vapor feedback operates when warmer atmospheric conditions strengthen the saturation vapor pressure, which creates a warming effect given water vapor’s strong greenhouse gas properties (Manabe and Wetherald, 1967). Global warming tends to increase cloud formation because warmer temperatures lead to more evaporation of water into the atmosphere, and warmer temperature also allows the atmosphere to hold more water. The key question is whether this increase in clouds associated with global warming will result in a positive feedback loop (more warming) or a negative feedback loop (less warming). For decades, scientists have sought to answer this question and understand the net role clouds play in future climate projections (Schneider et al., 2017). Clouds are complex because they both have a cooling (reflecting incoming solar radiation) and warming (absorbing incoming solar radiation) effect (Lashof, DeAngelo, Saleska, and Harte, 1997). The type of cloud, altitude, and optical properties combine to determine how these countervailing effects balance out. Although still under debate, it appears that in most circumstances the cloud feedback is likely positive (Boucher et al., 2013). For example, models and observations show that increasing greenhouse gas concentrations reduces the low-level cloud fraction in the Northeast Pacific at decadal time scales. This then has a positive feedback effect and enhances climate warming since less solar radiation is reflected by the atmosphere (Clement, Burgman, and Norris, 2009). The key lesson from the long list of potentially positive feedbacks and their interactions is that runaway climate change, and runaway perturbations have to be taken as a serious possibility. 
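The runaway-feedback worry in this card can be stated with the usual geometric-series amplification: if each degree of warming feeds back an additional fraction f of a degree, the total response is the initial forcing divided by (1 − f), diverging as f approaches 1. A toy illustration (Python; the feedback fractions below are illustrative assumptions of mine, not values from Kareiva and Carranza):

```python
def amplified_warming(initial_warming_c, feedback_fraction):
    """Sum of the series dT0 * (1 + f + f^2 + ...) = dT0 / (1 - f) for f < 1."""
    if feedback_fraction >= 1.0:
        return float("inf")   # the 'runaway' case: the series diverges
    return initial_warming_c / (1.0 - feedback_fraction)

# 1 C of direct warming under increasingly strong (hypothetical) net feedbacks
for f in (0.0, 0.3, 0.6, 0.9):
    print(f, round(amplified_warming(1.0, f), 2))
```

The card's point about interacting feedbacks maps onto the steep nonlinearity here: moving f from 0.6 to 0.9 quadruples the response, which is why uncertainty about the feedbacks dominates uncertainty in the projections.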
Table 2 is just a snapshot of the type of feedbacks that have been identified (see Supplementary material for a more thorough explanation of positive feedback loops). However, this list is not exhaustive and the possibility of undiscovered positive feedbacks portends even greater existential risks. The many environmental crises humankind has previously averted (famine, ozone depletion, London fog, water pollution, etc.) were averted because of political will based on solid scientific understanding. We cannot count on complete scientific understanding when it comes to positive feedback loops and climate change.
2/18/22
JF - CP - Space Elevators
Tournament: Harvard | Round: 5 | Opponent: Hunter AH | Judge: Henry Eberhart 2 Text – Private Appropriation of Outer Space except for Space Elevators is Unjust. Space Elevators constitute Appropriation – they impede orbits. Matignon 19 Louis de Gouyon Matignon 3-3-2019 "LEGAL ASPECTS OF THE SPACE ELEVATOR TRANSPORTATION SYSTEM" https://www.spacelegalissues.com/space-law-legal-aspects-of-the-space-elevator-transportation-system/ PhD in space law (co-supervised by both Philippe Delebecque, from Université Paris 1 Panthéon-Sorbonne, France, and Christopher D. Johnson, from Georgetown University; regularly writes articles on the website Space Legal Issues so as to popularise space law and public international law)Elmer An Earth-based space elevator would consist of a cable with one end attached to the surface near the equator and the other end in space beyond geostationary orbit. An orbit is the curved path through which objects in space move around a planet or a star. The 1967 Treaty’s regime and customary law enshrine the principle of non-appropriation and freedom of access to orbital positions. Space Law and International Telecommunication Laws combine to protect this use against any interference. The majority of space-launched objects are satellites that are launched in Earth’s orbit (a very small part of space objects – scientific objects for space exploration – are launched into outer space beyond terrestrial orbits). It is important to note that an orbit does not exist as a physical object: satellites describe orbits by obeying the general laws of universal attraction. Depending on the launching techniques and parameters, the orbital trajectory of a satellite may vary. Sun-synchronous satellites fly over a given location constantly at the same time in local civil time: they are used for remote sensing, meteorology or the study of the atmosphere. 
Geostationary satellites are placed in a very high orbit; they give an impression of immobility because they remain permanently above the same terrestrial point (they are mainly used for telecommunications and television broadcasting). A geocentric orbit or Earth orbit involves any object orbiting Planet Earth, such as the Moon or artificial satellites. Geocentric (having the Earth as its centre) orbits are organised as follows: 1) Low Earth orbit (LEO): geocentric orbits with altitudes (the height of an object above the average surface of the Earth’s oceans) from 100 to 2 000 kilometres. Satellites in LEO have a small momentary field of view, only able to observe and communicate with a fraction of the Earth at a time, meaning a network or constellation of satellites is required in order to provide continuous coverage. Satellites in lower regions of LEO also suffer from fast orbital decay (in orbital mechanics, decay is a gradual decrease of the distance between two orbiting bodies at their closest approach, the periapsis, over many orbital periods), requiring either periodic reboosting to maintain a stable orbit, or launching replacement satellites when old ones re-enter. 2) Medium Earth orbit (MEO), also known as an intermediate circular orbit: geocentric orbits ranging in altitude from 2 000 kilometres to just below geosynchronous orbit at 35 786 kilometres. The most common use for satellites in this region is for navigation, communication, and geodetic/space environment science. The most common altitude is approximately 20 000 kilometres, which yields an orbital period of twelve hours. 3) Geosynchronous orbit (GSO) and geostationary orbit (GEO) are orbits around Earth at an altitude of 35 786 kilometres matching Earth’s sidereal rotation period. All geosynchronous and geostationary orbits have a semi-major axis of 42 164 kilometres. 
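The figures in this card — a roughly twelve-hour period near 20 000 km, a sidereal-day period at 35 786 km, and the 42 164 km geosynchronous semi-major axis — all follow from Kepler's third law. A quick sanity check (Python; assumes the standard gravitational parameter for Earth and a 6 371 km mean radius):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH_KM = 6371.0         # mean Earth radius, km
SIDEREAL_DAY_S = 86164.1    # Earth's sidereal rotation period, s

def orbital_period_hours(altitude_km):
    """Kepler's third law, T = 2*pi*sqrt(a^3 / mu), with a from Earth's centre."""
    a_m = (R_EARTH_KM + altitude_km) * 1000.0
    return 2 * math.pi * math.sqrt(a_m**3 / MU_EARTH) / 3600.0

# MEO at ~20 000 km altitude: close to the twelve-hour period the card cites
print(round(orbital_period_hours(20000), 1))   # ~11.8 h

# GEO altitude of 35 786 km: one sidereal day (~23.93 h)
print(round(orbital_period_hours(35786), 2))

# Radius at which a circular orbit matches Earth's rotation, in km (~42 164)
omega = 2 * math.pi / SIDEREAL_DAY_S
print(round((MU_EARTH / omega**2) ** (1 / 3) / 1000.0))
```

That the standard constants reproduce the card's numbers is a useful check that the quoted altitudes are the conventional ones.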
A geostationary orbit stays exactly above the equator, whereas a geosynchronous orbit may swing north and south to cover more of the Earth’s surface. Communications satellites and weather satellites are often placed in geostationary orbits, so that the satellite antennae (located on Earth) that communicate with them do not have to rotate to track them, but can be pointed permanently at the position in the sky where the satellites are located. 4) High Earth orbit: geocentric orbits above the altitude of 35 786 kilometres. The competing forces of gravity, which is stronger at the lower end, and the outward/upward centrifugal force, which is stronger at the upper end, would result in the cable being held up, under tension, and stationary over a single position on Earth. With the tether deployed, climbers could repeatedly climb the tether to space by mechanical means, releasing their cargo to orbit. Climbers could also descend the tether to return cargo to the surface from orbit. Private Companies are pursuing Space Elevators. Alfano 15 Andrea Alfano 8-18-2015 “All Of These Companies Are Working On A Space Elevator” https://www.techtimes.com/articles/77612/20150818/companies-working-space-elevator.htm (Writer at the Tech Times)Elmer Space elevators are solid proof that any mundane object sounds way cooler if you stick the word "space" in front of it. But there's much more than coolness at stake when building a space elevator – this technology has the potential to revolutionize space transportation, and the Canadian private space company Thoth Technology that was recently awarded a patent for its space elevator design isn't the only company in the game. One of the other major players is a U.S.-based company called LiftPort Group, founded by space entrepreneur Michael Laine in 2003. Its plan for a space elevator is vastly different from the one for which Thoth received a patent, however. 
Whereas Thoth's plans entail tethering a 12-mile-high inflatable space elevator to the Earth, LiftPort is shooting for the moon. Originally, LiftPort had planned to build an Earth elevator, too, but it abandoned the idea in 2007 in favor of building a lunar elevator. The basic design for a lunar elevator is an anchor in the moon that is attached to a cable that extends to a space station situated at a very special point. Known as a Lagrange Point, this is the gravitational tipping point between the Earth and the moon, where their gravitational pulls essentially cancel one another out. A robot could then travel up and down the tether, ferrying cargo between the moon and the station. Out farther in space, a counterweight would balance out the system. Both types of space elevator are intended to increase space access, but in very different ways. Thoth's Earth elevator aims to make launches easier by starting off 12 miles above the Earth's surface. LiftPort's space elevator aims to increase access to the moon in particular, because it is much easier to launch a rocket to the Lagrange Point and dock it at a space station than it is to get to the moon directly. There's a third major company based in Japan called Obayashi Corp. whose plans look like a hybrid of Thoth's and LiftPort's. Obayashi is not a space company, however – it's actually a construction company. Like Thoth, Obayashi plans to build an Earth elevator. But its Earth elevator would consist of a cable tethered to the blue planet, a robotic cargo-carrier, a space station, and a counterweight. It essentially looks like LiftPort's plans, but stuck to the Earth instead of to the moon. They’re feasible. Smith 17 Vincent Smith 6-21-2017 "3 Challenges for Engineering A Space Elevator" https://www.engineering.com/story/3-challenges-for-engineering-a-space-elevator (Engineer)Elmer There's a lot of junk orbiting Earth. 
Thousands of hours have been poured into previous NASA missions, ensuring the least possible contamination by even the tiniest motes of dust and dirt. The kinds of instrumentation that would monitor a space elevator would need to be similarly discerning. However, the fact that it would be a permanent fixture means that sooner or later, a space elevator would cross paths with meteors and even remnants of previous space missions left behind as space debris. The extreme of this phenomenon even has a name: Kessler Syndrome, where the density of low earth debris becomes so large that nothing can pass it safely into outer space. This cascading problem of space debris collisions was featured in the film Gravity. As Bullock and Clooney can tell you, this phenomenon could cause catastrophic damage to the overall structure (or knock it off balance, returning to our 'oscillation' concerns). Edwards recognized this, and devoted an entire section of his report to addressing it. According to the report, part of dealing with this obstacle is recognizing and tracking low-earth orbit objects large enough to do damage to the structure. According to Section 10.3 of the report, “A study was done at Johnson Space Center on the construction of a system that could track objects down to 1cm in size with 100m accuracy using effectively current technology. This is very close to the tracking network we would need for the space elevator.” For situations in which avoidance is not always possible (the amount of low-earth orbit debris increases significantly from altitudes of approximately 300 to 1,000 miles), Edwards posits that increasing the thickness of the cable will make it robust enough to withstand all but the largest of objects, which could be tracked and avoided ahead of time using the systems previously mentioned. 
Even for these exceptional pieces of debris, Edwards illustrates in a section simply labeled “Meteors” that only direct impact by an object over 3 cm in diameter, with enough force to stay on the initial plane of impact (as opposed to being deflected or redirected by contact with the elevator apparatus), would create the kind of catastrophic damage that we associate with a complete severing of the cable. Designing the cable with curvature and panels specifically for deflection has been proposed by Edwards as well as several other survivability reports, including this one, put together for the 2010 International Space Elevator Consortium (ISEC). Definitive answers as to the effectiveness of these measures are hopefully forthcoming, but it's at least comforting to know that there are first, second, and third lines of defense prepared for just such occasions. Regardless of completion, Elevators spur investment in Nanotechnology Liam O’Brien 16. University of Wollongong. 07/2016. “Nanotechnology in Space.” Young Scientists Journal; Canterbury, no. 19, p. 22. Nanotechnology is at the forefront of scientific development, continuing to astound and innovate. Likewise, the space industry is rapidly increasing in sophistication and competition, with companies such as SpaceX, Blue Origin and Virgin Galactic becoming increasingly prevalent in what could become a new commercial space race. The various space programs over the past 60 years have led to a multitude of beneficial impacts for everyday society. Nanotechnology, through research and development in space, has the potential to do the same. Potential applications of nanotechnology in space are numerous, and many of them have the potential to capture and inspire generations to come. One of these applications is the space elevator. By using carbon nanotubes, a super light yet strong material, this concept would be an actual physical structure from the surface of the Earth to an altitude of approximately 36 000 km. 
The tallest building in the world would fit into this elevator over 42 000 times. The counterweight, used to keep the elevator taut, is proposed to be an asteroid. This would need to be at a distance of 100 000 km, a quarter of the distance to the moon. The benefits of such a structure would be enormous. 95% of a space shuttle's weight at take-off is fuel, costing US$ 20 000 per kilogram to send something into space. However, with a space elevator the cost per kilogram can be reduced to as little as US$ 200. Exploration to other planets can begin at the tower, and travel to and from the moon could become as simple as a morning commute to work. Solar sails provide the means to travel large distances at incredible speeds. Much like sails on a boat use wind, the solar sail uses light as a source of propulsion. Ideally these sails would be kilometres in length and only a few micrometres in thickness. This provides us with the ability to travel at speeds previously unheard of. Using carbon nanotubes once again, a solar sail has the capability to travel at 39 756 km/s, which is 13% of the speed of light! This sail could reach Pluto in an astonishing 1.7 days, and Alpha Centauri in just 32 years. Space travel to other planets, other stars, could be possible with solar sails. The Planetary Society is funding a solar sail of its own, and has successfully launched one into orbit. NASA has also sent a sail into orbit, allowing it to burn up in the atmosphere after 240 days. Investing time and resources into nanotechnology for space exploration has benefits for society today. Materials such as graphene are being used in modern manufacturing at an increasing rate as the applications become utilised. Carbon nanotubes will change the way we think about materials and their strength. These nanotubes have a tensile strength one hundred times that of steel, yet are only a sixth of the weight. 
Imagine light weight vehicles using less petrol and energy as well as being just as strong as regular vehicles. With the potential to revolutionize the way we think about space travel, nanotechnology has a bright future. As a new field of science, it has the capability to push the human race to the outer reaches of our galaxy and hopefully one day to other stars. It will inspire generations of explorers and dreamers to challenge themselves and advance the human race into the next era. As Richard Feynman said in his 1959 talk 'There's Plenty of Room at the Bottom': "A field in which little has been done, but in which an enormous amount can be done." There is still plenty more to achieve. Nanomaterials solve Warming and Water Scarcity. Khullar 17 Bhavya Khullar 9-4-2017 "Nanomaterials Could Combat Climate Change and Reduce Pollution" https://www.scientificamerican.com/article/nanomaterials-could-combat-climate-change-and-reduce-pollution/ (Former Programme Officer with the Food Safety and Toxins Unit, Centre for Science and Environment (CSE))Elmer August 18, 2017 — The list of environmental problems that the world faces may be huge, but some strategies for solving them are remarkably small. First explored for applications in microscopy and computing, nanomaterials—materials made up of units that are each thousands of times smaller than the thickness of a human hair—are emerging as useful for tackling threats to our planet’s well-being. Scientists across the globe are developing nanomaterials that can efficiently use carbon dioxide from the air, capture toxic pollutants from water and degrade solid waste into useful products. “Nanomaterials could help us mitigate pollution. They are efficient catalysts and mostly recyclable. Now, they have to become economical for commercialization and better to replace present-day technologies completely,” says Arun Chattopadhyay, a member of the chemistry faculty at the Center for Nanotechnology, Indian Institute of Technology Guwahati. 
HARVESTING CO2 To help slow the climate-changing rise in atmospheric CO2 levels, researchers have developed nanoCO2 harvesters that can suck atmospheric carbon dioxide and deploy it for industrial purposes. “Nanomaterials can convert carbon dioxide into useful products like alcohol. The materials could be simple chemical catalysts or photochemical in nature that work in the presence of sunlight,” says Chattopadhyay, who has been working with nanomaterials to tackle environmental pollutants for more than a decade. Chattopadhyay isn’t alone. Many research groups are working to address a problem that, if solved, could be a holy grail in combating climate change: how to pull CO2 out of the atmosphere and convert it into useful products. Nanoparticles offer a promising approach to this because they have a large surface-area-to-volume ratio for interacting with CO2 and properties that allow them to facilitate the conversion of CO2 into other things. The challenge is to make them economically viable. Researchers have tried everything from metallic to carbon-based nanoparticles to reduce the cost, but so far they haven’t become efficient enough for industrial-scale application. One of the most recent points of progress in this area is work by scientists at the CSIR-Indian Institute of Petroleum and the Lille University of Science and Technology in France. The researchers developed a nanoCO2 harvester that uses water and sunlight to convert atmospheric CO2 into methanol, which can be employed as an engine fuel, a solvent, an antifreeze agent and a diluent of ethanol. 
Made by wrapping a layer of modified graphene oxide around spheres of copper zinc oxide and magnetite, the material looks like a miniature golf ball, captures CO2 more efficiently than conventional catalysts and can be readily reused, according to Suman Jain, senior scientist of the Indian Institute of Petroleum, Dehradun in India, who developed the nanoCO2 harvester. Jain says that the nanoCO2 harvester has a large molecular surface area and captures more CO2 than a conventional catalyst with similar surface area would, which makes the conversion more efficient. But due to their small size, the nanoparticles have a tendency to clump up, making them inactive with prolonged use. Jain adds that synthesizing useful nanoparticle-based materials is also challenging because it’s hard to make the particles a consistent size. Chattopadhyay says the efficiency of such materials can be improved further, providing hope for useful application in the future. CLEANSING WATER Most toxic dyes used in textile and leather industries can be captured with nanoparticles. “Water pollutants such as dyes from human-created waste like those from tanneries could get to natural sources of water like deep tube wells or groundwater if wastewater from these industries is left untreated,” says Chattopadhyay. “This problem is rather difficult to solve.” An international group of researchers led by professor Elzbieta Megiel of the University of Warsaw in Poland reports that nanomaterials have been widely studied for removing heavy metals and dyes from wastewater. According to the research team, adsorption processes using materials containing magnetic nanoparticles are highly effective and can be easily performed because such nanoparticles have a large number of sites on their surface that can capture pollutants and don’t readily degrade in water. 
Chattopadhyay adds that appropriately designed magnetic nanomaterials can be used to separate pollutants such as arsenic, lead, chromium and mercury from water. However, the nanotech-based approach has to be more efficient than conventional water purification technology to make it worthwhile. In addition to removing dyes and metals, nanomaterials can also be used to clean up oil spills. Researchers led by Pulickel Ajayan at Rice University in Houston, Texas, have developed a reusable nanosponge that can remove oil from contaminated seawater. The technology shows promise, but it’s not yet ready for prime time. “While the nanosponge is a good material to deal with oil spills, these results are confined to the laboratory,” says Ashok Ganguli, director of the Institute of Nano Science and Technology in Mohali, Punjab, India. “Large-scale synthesis is required if we have to remove oil from seawater which is spread over several miles.” Although scientists have yet to successfully synthesize nanomaterials for cleaning oil spills at a scale large enough for practical application, “this may become possible with more research and industry partnerships,” Chattopadhyay says. Warming causes Extinction Kareiva 18, Peter, and Valerie Carranza. "Existential risk due to ecosystem collapse: Nature strikes back." Futures 102 (2018): 39-50. (Ph.D. in ecology and applied mathematics from Cornell University, director of the Institute of the Environment and Sustainability at UCLA, Pritzker Distinguished Professor in Environment and Sustainability at UCLA)Re-cut by Elmer In summary, six of the nine proposed planetary boundaries (phosphorous, nitrogen, biodiversity, land use, atmospheric aerosol loading, and chemical pollution) are unlikely to be associated with existential risks. They all correspond to a degraded environment, but in our assessment do not represent existential risks. 
However, the three remaining boundaries (climate change, global freshwater cycle, and ocean acidification) do pose existential risks. This is because of intrinsic positive feedback loops, substantial lag times between system change and experiencing the consequences of that change, and the fact these different boundaries interact with one another in ways that yield surprises. In addition, climate, freshwater, and ocean acidification are all directly connected to the provision of food and water, and shortages of food and water can create conflict and social unrest. Climate change has a long history of disrupting civilizations and sometimes precipitating the collapse of cultures or mass emigrations (McMichael, 2017). For example, the 12th century drought in the North American Southwest is held responsible for the collapse of the Anasazi pueblo culture. More recently, the infamous potato famine of 1846–1849 and the large migration of Irish to the U.S. can be traced to a combination of factors, one of which was climate. Specifically, 1846 was an unusually warm and moist year in Ireland, providing the climatic conditions favorable to the fungus that caused the potato blight. As is so often the case, poor government had a role as well—as the British government forbade the import of grains from outside Britain (imports that could have helped to redress the ravaged potato yields). Climate change intersects with freshwater resources because it is expected to exacerbate drought and water scarcity, as well as flooding. Climate change can even impair water quality because it is associated with heavy rains that overwhelm sewage treatment facilities, or because it results in higher concentrations of pollutants in groundwater as a result of enhanced evaporation and reduced groundwater recharge. Ample clean water is not a luxury—it is essential for human survival. Consequently, cities, regions and nations that lack clean freshwater are vulnerable to social disruption and disease. 
Finally, ocean acidification is linked to climate change because it is driven by CO2 emissions just as global warming is. With close to 20 percent of the world’s protein coming from oceans (FAO, 2016), the potential for severe impacts due to acidification is obvious. Less obvious, but perhaps more insidious, is the interaction between climate change and the loss of oyster and coral reefs due to acidification. Acidification is known to interfere with oyster reef building and coral reefs. Climate change also increases storm frequency and severity. Coral reefs and oyster reefs provide protection from storm surge because they reduce wave energy (Spalding et al., 2014). If these reefs are lost due to acidification at the same time as storms become more severe and sea level rises, coastal communities will be exposed to unprecedented storm surge—and may be ravaged by recurrent storms. A key feature of the risk associated with climate change is that mean annual temperature and mean annual rainfall are not the variables of interest. Rather it is extreme episodic events that place nations and entire regions of the world at risk. These extreme events are by definition “rare” (once every hundred years), and changes in their likelihood are challenging to detect because of their rarity, but are exactly the manifestations of climate change that we must get better at anticipating (Diffenbaugh et al., 2017). Society will have a hard time responding to shorter intervals between rare extreme events because in the lifespan of an individual human, a person might experience as few as two or three extreme events. How likely is it that you would notice a change in the interval between events that are separated by decades, especially given that the interval is not regular but varies stochastically? A concrete example of this dilemma can be found in the past and expected future changes in storm-related flooding of New York City. 
The highly disruptive flooding of New York City associated with Hurricane Sandy represented a flood height that occurred once every 500 years in the 18th century, and that occurs now once every 25 years, but is expected to occur once every 5 years by 2050 (Garner et al., 2017). This change in frequency of extreme floods has profound implications for the measures New York City should take to protect its infrastructure and its population, yet because of the stochastic nature of such events, this shift in flood frequency is an elevated risk that will go unnoticed by most people. 4. The combination of positive feedback loops and societal inertia is fertile ground for global environmental catastrophes Humans are remarkably ingenious, and have adapted to crises throughout their history. Our doom has been repeatedly predicted, only to be averted by innovation (Ridley, 2011). However, the many stories of human ingenuity successfully addressing existential risks such as global famine or extreme air pollution represent environmental challenges that are largely linear, have immediate consequences, and operate without positive feedbacks. For example, the fact that food is in short supply does not increase the rate at which humans consume food—thereby increasing the shortage. Similarly, massive air pollution episodes such as the London fog of 1952 that killed 12,000 people did not make future air pollution events more likely. In fact it was just the opposite—the London fog sent such a clear message that Britain quickly enacted pollution control measures (Stradling, 2016). Food shortages, air pollution, water pollution, etc. send immediate signals to society of harm, which then trigger a negative feedback of society seeking to reduce the harm. In contrast, today’s great environmental crisis of climate change may cause some harm but there are generally long time delays between rising CO2 concentrations and damage to humans. 
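An analytic gloss on the Garner et al. return periods above (my own back-of-the-envelope sketch, not part of the card): treating each year as an independent 1-in-T draw, the chance a New Yorker sees at least one such flood over a 30-year span is small under the historical return period but near-certain under the projected one.

```python
def prob_at_least_one(return_period_years: float, span_years: int) -> float:
    """P(at least one event in span_years), assuming independent years
    with annual probability 1 / return_period_years."""
    return 1 - (1 - 1 / return_period_years) ** span_years

# Return periods from the card: 1-in-500 (18th century), 1-in-25 (today),
# 1-in-5 (projected for 2050), over a 30-year horizon.
for period in (500, 25, 5):
    print(f"1-in-{period}-year flood, 30-year span: "
          f"{prob_at_least_one(period, 30):.0%}")
```

The three probabilities come out near 6 percent, 71 percent, and essentially 100 percent: the same "rare" label hides a qualitative change in lived risk, which is exactly the card's point about shifts going unnoticed.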
The consequence of these delays is an absence of urgency; thus although 70 percent of Americans believe global warming is happening, only 40 percent think it will harm them (http://climatecommunication.yale.edu/visualizations-data/ycom-us-2016/). Secondly, unlike past environmental challenges, the Earth’s climate system is rife with positive feedback loops. In particular, as CO2 increases and the climate warms, that very warming can cause more CO2 release which further increases global warming, and then more CO2, and so on. Table 2 summarizes the best documented positive feedback loops for the Earth’s climate system. These feedbacks can be neatly categorized into carbon cycle, biogeochemical, biogeophysical, cloud, ice-albedo, and water vapor feedbacks. As important as it is to understand these feedbacks individually, it is even more essential to study the interactive nature of these feedbacks. Modeling studies show that when interactions among feedback loops are included, uncertainty increases dramatically and there is a heightened potential for perturbations to be magnified (e.g., Cox, Betts, Jones, Spall, and Totterdell, 2000; Hajima, Tachiiri, Ito, and Kawamiya, 2014; Knutti and Rugenstein, 2015; Rosenfeld, Sherwood, Wood, and Donner, 2014). This produces a wide range of future scenarios. Positive feedbacks in the carbon cycle involve the enhancement of future carbon contributions to the atmosphere due to some initial increase in atmospheric CO2. This happens because as CO2 accumulates, it reduces the efficiency with which oceans and terrestrial ecosystems sequester carbon, which in turn feeds back to exacerbate climate change (Friedlingstein et al., 2001). Warming can also increase the rate at which organic matter decays and carbon is released into the atmosphere, thereby causing more warming (Melillo et al., 2017). Increases in food shortages and lack of water are also of major concern when biogeophysical feedback mechanisms perpetuate drought conditions. 
The underlying mechanism here is that losses in vegetation increase the surface albedo, which suppresses rainfall, and thus enhances future vegetation loss and more suppression of rainfall—thereby initiating or prolonging a drought (Charney, Stone, and Quirk, 1975). To top it off, overgrazing depletes the soil, leading to augmented vegetation loss (Anderies, Janssen, and Walker, 2002). Climate change often also increases the risk of forest fires, as a result of higher temperatures and persistent drought conditions. The expectation is that forest fires will become more frequent and severe with climate warming and drought (Scholze, Knorr, Arnell, and Prentice, 2006), a trend for which we have already seen evidence (Allen et al., 2010). Tragically, the increased severity and risk of Southern California wildfires recently predicted by climate scientists (Jin et al., 2015), was realized in December 2017, with the largest fire in the history of California (the “Thomas fire” that burned 282,000 acres, https://www.vox.com/2017/12/27/16822180/thomas-fire-california-largest-wildfire). This catastrophic fire embodies the sorts of positive feedbacks and interacting factors that could catch humanity off-guard and produce a true apocalyptic event. Record-breaking rains produced an extraordinary flush of new vegetation, that then dried out as record heat waves and dry conditions took hold, coupled with stronger than normal winds, and ignition. Of course the record-fire released CO2 into the atmosphere, thereby contributing to future warming. Out of all types of feedbacks, water vapor and the ice-albedo feedbacks are the most clearly understood mechanisms. Losses in reflective snow and ice cover drive up surface temperatures, leading to even more melting of snow and ice cover—this is known as the ice-albedo feedback (Curry, Schramm, and Ebert, 1995). 
As snow and ice continue to melt at a more rapid pace, millions of people may be displaced by flooding risks as a consequence of sea level rise near coastal communities (Biermann and Boas, 2010; Myers, 2002; Nicholls et al., 2011). The water vapor feedback operates when warmer atmospheric conditions strengthen the saturation vapor pressure, which creates a warming effect given water vapor’s strong greenhouse gas properties (Manabe and Wetherald, 1967). Global warming tends to increase cloud formation because warmer temperatures lead to more evaporation of water into the atmosphere, and warmer temperatures also allow the atmosphere to hold more water. The key question is whether this increase in clouds associated with global warming will result in a positive feedback loop (more warming) or a negative feedback loop (less warming). For decades, scientists have sought to answer this question and understand the net role clouds play in future climate projections (Schneider et al., 2017). Clouds are complex because they have both a cooling effect (reflecting incoming solar radiation) and a warming effect (trapping outgoing infrared radiation) (Lashof, DeAngelo, Saleska, and Harte, 1997). The type of cloud, altitude, and optical properties combine to determine how these countervailing effects balance out. Although still under debate, it appears that in most circumstances the cloud feedback is likely positive (Boucher et al., 2013). For example, models and observations show that increasing greenhouse gas concentrations reduces the low-level cloud fraction in the Northeast Pacific at decadal time scales. This then has a positive feedback effect and enhances climate warming since less solar radiation is reflected by the atmosphere (Clement, Burgman, and Norris, 2009). The key lesson from the long list of potentially positive feedbacks and their interactions is that runaway climate change, and runaway perturbations, have to be taken as a serious possibility. 
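A minimal sketch of why positive feedbacks matter so much (a standard textbook idealization added for illustration, not something from the Kareiva card): if an initial warming dT0 triggers a further f degrees of warming per degree warmed, the geometric series converges to dT0 / (1 - f) for f < 1 and runs away as f approaches 1.

```python
def amplified_warming(initial_warming_c: float, feedback_fraction: float) -> float:
    """Equilibrium warming with a linear positive feedback: the series
    dT0 * (1 + f + f^2 + ...) sums to dT0 / (1 - f) when f < 1."""
    if feedback_fraction >= 1:
        raise ValueError("f >= 1 means runaway feedback: no finite equilibrium")
    return initial_warming_c / (1 - feedback_fraction)

print(amplified_warming(1.0, 0.3))  # modest feedback: ~1.4x the initial warming
print(amplified_warming(1.0, 0.9))  # strong feedback: 10x
```

The interacting-feedback point in the card maps onto the steepness of 1/(1 - f) near f = 1: small upward revisions in the combined feedback fraction produce disproportionately large, eventually unbounded, warming estimates.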
Table 2 is just a snapshot of the type of feedbacks that have been identified (see Supplementary material for a more thorough explanation of positive feedback loops). However, this list is not exhaustive and the possibility of undiscovered positive feedbacks portends even greater existential risks. The many environmental crises humankind has previously averted (famine, ozone depletion, London fog, water pollution, etc.) were averted because of political will based on solid scientific understanding. We cannot count on complete scientific understanding when it comes to positive feedback loops and climate change.
2/20/22
JF - CP - Space Elevators
Tournament: Harvard | Round: 5 | Opponent: Hunter AH | Judge: Henry Eberhart 2 Text – Private Appropriation of Outer Space except for Space Elevators is Unjust. Space Elevators constitute Appropriation – they impede orbits. Matignon 19 Louis de Gouyon Matignon 3-3-2019 "LEGAL ASPECTS OF THE SPACE ELEVATOR TRANSPORTATION SYSTEM" https://www.spacelegalissues.com/space-law-legal-aspects-of-the-space-elevator-transportation-system/ PhD in space law (co-supervised by both Philippe Delebecque, from Université Paris 1 Panthéon-Sorbonne, France, and Christopher D. Johnson, from Georgetown University || regularly writes articles on the website Space Legal Issues so as to popularise space law and public international law) Elmer An Earth-based space elevator would consist of a cable with one end attached to the surface near the equator and the other end in space beyond geostationary orbit. An orbit is the curved path through which objects in space move around a planet or a star. The 1967 Treaty’s regime and customary law enshrine the principle of non-appropriation and freedom of access to orbital positions. Space Law and International Telecommunication Laws combined to protect this use against any interference. The majority of space-launched objects are satellites that are launched into Earth’s orbit (a very small part of space objects – scientific objects for space exploration – are launched into outer space beyond terrestrial orbits). It is important to specify that an orbit is not a physical object: satellites describe orbits by obeying the general laws of universal attraction. Depending on the launching techniques and parameters, the orbital trajectory of a satellite may vary. Sun-synchronous satellites fly over a given location constantly at the same time in local civil time: they are used for remote sensing, meteorology or the study of the atmosphere. 
Geostationary satellites are placed in a very high orbit; they give an impression of immobility because they remain permanently vertically above the same terrestrial point (they are mainly used for telecommunications and television broadcasting). A geocentric orbit or Earth orbit involves any object orbiting Planet Earth, such as the Moon or artificial satellites. Geocentric (having the Earth as its centre) orbits are organised as follows: 1) Low Earth orbit (LEO): geocentric orbits with altitudes (the height of an object above the average surface of the Earth’s oceans) from 100 to 2 000 kilometres. Satellites in LEO have a small momentary field of view, only able to observe and communicate with a fraction of the Earth at a time, meaning a network or constellation of satellites is required in order to provide continuous coverage. Satellites in lower regions of LEO also suffer from fast orbital decay (in orbital mechanics, decay is a gradual decrease of the distance between two orbiting bodies at their closest approach, the periapsis, over many orbital periods), requiring either periodic reboosting to maintain a stable orbit, or launching replacement satellites when old ones re-enter. 2) Medium Earth orbit (MEO), also known as an intermediate circular orbit: geocentric orbits ranging in altitude from 2 000 kilometres to just below geosynchronous orbit at 35 786 kilometres. The most common use for satellites in this region is for navigation, communication, and geodetic/space environment science. The most common altitude is approximately 20 000 kilometres, which yields an orbital period of twelve hours. 3) Geosynchronous orbit (GSO) and geostationary orbit (GEO) are orbits around Earth at an altitude of 35 786 kilometres matching Earth’s sidereal rotation period. All geosynchronous and geostationary orbits have a semi-major axis of 42 164 kilometres. 
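The altitudes and periods Matignon lists all follow from Kepler's third law; a short sanity check (my own illustration using standard constants, not part of the card):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3           # mean Earth radius, m

def orbital_period_hours(altitude_km: float) -> float:
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km * 1e3  # semi-major axis, m
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 3600

print(orbital_period_hours(35786))  # GEO altitude in the card: ~23.9 h, one sidereal day
print(orbital_period_hours(20000))  # MEO altitude in the card: roughly twelve hours
```

The card's 42 164 km semi-major axis is just Earth's equatorial radius plus the 35 786 km altitude, and the resulting ~86 164-second period is the sidereal day that makes a geostationary satellite appear fixed over one spot.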
A geostationary orbit stays exactly above the equator, whereas a geosynchronous orbit may swing north and south to cover more of the Earth’s surface. Communications satellites and weather satellites are often placed in geostationary orbits, so that the satellite antennae (located on Earth) that communicate with them do not have to rotate to track them, but can be pointed permanently at the position in the sky where the satellites are located. 4) High Earth orbit: geocentric orbits above the altitude of 35 786 kilometres. The competing forces of gravity, which is stronger at the lower end, and the outward/upward centrifugal force, which is stronger at the upper end, would result in the cable being held up, under tension, and stationary over a single position on Earth. With the tether deployed, climbers could repeatedly climb the tether to space by mechanical means, releasing their cargo to orbit. Climbers could also descend the tether to return cargo to the surface from orbit. Private Companies are pursuing Space Elevators. Alfano 15 Andrea Alfano 8-18-2015 “All Of These Companies Are Working On A Space Elevator” https://www.techtimes.com/articles/77612/20150818/companies-working-space-elevator.htm (Writer at the Tech Times)Elmer Space elevators are solid proof that any mundane object sounds way cooler if you stick the word "space" in front of it. But there's much more than coolness at stake when building a space elevator – this technology has the potential to revolutionize space transportation, and the Canadian private space company Thoth Technology that was recently awarded a patent for its space elevator design isn't the only company in the game. One of the other major players is a U.S.-based company called LiftPort Group, founded by space entrepreneur Michael Laine in 2003. Its plan for a space elevator is vastly different from the one for which Thoth received a patent, however. 
Whereas Thoth's plans entail tethering a 12-mile-high inflatable space elevator to the Earth, LiftPort is shooting for the moon. Originally, LiftPort had planned to build an Earth elevator, too, but it abandoned the idea in 2007 in favor of building a lunar elevator. The basic design for a lunar elevator is an anchor in the moon that is attached to a cable that extends to a space station situated at a very special point. Known as a Lagrange Point, this is the gravitational tipping point between the Earth and the moon, where their gravitational pulls essentially cancel one another out. A robot could then travel up and down the tether, ferrying cargo between the moon and the station. Out farther in space, a counterweight would balance out the system. Both types of space elevator are intended to increase space access, but in very different ways. Thoth's Earth elevator aims to make launches easier by starting off 12 miles above the Earth's surface. LiftPort's space elevator aims to increase access to the moon in particular, because it is much easier to launch a rocket to the Lagrange Point and dock it at a space station than it is to get to the moon directly. There's a third major company based in Japan called Obayashi Corp. whose plans look like a hybrid of Thoth's and LiftPort's. Obayashi is not a space company, however – it's actually a construction company. Like Thoth, Obayashi plans to build an Earth elevator. But its Earth elevator would consist of a cable tethered to the blue planet, a robotic cargo-carrier, a space station, and a counterweight. It essentially looks like LiftPort's plans, but stuck to the Earth instead of to the moon. They’re feasible. Smith 17 Vincent Smith 6-21-2017 "3 Challenges for Engineering A Space Elevator" https://www.engineering.com/story/3-challenges-for-engineering-a-space-elevator (Engineer)Elmer There's a lot of junk orbiting Earth. 
Thousands of hours have been poured into previous NASA missions, ensuring the least possible contamination by even the tiniest motes of dust and dirt. The kinds of instrumentation that would monitor a space elevator would need to be similarly discerning. However, the fact that it would be a permanent fixture means that sooner or later, a space elevator would cross paths with meteors and even remnants of previous space missions left behind as space debris. The extreme of this phenomenon even has a name: Kessler Syndrome, where the density of low earth debris becomes so large that nothing can pass it safely into outer space. This cascading problem of space debris collisions was featured in the film Gravity. As Bullock and Clooney can tell you, this phenomenon could cause catastrophic damage to the overall structure (or knock it off balance, returning to our 'oscillation' concerns). Edwards recognized this, and devoted an entire section of his report to addressing it. According to the report, part of dealing with this obstacle is recognizing and tracking low-earth orbit objects large enough to do damage to the structure. According to Section 10.3 of the report, “A study was done at Johnson Space Center on the construction of a system that could track objects down to 1cm in size with 100m accuracy using effectively current technology. This is very close to the tracking network we would need for the space elevator.” For situations in which avoidance is not always possible (the amount of low-earth orbit debris increases significantly from altitudes of approximately 300 to 1,000 miles), Edwards posits that increasing the thickness of the cable will make it robust enough to withstand all but the largest of objects, which could be tracked and avoided ahead of time using the systems previously mentioned. 
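To see why the 1 cm tracking threshold mentioned above is meaningful, here is a rough kinetic-energy estimate (my own illustration; the aluminum density and ~7.8 km/s closing speed are typical-LEO assumptions, not figures from Edwards' report):

```python
import math

DENSITY_AL = 2700.0     # kg/m^3, aluminum, a common spacecraft material (assumed)
CLOSING_SPEED = 7800.0  # m/s, typical LEO orbital speed (assumed)

def debris_kinetic_energy_j(diameter_m: float) -> float:
    """Kinetic energy of a solid aluminum sphere hitting at LEO closing speed."""
    radius = diameter_m / 2
    mass = DENSITY_AL * (4 / 3) * math.pi * radius ** 3
    return 0.5 * mass * CLOSING_SPEED ** 2

print(debris_kinetic_energy_j(0.01))  # a 1 cm sphere
```

Under these assumptions a 1 cm fleck carries on the order of 40 kJ, roughly the energy of 10 g of TNT, which is why even centimeter-scale objects justify the tracking-and-avoidance network the report describes.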
Even for these exceptional pieces of debris, Edwards illustrates in a section simply labeled “Meteors” that only direct impact by an object over 3cm in diameter, with enough force to stay on the initial plane of impact (as opposed to being deflected or redirected by contact with the elevator apparatus), would create the kind of catastrophic damage that we associate with a complete severing of the cable. Designing the cable with curvature and panels specifically for deflection has been proposed by both Edwards as well as several other survivability reports, including this one, put together for the 2010 International Space Elevator Consortium (ISEC). Definitive answers as to the effectiveness of these measures are hopefully forthcoming, but it's at least comforting to know that there are first, second, and third lines of defense prepared for just such occasions. Regardless of completion, Elevators spur investment in Nanotechnology Liam O’Brien 16. University of Wollongong. 07/2016. “Nanotechnology in Space.” Young Scientists Journal; Canterbury, no. 19, p. 22. Nanotechnology is at the forefront of scientific development, continuing to astound and innovate. Likewise, the space industry is rapidly increasing in sophistication and competition, with companies such as SpaceX, Blue Origin and Virgin Galactic becoming increasingly prevalent in what could become a new commercial space race. The various space programs over the past 60 years have led to a multitude of beneficial impacts for everyday society. Nanotechnology, through research and development in space, has the potential to do the same. Potential applications of nanotechnology in space are numerous; many of them have the potential to capture and inspire generations to come. One of these applications is the space elevator. By using carbon nanotubes, a super light yet strong material, this concept would be an actual physical structure from the surface of the Earth to an altitude of approximately 36 000 km. 
The tallest building in the world would fit into this elevator over 42 000 times. The counterweight, used to keep the elevator taut, is proposed to be an asteroid. This would need to be at a distance of 100 000 km, a quarter of the distance to the moon. The benefits of such a structure would be enormous. 95 percent of a space shuttle's weight at take-off is fuel, costing US$ 20 000 per kilogram to send something into space. However, with a space elevator the cost per kilogram can be reduced to as little as US$ 200. Exploration to other planets can begin at the tower, and travel to and from the moon could become as simple as a morning commute to work. Solar sails provide the means to travel large distances at incredible speeds. Much like sails on a boat use wind, the solar sail uses light as a source of propulsion. Ideally these sails would be kilometres in length and only a few micrometres in thickness. This provides us with the ability to travel at speeds previously unheard of. Using carbon nanotubes once again, a solar sail has the capability to travel at 39 756 km/s, which is 13 percent of the speed of light! This sail could reach Pluto in an astonishing 1.7 days, and Alpha Centauri in just 32 years. Space travel to other planets, other stars, could be possible with solar sails. The Planetary Society is funding a solar sail of its own, and has successfully launched one into orbit. NASA has also sent a sail into orbit, allowing it to burn up in the atmosphere after 240 days. Investing time and resources into nanotechnology for space exploration has benefits for society today. Materials such as graphene are being used in modern manufacturing at an increasing rate as the applications become utilised. Carbon nanotubes will change the way we think about materials and their strength. These nanotubes have a tensile strength one hundred times that of steel, yet are only a sixth of the weight. 
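A quick arithmetic check of the solar sail figures above (my own illustration; Pluto's ~5.9 billion km mean distance and Alpha Centauri's 4.37 light-years are standard values supplied here, not from the card):

```python
C_KM_S = 299_792.458          # speed of light, km/s
SAIL_SPEED_KM_S = 39_756      # sail speed quoted in the card
KM_PER_LIGHT_YEAR = 9.4607e12
PLUTO_MEAN_KM = 5.9e9         # assumed mean Sun-Pluto distance (~39.5 AU)
ALPHA_CEN_LY = 4.37           # assumed distance to Alpha Centauri

fraction_of_c = SAIL_SPEED_KM_S / C_KM_S
days_to_pluto = PLUTO_MEAN_KM / SAIL_SPEED_KM_S / 86_400
years_to_alpha_cen = (ALPHA_CEN_LY * KM_PER_LIGHT_YEAR
                      / SAIL_SPEED_KM_S / (86_400 * 365.25))

print(f"{fraction_of_c:.1%} of c; Pluto in {days_to_pluto:.1f} days; "
      f"Alpha Centauri in {years_to_alpha_cen:.0f} years")
```

This reproduces the card's roughly 13 percent of light speed and 1.7-day Pluto transit, and lands within about a year of its Alpha Centauri figure (the small gap depends on which distance estimate is used).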
Imagine light weight vehicles using less petrol and energy as well as being just as strong as regular vehicles. With the potential to revolutionize the way we think about space travel, nanotechnology has a bright future. As a new field of science, it has the capability to push the human race to the outer reaches of our galaxy and hopefully one day to other stars. It will inspire generations of explorers and dreamers to challenge themselves and advance the human race into the next era. As Richard Feynman said in his 1959 talk 'There's Plenty of Room at the Bottom', "A field in which little has been done, but in which an enormous amount can be done." There is still plenty more to achieve. Nanomaterials solve Warming and Water Scarcity. Khullar 17 Bhavya Khullar 9-4-2017 "Nanomaterials Could Combat Climate Change and Reduce Pollution" https://www.scientificamerican.com/article/nanomaterials-could-combat-climate-change-and-reduce-pollution/ (Former Programme Officer with the Food Safety and Toxins Unit, Centre for Science and Environment (CSE)) Elmer August 18, 2017 — The list of environmental problems that the world faces may be huge, but some strategies for solving them are remarkably small. First explored for applications in microscopy and computing, nanomaterials—materials made up of units that are each thousands of times smaller than the thickness of a human hair—are emerging as useful for tackling threats to our planet’s well-being. Scientists across the globe are developing nanomaterials that can efficiently use carbon dioxide from the air, capture toxic pollutants from water and degrade solid waste into useful products. “Nanomaterials could help us mitigate pollution. They are efficient catalysts and mostly recyclable. Now, they have to become economical for commercialization and better to replace present-day technologies completely,” says Arun Chattopadhyay, a member of the chemistry faculty at the Center for Nanotechnology, Indian Institute of Technology Guwahati. 
HARVESTING CO2 To help slow the climate-changing rise in atmospheric CO2 levels, researchers have developed nanoCO2 harvesters that can suck atmospheric carbon dioxide and deploy it for industrial purposes. “Nanomaterials can convert carbon dioxide into useful products like alcohol. The materials could be simple chemical catalysts or photochemical in nature that work in the presence of sunlight,” says Chattopadhyay, who has been working with nanomaterials to tackle environmental pollutants for more than a decade. Chattopadhyay isn’t alone. Many research groups are working to address a problem that, if solved, could be a holy grail in combating climate change: how to pull CO2 out of the atmosphere and convert it into useful products. Nanoparticles offer a promising approach to this because they have a large surface-area-to-volume ratio for interacting with CO2 and properties that allow them to facilitate the conversion of CO2 into other things. The challenge is to make them economically viable. Researchers have tried everything from metallic to carbon-based nanoparticles to reduce the cost, but so far they haven’t become efficient enough for industrial-scale application. One of the most recent points of progress in this area is work by scientists at the CSIR-Indian Institute of Petroleum and the Lille University of Science and Technology in France. The researchers developed a nanoCO2 harvester that uses water and sunlight to convert atmospheric CO2 into methanol, which can be employed as an engine fuel, a solvent, an antifreeze agent and a diluent of ethanol. 
Made by wrapping a layer of modified graphene oxide around spheres of copper zinc oxide and magnetite, the material looks like a miniature golf ball, captures CO2 more efficiently than conventional catalysts and can be readily reused, according to Suman Jain, senior scientist of the Indian Institute of Petroleum, Dehradun in India, who developed the nanoCO2 harvester. Jain says that the nanoCO2 harvester has a large molecular surface area and captures more CO2 than a conventional catalyst with similar surface area would, which makes the conversion more efficient. But due to their small size, the nanoparticles have a tendency to clump up, making them inactive with prolonged use. Jain adds that synthesizing useful nanoparticle-based materials is also challenging because it’s hard to make the particles a consistent size. Chattopadhyay says the efficiency of such materials can be improved further, providing hope for useful application in the future. CLEANSING WATER Most toxic dyes used in textile and leather industries can be captured with nanoparticles. “Water pollutants such as dyes from human-created waste like those from tanneries could get to natural sources of water like deep tube wells or groundwater if wastewater from these industries is left untreated,” says Chattopadhyay. “This problem is rather difficult to solve.” An international group of researchers led by professor Elzbieta Megiel of the University of Warsaw in Poland reports that nanomaterials have been widely studied for removing heavy metals and dyes from wastewater. According to the research team, adsorption processes using materials containing magnetic nanoparticles are highly effective and can be easily performed because such nanoparticles have a large number of sites on their surface that can capture pollutants and don’t readily degrade in water. 
Chattopadhyay adds that appropriately designed magnetic nanomaterials can be used to separate pollutants such as arsenic, lead, chromium and mercury from water. However, the nanotech-based approach has to be more efficient than conventional water purification technology to make it worthwhile. In addition to removing dyes and metals, nanomaterials can also be used to clean up oil spills. Researchers led by Pulickel Ajayan at Rice University in Houston, Texas, have developed a reusable nanosponge that can remove oil from contaminated seawater. The technology shows promise, but it’s not yet ready for prime time. “While the nanosponge is a good material to deal with oil spills, these results are confined to the laboratory,” says Ashok Ganguli, director of the Institute of Nano Science and Technology in Mohali, Punjab, India. “Large-scale synthesis is required if we have to remove oil from seawater which is spread over several miles.” Although scientists have yet to successfully synthesize nanomaterials for cleaning oil spills at a scale large enough for practical application, “this may become possible with more research and industry partnerships,” Chattopadhyay says. Warming causes Extinction Kareiva 18, Peter, and Valerie Carranza. "Existential risk due to ecosystem collapse: Nature strikes back." Futures 102 (2018): 39-50. (Ph.D. in ecology and applied mathematics from Cornell University, director of the Institute of the Environment and Sustainability at UCLA, Pritzker Distinguished Professor in Environment and Sustainability at UCLA)Re-cut by Elmer In summary, six of the nine proposed planetary boundaries (phosphorous, nitrogen, biodiversity, land use, atmospheric aerosol loading, and chemical pollution) are unlikely to be associated with existential risks. They all correspond to a degraded environment, but in our assessment do not represent existential risks. 
However, the three remaining boundaries (climate change, global freshwater cycle, and ocean acidification) do pose existential risks. This is because of intrinsic positive feedback loops, substantial lag times between system change and experiencing the consequences of that change, and the fact these different boundaries interact with one another in ways that yield surprises. In addition, climate, freshwater, and ocean acidification are all directly connected to the provision of food and water, and shortages of food and water can create conflict and social unrest. Climate change has a long history of disrupting civilizations and sometimes precipitating the collapse of cultures or mass emigrations (McMichael, 2017). For example, the 12th century drought in the North American Southwest is held responsible for the collapse of the Anasazi pueblo culture. More recently, the infamous potato famine of 1846–1849 and the large migration of Irish to the U.S. can be traced to a combination of factors, one of which was climate. Specifically, 1846 was an unusually warm and moist year in Ireland, providing the climatic conditions favorable to the fungus that caused the potato blight. As is so often the case, poor government had a role as well—as the British government forbade the import of grains from outside Britain (imports that could have helped to redress the ravaged potato yields). Climate change intersects with freshwater resources because it is expected to exacerbate drought and water scarcity, as well as flooding. Climate change can even impair water quality because it is associated with heavy rains that overwhelm sewage treatment facilities, or because it results in higher concentrations of pollutants in groundwater as a result of enhanced evaporation and reduced groundwater recharge. Ample clean water is not a luxury—it is essential for human survival. Consequently, cities, regions and nations that lack clean freshwater are vulnerable to social disruption and disease. 
Finally, ocean acidification is linked to climate change because it is driven by CO2 emissions just as global warming is. With close to 20% of the world’s protein coming from oceans (FAO, 2016), the potential for severe impacts due to acidification is obvious. Less obvious, but perhaps more insidious, is the interaction between climate change and the loss of oyster and coral reefs due to acidification. Acidification is known to interfere with oyster reef building and coral reefs. Climate change also increases storm frequency and severity. Coral reefs and oyster reefs provide protection from storm surge because they reduce wave energy (Spalding et al., 2014). If these reefs are lost due to acidification at the same time as storms become more severe and sea level rises, coastal communities will be exposed to unprecedented storm surge—and may be ravaged by recurrent storms. A key feature of the risk associated with climate change is that mean annual temperature and mean annual rainfall are not the variables of interest. Rather it is extreme episodic events that place nations and entire regions of the world at risk. These extreme events are by definition “rare” (once every hundred years), and changes in their likelihood are challenging to detect because of their rarity, but are exactly the manifestations of climate change that we must get better at anticipating (Diffenbaugh et al., 2017). Society will have a hard time responding to shorter intervals between rare extreme events because in the lifespan of an individual human, a person might experience as few as two or three extreme events. How likely is it that you would notice a change in the interval between events that are separated by decades, especially given that the interval is not regular but varies stochastically? A concrete example of this dilemma can be found in the past and expected future changes in storm-related flooding of New York City. 
The highly disruptive flooding of New York City associated with Hurricane Sandy represented a flood height that occurred once every 500 years in the 18th century, and that occurs now once every 25 years, but is expected to occur once every 5 years by 2050 (Garner et al., 2017). This change in frequency of extreme floods has profound implications for the measures New York City should take to protect its infrastructure and its population, yet because of the stochastic nature of such events, this shift in flood frequency is an elevated risk that will go unnoticed by most people. The combination of positive feedback loops and societal inertia is fertile ground for global environmental catastrophes. Humans are remarkably ingenious, and have adapted to crises throughout their history. Our doom has been repeatedly predicted, only to be averted by innovation (Ridley, 2011). However, the many stories of human ingenuity successfully addressing existential risks such as global famine or extreme air pollution represent environmental challenges that are largely linear, have immediate consequences, and operate without positive feedbacks. For example, the fact that food is in short supply does not increase the rate at which humans consume food—thereby increasing the shortage. Similarly, massive air pollution episodes such as the London fog of 1952 that killed 12,000 people did not make future air pollution events more likely. In fact it was just the opposite—the London fog sent such a clear message that Britain quickly enacted pollution control measures (Stradling, 2016). Food shortages, air pollution, water pollution, etc. send immediate signals to society of harm, which then trigger a negative feedback of society seeking to reduce the harm. In contrast, today’s great environmental crisis of climate change may cause some harm but there are generally long time delays between rising CO2 concentrations and damage to humans. 
The consequence of these delays is an absence of urgency; thus although 70% of Americans believe global warming is happening, only 40% think it will harm them (http://climatecommunication.yale.edu/visualizations-data/ycom-us-2016/). Secondly, unlike past environmental challenges, the Earth’s climate system is rife with positive feedback loops. In particular, as CO2 increases and the climate warms, that very warming can cause more CO2 release which further increases global warming, and then more CO2, and so on. Table 2 summarizes the best documented positive feedback loops for the Earth’s climate system. These feedbacks can be neatly categorized into carbon cycle, biogeochemical, biogeophysical, cloud, ice-albedo, and water vapor feedbacks. As important as it is to understand these feedbacks individually, it is even more essential to study the interactive nature of these feedbacks. Modeling studies show that when interactions among feedback loops are included, uncertainty increases dramatically and there is a heightened potential for perturbations to be magnified (e.g., Cox, Betts, Jones, Spall, and Totterdell, 2000; Hajima, Tachiiri, Ito, and Kawamiya, 2014; Knutti and Rugenstein, 2015; Rosenfeld, Sherwood, Wood, and Donner, 2014). This produces a wide range of future scenarios. Positive feedbacks in the carbon cycle involve the enhancement of future carbon contributions to the atmosphere due to some initial increase in atmospheric CO2. This happens because as CO2 accumulates, it reduces the efficiency with which oceans and terrestrial ecosystems sequester carbon, which in return feeds back to exacerbate climate change (Friedlingstein et al., 2001). Warming can also increase the rate at which organic matter decays and carbon is released into the atmosphere, thereby causing more warming (Melillo et al., 2017). Increases in food shortages and lack of water are also of major concern when biogeophysical feedback mechanisms perpetuate drought conditions. 
The underlying mechanism here is that losses in vegetation increase the surface albedo, which suppresses rainfall, and thus enhances future vegetation loss and more suppression of rainfall—thereby initiating or prolonging a drought (Charney, Stone, and Quirk, 1975). To top it off, overgrazing depletes the soil, leading to augmented vegetation loss (Anderies, Janssen, and Walker, 2002). Climate change often also increases the risk of forest fires, as a result of higher temperatures and persistent drought conditions. The expectation is that forest fires will become more frequent and severe with climate warming and drought (Scholze, Knorr, Arnell, and Prentice, 2006), a trend for which we have already seen evidence (Allen et al., 2010). Tragically, the increased severity and risk of Southern California wildfires recently predicted by climate scientists (Jin et al., 2015) was realized in December 2017, with the largest fire in the history of California (the “Thomas fire” that burned 282,000 acres, https://www.vox.com/2017/12/27/16822180/thomas-fire-california-largest-wildfire). This catastrophic fire embodies the sorts of positive feedbacks and interacting factors that could catch humanity off-guard and produce a true apocalyptic event. Record-breaking rains produced an extraordinary flush of new vegetation, that then dried out as record heat waves and dry conditions took hold, coupled with stronger than normal winds, and ignition. Of course the record-fire released CO2 into the atmosphere, thereby contributing to future warming. Out of all types of feedbacks, water vapor and the ice-albedo feedbacks are the most clearly understood mechanisms. Losses in reflective snow and ice cover drive up surface temperatures, leading to even more melting of snow and ice cover—this is known as the ice-albedo feedback (Curry, Schramm, and Ebert, 1995). 
As snow and ice continue to melt at a more rapid pace, millions of people may be displaced by flooding risks as a consequence of sea level rise near coastal communities (Biermann and Boas, 2010; Myers, 2002; Nicholls et al., 2011). The water vapor feedback operates when warmer atmospheric conditions strengthen the saturation vapor pressure, which creates a warming effect given water vapor’s strong greenhouse gas properties (Manabe and Wetherald, 1967). Global warming tends to increase cloud formation because warmer temperatures lead to more evaporation of water into the atmosphere, and warmer temperature also allows the atmosphere to hold more water. The key question is whether this increase in clouds associated with global warming will result in a positive feedback loop (more warming) or a negative feedback loop (less warming). For decades, scientists have sought to answer this question and understand the net role clouds play in future climate projections (Schneider et al., 2017). Clouds are complex because they both have a cooling (reflecting incoming solar radiation) and warming (absorbing incoming solar radiation) effect (Lashof, DeAngelo, Saleska, and Harte, 1997). The type of cloud, altitude, and optical properties combine to determine how these countervailing effects balance out. Although still under debate, it appears that in most circumstances the cloud feedback is likely positive (Boucher et al., 2013). For example, models and observations show that increasing greenhouse gas concentrations reduces the low-level cloud fraction in the Northeast Pacific at decadal time scales. This then has a positive feedback effect and enhances climate warming since less solar radiation is reflected by the atmosphere (Clement, Burgman, and Norris, 2009). The key lesson from the long list of potentially positive feedbacks and their interactions is that runaway climate change, and runaway perturbations have to be taken as a serious possibility. 
Table 2 is just a snapshot of the type of feedbacks that have been identified (see Supplementary material for a more thorough explanation of positive feedback loops). However, this list is not exhaustive and the possibility of undiscovered positive feedbacks portends even greater existential risks. The many environmental crises humankind has previously averted (famine, ozone depletion, London fog, water pollution, etc.) were averted because of political will based on solid scientific understanding. We cannot count on complete scientific understanding when it comes to positive feedback loops and climate change.
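The runaway dynamic the Kareiva card describes can be shown with a toy numerical sketch. Everything below is invented for illustration (it is not from the card or any climate model): a constant forcing produces linear warming, but adding even a small feedback term proportional to the current anomaly makes warming compound geometrically.

```python
# Toy illustration of a positive climate feedback.
# All parameters are hypothetical, not from Kareiva & Carranza.

def warming_trajectory(forcing, feedback, years):
    """Yearly temperature anomaly under a constant forcing plus a
    feedback term proportional to the current anomaly (e.g., warming
    that releases more CO2, which causes more warming)."""
    temp = 0.0
    path = []
    for _ in range(years):
        temp += forcing + feedback * temp  # feedback compounds the anomaly
        path.append(temp)
    return path

# With no feedback the anomaly grows linearly; with a modest positive
# feedback it compounds geometrically.
no_feedback = warming_trajectory(forcing=0.02, feedback=0.0, years=100)
with_feedback = warming_trajectory(forcing=0.02, feedback=0.03, years=100)
```

Under these made-up numbers the no-feedback run reaches a 2.0-degree anomaly after a century while the feedback run exceeds 12 degrees, which is the card's point: small, poorly constrained feedback coefficients dominate long-run outcomes, and interactions among them widen the range of scenarios further.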
2/20/22
JF - CP - Taxation
Tournament: Palm Classic | Round: Triples | Opponent: Catonsville AT | Judge: James Stuckert, Truman Le, John Boals 5 States ought to tax private LEO satellite launches according to econometric analysis of negative externalities generated by their displacement of other corporations. The proceeds from this tax should be used to fund debris mitigation programs. Tax solves – internalizes externalities. Adilov 13 Nodir Adilov, Peter J. Alexander, and Brendan Michael Cunningham, * Purdue University Fort Wayne, Federal Communications Commission, * U.S. Naval Academy; Eastern Connecticut State University, “Earth Orbit Debris: An Economic Model,” 05/14/13, https://ssrn.com/abstract=2264915, EA Space debris, an externality generated by expended launch vehicles and damaged satellites, reduces the realized value of space activities by increasing the probability of damaging existing satellites or other space vehicles. Unlike terrestrial pollution, debris created in the production process interacts with firms’ final products, and is, moreover, self-propagating: collisions between debris or extant satellites creates additional debris. In the limiting case, collisional cascading could reduce the realized value of certain earth orbits to zero. Voluntary guidelines regarding debris remediation have been employed over the past 50 years, but, as we showed in our model, voluntary guidelines provide insufficient incentives. In fact, competitive firms will generally choose the least-costly mitigation technology, which in turn generates the most debris, because it carries the lowest cost for the firm. In our model, a social planner that takes into account the welfare of both producers and consumers would generate fewer launches as compared to the competitive market, and it would employ technology that reduces the rate of debris creation. Active debris removal technology is pre-emergent and costly; funding will likely require the cooperation of governments of space-faring nations. 
Since active debris removal is retrospective, nations that have created the majority of extant debris, the United States, Russia, and China, might provide funding commensurate with created debris. A tax on launches provides a straight-forward economic solution to externalities. Future research might formally investigate the effectiveness of various policy remedies to space pollution.
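The Pigouvian logic of the Adilov, Alexander, and Cunningham card can be sketched numerically. The calibration below is entirely hypothetical (it is not the paper's model): a competitive firm launches until its declining marginal benefit equals its private marginal cost, ignoring debris damage to others; a tax set equal to the marginal external cost shifts the private optimum to the one a social planner would choose.

```python
# Hypothetical numbers illustrating the Pigouvian launch-tax argument --
# a sketch of the logic in the Adilov et al. card, not their calibration.

def launches_chosen(benefit, marginal_cost, tax=0.0, slope=0.5):
    """Launch until declining marginal benefit no longer covers the
    private marginal cost plus any launch tax."""
    n = 0
    while benefit - slope * n > marginal_cost + tax:
        n += 1
    return n

EXTERNAL_COST = 3.0  # debris damage per launch imposed on others (invented)

# Untaxed firms ignore the debris externality entirely:
unregulated = launches_chosen(benefit=10.0, marginal_cost=2.0)

# A tax equal to the marginal external cost internalizes it, reproducing
# what a planner who counts debris damage as a cost would choose:
taxed = launches_chosen(benefit=10.0, marginal_cost=2.0, tax=EXTERNAL_COST)
```

With these invented parameters the untaxed market over-launches (16 launches) relative to the taxed optimum (10), mirroring the card's claim that voluntary guidelines under-deter debris creation while a launch tax aligns private incentives with the social optimum.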
2/13/22
JF - CP - US-China Alliance
Tournament: Palm Classic | Round: 5 | Opponent: Harker NA | Judge: Lena Mizrahi 3 CP Text: The People’s Republic of China should - increase and encourage private and civil space cooperation with the United States over appropriation of outer space. - de-militarize its space industry. - dismantle and remove ASAT weapons. - The United States Federal Government should repeal the Wolf Amendment. The Counterplan competes – it re-directs China’s commercial space industry to productive cooperation with the United States. The 1AC said that China’s government is reliant on private action meaning the Plan collapses all of the space sector meaning meaningful cooperation with the US becomes impossible. Cooperation de-escalates the Space Race, solves Sino-Russian axis, and spills-over to broader US-China relations Marshall and Hadfield 21 Will Marshall and Chris Hadfield 4-15-2021 "Why the U.S. and China Should Collaborate in Space" https://time.com/5954941/u-s-china-should-collaborate-in-space/ (CEO of Planet which operates 200 satellites that image the entire Earth landmass on a daily basis, and he formerly worked at NASA on lunar missions and space debris. Colonel Chris Hadfield was Commander of the International Space Station and flew both the U.S. Space Shuttle and Russian Soyuz vehicles. Prior to that he served as a fighter/test pilot with the U.S. Air Force, U.S. Navy, and Royal Canadian Air Force.)Elmer While much has been made of the tense March 18 exchange between American and Chinese diplomats in Anchorage, Alaska, one area became an unlikely candidate for cooperation: outer space. During a press conference after the meeting, Jake Sullivan, the U.S. National Security Advisor, pointed out that the Perseverance rover that recently landed on Mars “wasn’t just an American project. 
It had technology from multiple countries from Europe and other parts of the world.” China’s top diplomat, Yang Jiechi, seized the opportunity to say that, “China would welcome it if there is a will to carry out similar cooperation from the United States with us.” Planned or not, Yang’s comment gave voice to one very smart way two geopolitical rivals sharing the same planet could work together despite their growing tensions. Space exploration has long been used to foster deep cooperation, even between adversaries. During the height of the Cold War, the U.S. and U.S.S.R. jointly undertook the 1975 Apollo-Soyuz mission, which both served as a means of political rapprochement and opened the possibility of cooperation in other areas. Those links endured. After the Soviet Union collapsed, Russia was invited to partner in the construction of the International Space Station (ISS). It was a multi-layered act that went beyond simple generosity; the more work former Soviet scientists had to do designing and building the ISS, the less likely they’d be to sell their expertise to other countries. Today, Sino-American space cooperation is similarly desirable. It could improve ties as it did for the U.S. and Russia, de-escalate an emerging Sino-Russian axis in space, and serve as a bargaining chip to help sustain other areas of cooperation. While China and the U.S. seem to clash on virtually every issue, space, by its nature, is different. Orbit isn’t a high-ground that one can seize. Instead, space works like a commons, where for any one state or company to be able to operate safely, all have to act responsibly. We need peaceful cooperation to enjoy its benefits. One reason not to cooperate in space with a geopolitical rival is technology transfer. There are legitimate concerns that collaboration could lead to technology sharing that unfairly advances China. Indeed, in 2011, the U.S. 
Congress included a passage, known as the Wolf Amendment, in an appropriations bill, forbidding NASA from cooperating in any way with China for fear of technological theft or espionage. The reasoning was straightforward: The U.S. enjoys significant leadership in some space technologies, including satellites, and much of that technology is proprietary, shared with no other countries. In the area of human spaceflight, however, things are different. The U.S. has extensively shared the entire ISS program for decades with the fourteen partner nations, including Russia. If there ever were secrets there, they are secrets no more. In fact, Russia and the U.S. as partners saved the day between 2011, after the space shuttles were grounded, and 2021, when the U.S. regained the ability to transport astronauts to space. During that decade, Russia’s Soyuz spacecraft served as the only way to get crews to and from the station. At the same time, uncrewed American resupply ships similarly helped keep the ISS viable when the Russian Soyuz fleet was grounded following mishaps. China has developed and proven a very successful human spaceflight program; adding their launch and spacecraft capability to the partnership would strengthen the overall mission. In order for China and the U.S. to work together in space, some things would have to change. First, the Wolf Amendment would have to be repealed—nothing meaningful can happen until that goes. Cooperation might then begin in lower profile areas such as sharing remote sensing data and reducing orbital debris. The United States and Europe have led the way with Landsat and Copernicus satellite programs providing free images of Earth that can be used to understand changes to our environment. The Chinese have yet to create a similar data share program for their Earth imaging systems—but they should. 
The United States and China could also discuss joint efforts to reduce the belt of space junk that circles the planet and threatens everyone’s satellites. Most importantly, cooperation could extend to joint human spaceflight missions; the US could invite China to conduct a crewed visit to the ISS, or to join in the human exploration of the Moon, targeted to happen in this decade and which both nations are now working on separately; the goal would be a joint Moon base rather than a space race. For decades, space travel has provided an opportunity for humans to see our world differently. Apollo 11 astronaut Michael Collins said, “The thing that really surprised me was that the Earth projected an air of fragility.” Chinese astronauts, since Yang Liwei’s first flight 18 years ago, have surely had a similar experience gazing down at our planet. Cooperating in space can give the United States and China the opportunity to change their thinking together. Bold American leadership can be a leveraged move in reducing tensions, as it was in keeping the Cold War cold—a win for all nations and our shared, blue-green planet. US-China Relations key to prevent escalation – current US course turns status quo cold war hot. Nye 21 Joseph Nye 3-3-2021 "The factors that could lead to war between the US and China" https://www.aspistrategist.org.au/the-factors-that-could-lead-to-war-between-the-us-and-china/ (professor at Harvard University and author)Elmer When China’s foreign minister, Wang Yi, recently called for a reset of bilateral relations with the United States, a White House spokesperson replied that the US saw the relationship as one of strong competition that required a position of strength. It’s clear that President Joe Biden’s administration is not simply reversing Donald Trump’s policies. 
Some analysts, citing Thucydides’ attribution of the Peloponnesian War to Sparta’s fear of a rising Athens, believe the US–China relationship is entering a period of conflict pitting an established hegemon against an increasingly powerful challenger. I am not that pessimistic. In my view, economic and ecological interdependence reduces the probability of a real cold war, much less a hot one, because both countries have an incentive to cooperate in a number of areas. At the same time, miscalculation is always possible and some see the danger of ‘sleepwalking’ into catastrophe, as happened with World War I. History is replete with cases of misperception about changing power balances. For example, when US President Richard Nixon visited China in 1972, he wanted to balance what he saw as a growing Soviet threat to a declining America. But what Nixon interpreted as decline was really the return to normal of America’s artificially high share of global output after World War II. Nixon proclaimed multipolarity, but what followed was the end of the Soviet Union and America’s unipolar moment two decades later. Today, some Chinese analysts underestimate America’s resilience and predict Chinese dominance but this, too, could turn out to be a dangerous miscalculation. It is equally dangerous for Americans to over- or underestimate Chinese power, and the US contains groups with economic and political incentives to do both. Measured in dollars, China’s economy is about two-thirds the size of that of the US, but many economists expect China to surpass the US sometime in the 2030s, depending on what one assumes about Chinese and American growth rates. Will American leaders acknowledge this change in a way that permits a constructive relationship, or will they succumb to fear? Will Chinese leaders take more risks, or will Chinese and Americans learn to cooperate in producing global public goods under a changing distribution of power? 
Recall that Thucydides attributed the war that ripped apart the ancient Greek world to two causes: the rise of a new power and the fear that this created in the established power. The second cause is as important as the first. The US and China must avoid exaggerated fears that could create a new cold or hot war. Even if China surpasses the US to become the world’s largest economy, national income is not the only measure of geopolitical power. China ranks well behind the US in soft power and US military expenditure is nearly four times that of China. While Chinese military capabilities have been increasing in recent years, analysts who look carefully at the military balance conclude that China will not, say, be able to exclude the US from the Western Pacific. On the other hand, the US was once the world’s largest trading economy and its largest bilateral lender. Today, nearly 100 countries count China as their largest trading partner, compared to 57 for the US. China plans to lend more than US$1 trillion for infrastructure projects with its Belt and Road Initiative over the next decade, while the US has cut back aid. China will gain economic power from the sheer size of its market as well as its overseas investments and development assistance. China’s overall power relative to the US is likely to increase. Nonetheless, balances of power are hard to judge. The US will retain some long-term power advantages that contrast with areas of Chinese vulnerability. One is geography. The US is surrounded by oceans and neighbours that are likely to remain friendly. China has borders with 14 countries, and territorial disputes with India, Japan and Vietnam set limits on its hard and soft power. Energy is another area where America has an advantage. A decade ago, the US was dependent on imported energy, but the shale revolution transformed North America from energy importer to exporter. 
At the same time, China became more dependent on energy imports from the Middle East, which it must transport along sea routes that highlight its problematic relations with India and other countries. The US also has demographic advantages. It is the only major developed country that is projected to hold its global ranking (third) in terms of population. While the rate of US population growth has slowed in recent years, it will not turn negative, as in Russia, Europe, and Japan. China, meanwhile, rightly fears ‘growing old before it grows rich.’ China’s labour force peaked in 2015 and India will soon overtake it as the world’s most populous country. America also remains at the forefront in key technologies (bio, nano and information) that are central to 21st-century economic growth. China is investing heavily in research and development, and competes well in some fields. But 15 of the world’s top 20 research universities are in the US; none is in China. Those who proclaim Pax Sinica and American decline fail to take account of the full range of power resources. American hubris is always a danger but so is exaggerated fear, which can lead to overreaction. Equally dangerous is rising Chinese nationalism, which, combined with a belief in American decline, leads China to take greater risks. Both sides must beware of miscalculation. After all, more often than not, the greatest risk we face is our own capacity for error. US-China War goes Nuclear. Brands and Beckley 21 Hal Brands and Michael Beckley 12-16-2021 "Washington Is Preparing for the Wrong War With China" https://www.foreignaffairs.com/articles/china/2021-12-16/washington-preparing-wrong-war-china (Henry A. 
Kissinger Distinguished Professor of Global Affairs at the Johns Hopkins University School of Advanced International Studies, a Senior Fellow at the American Enterprise Institute and Associate Professor of Political Science at Tufts University, a Non-Resident Senior Fellow at the American Enterprise Institute)Elmer The United States is getting serious about the threat of war with China. The U.S. Department of Defense has labeled China its primary adversary, civilian leaders have directed the military to develop credible plans to defend Taiwan, and President Joe Biden has strongly implied that the United States would not allow that island democracy to be conquered. Yet Washington may be preparing for the wrong kind of war. Defense planners appear to believe that they can win a short conflict in the Taiwan Strait merely by blunting a Chinese invasion. Chinese leaders, for their part, seem to envision rapid, paralyzing strikes that break Taiwanese resistance and present the United States with a fait accompli. Both sides would prefer a splendid little war in the western Pacific, but that is not the sort of war they would get. A war over Taiwan is likely to be long rather than short, regional rather than local, and much easier to start than to end. It would expand and escalate, as both countries look for paths to victory in a conflict neither side can afford to lose. It would also present severe dilemmas for peacemaking and high risks of going nuclear. If Washington doesn’t start preparing to wage, and then end, a protracted conflict now, it could face catastrophe once the shooting starts. IMPENDING SLUGFEST A U.S.-Chinese war over Taiwan would begin with a bang. 
China’s military doctrine emphasizes coordinated operations to “paralyze the enemy in one stroke.” In the most worrying scenario, Beijing would launch a surprise missile attack, hammering not only Taiwan’s defenses but also the naval and air forces that the United States has concentrated at a few large bases in the western Pacific. Simultaneous Chinese cyberattacks and antisatellite operations would sow chaos and hinder any effective U.S. or Taiwanese response. And the People’s Liberation Army (PLA) would race through the window of opportunity, staging amphibious and airborne assaults that would overwhelm Taiwanese resistance. By the time the United States was ready to fight, the war would effectively be over. The Pentagon’s planning increasingly revolves around preventing this scenario, by hardening and dispersing the U.S. military presence in Asia, encouraging Taiwan to field asymmetric capabilities that can inflict a severe toll on Chinese attackers, and developing the ability to blunt the PLA’s offensive capabilities and sink an invasion fleet. This planning is predicated on the critical assumption that the early weeks, if not days, of fighting would determine whether a free Taiwan survives. Yet whatever happens at the outset, a conflict almost certainly wouldn’t end quickly. Most great-power wars since the Industrial Revolution have lasted longer than expected, because modern states have the resources to fight on even when they suffer heavy losses. Moreover, in hegemonic wars—clashes for dominance between the world’s strongest states—the stakes are high, and the price of defeat may seem prohibitive. During the nineteenth and twentieth centuries, wars between leading powers—the Napoleonic Wars, the Crimean War, the world wars—were protracted slugfests. A U.S.-Chinese war would likely follow this pattern. If the United States managed to beat back a Chinese assault against Taiwan, Beijing wouldn’t simply give up. 
Starting a war over Taiwan would be an existential gamble: admitting defeat would jeopardize the regime’s legitimacy and President Xi Jinping’s hold on power. It would also leave China more vulnerable to its enemies and destroy its dreams of regional primacy. Continuing a hard fight against the United States would be a nasty prospect, but quitting while China was behind would seem even worse. Washington would also be inclined to fight on if the war were not going well. Like Beijing, it would view a war over Taiwan as a fight for regional dominance. The fact that such a war would probably begin with a Pearl Harbor–style missile attack on U.S. bases would make it even harder for an outraged American populace and its leaders to accept defeat. Even if the United States failed to prevent Chinese forces from seizing Taiwan, it couldn’t easily bow out of the war. Quitting without first severely damaging Chinese air and naval power in Asia would badly weaken Washington’s reputation, as well as its ability to defend remaining allies in the region. Both sides would have the capacity to keep fighting, moreover. The United States could summon ships, planes, and submarines from other theaters and use its command of the Pacific beyond the first island chain—which runs from Japan in the north through Taiwan and the Philippines to the south—to conduct sustained attacks on Chinese forces. For its part, China could dispatch its surviving air, naval, and missile forces for a second and third assault on Taiwan and press its maritime militia of coast guard and fishing vessels into service. Both the United States and China would emerge from these initial clashes bloodied but not exhausted, increasing the likelihood of a long, ugly war. BIGGER, LONGER, MESSIER When great-power wars drag on, they get bigger, messier, and more intractable. Any conflict between the United States and China is likely to force both countries to mobilize their economies for war. 
After the initial salvos, both sides would hurry to replace munitions, ships, submarines, and aircraft lost in the early days of fighting. This race would strain both countries’ industrial bases, require the reorientation of their economies, and invite nationalist appeals—or government compulsion—to mobilize the populace to support a long fight. Long wars also escalate as the combatants look for new sources of leverage. Belligerents open new fronts and rope additional allies into the fight. They expand their range of targets and worry less about civilian casualties. Sometimes they explicitly target civilians, whether by bombing cities or torpedoing civilian ships. And they use naval blockades, sanctions, and embargoes to starve the enemy into submission. As China and the United States unloaded on each other with nearly every tool at their disposal, a local war could turn into a whole-of-society brawl that spans multiple regions. Bigger wars demand more grandiose aims. The greater the sacrifices required to win, the better the ultimate peace deal must be to justify those sacrifices. What began as a U.S. campaign to defend Taiwan could easily turn into an effort to render China incapable of new aggression by completely destroying its offensive military power. Conversely, as the United States inflicted more damage on China, Beijing’s war aims could grow from conquering Taiwan to pushing Washington out of the western Pacific altogether. All of this would make forging peace more difficult. The expansion of war aims narrows the diplomatic space for a settlement and produces severe bloodshed that fuels intense hatred and mistrust. Even if U.S. and Chinese leaders grew weary of fighting, they might still struggle to find a mutually acceptable peace. GOING NUCLEAR A war between China and the United States would differ from previous hegemonic wars in one fundamental respect: both sides have nuclear weapons. 
This would create disincentives to all-out escalation, but it could also, paradoxically, compound the dangers inherent in a long war. For starters, both sides might feel free to shoot off their conventional arsenals under the assumption that their nuclear arsenals would shield them from crippling retaliation. Scholars call this the “stability-instability paradox,” whereby blind faith in nuclear deterrence risks unleashing a massive conventional war. Chinese military writings often suggest that the PLA could wipe out U.S. bases and aircraft carriers in East Asia while China’s nuclear arsenal deterred U.S. attacks on the Chinese mainland. On the flip side, some American strategists have called for pounding Chinese mainland bases at the outset of a conflict in the belief that U.S. nuclear superiority would deter China from responding in kind. Far from preventing a major war, nuclear weapons could catalyze one. Once that war is underway, it could plausibly go nuclear in three distinct ways. Whichever side is losing might use tactical nuclear weapons—low-yield warheads that could destroy specific military targets without obliterating the other side’s homeland—to turn the tide. That was how the Pentagon planned to halt a Soviet invasion of central Europe during the Cold War, and it is what North Korea, Pakistan, and Russia have suggested they would do if they were losing a war today. If China crippled U.S. conventional forces in East Asia, the United States would have to decide whether to save Taiwan by using tactical nuclear weapons against Chinese ports, airfields, or invasion fleets. This is no fantasy: the U.S. military is already developing nuclear-tipped, submarine-launched cruise missiles that could be used for such purposes. China might also use nuclear weapons to snatch victory from the jaws of defeat. 
The PLA has embarked on an unprecedented expansion of its nuclear arsenal, and PLA officers have written that China could use nuclear weapons if a conventional war threatened the survival of its government or nuclear arsenal—which would almost surely be the case if Beijing was losing a war over Taiwan. Perhaps these unofficial claims are bluffs. Yet it is not difficult to imagine that if China faced the prospect of humiliating defeat, it might fire off a nuclear weapon (perhaps at or near the huge U.S. military base on Guam) to regain a tactical advantage or shock Washington into a cease-fire. As the conflict drags on, either side could also use the ultimate weapon to end a grinding war of attrition. During the Korean War, American leaders repeatedly contemplated dropping nuclear bombs on China to force it to accept a cease-fire. Today, both countries would have the option of using limited nuclear strikes to compel a stubborn opponent to concede. The incentives to do so could be strong, given that whichever side pulls the nuclear trigger first might gain a major advantage. A final route to nuclear war is inadvertent escalation. Each side, knowing that escalation is a risk, may try to limit the other’s nuclear options. The United States could, for instance, try to sink China’s ballistic missile submarines before they hide in the deep waters beyond the first island chain. Yet such an attack could put China in a “use it or lose it” situation with regard to its nuclear forces, especially if the United States also struck China’s land-based missiles and communication systems, which intermingle conventional and nuclear forces. In this scenario, China’s leaders might use their nuclear weapons rather than risk losing that option altogether. US-China Relations solves laundry list of existential threats. Paulson 15, H. M. "Dealing with China: An insider unmasks the new economic superpower. Hachette Book Group." Inc.: All Books (2015). 
(Former US Treasury Secretary)Elmer One crisp day in early March 2014, I found myself sitting in a sleek conference room high above Boston Harbor taking questions from a group of financial executives. These men and women worked for a range of institutions that managed well over $3 trillion of financial assets, including the personal savings and pension funds of millions of Americans. They were keen to learn as much as they could about the Chinese economy. Was it about to hit the wall? Was I worried about a real estate bubble? How fragile was the country's financial system? Was the government serious about dealing with China's environmental problems? One fellow had a more personal question for me. "Hank," he said. "You're a real patriot. Why are you helping China?" The question pulled me up short. Three years before, when I first began planning to write this book, I don't think I would have been asked anything like that at a meeting of sophisticated financiers. They would have accepted that helping China to reform its economy, open its markets, protect its environment, and improve the quality of life of its people - all things I have been working on - would bring economic and strategic benefits to the U.S. as well. But that viewpoint has been changing as China has emerged as our biggest, most formidable economic competitor since the end of World War II and has started flexing its newfound military muscle in unsettling ways. As a result, many Americans, from all walks of life, have begun to view China with growing apprehension and resentment. Some would now prefer confrontation to cooperation. I understand these sentiments. Partly they are a function of China's choices and actions, and partly they are born of frustration with the recent economic troubles of the United States. I've spent a fair number of pages explaining how China must carry out meaningful economic reforms if it expects to continue its amazing success story. 
These arguments make sense for China and its people. But why should an American care? Why should we root for China to succeed? Shouldn't we instead be hoping that this ungainly giant stumbles, if only to slow down its daunting economic and military growth? In coming years China's weight and influence in the world, already substantial, is likely to begin to rival our own. Why take the chance now of helping the Chinese deal with so many of their problems and challenges? Why aid a competitor? The answer is simple: we should do so because it is more than ever in America's own self-interest that we do. To begin with, just about every major global challenge we face - from economic and environmental issues to food and energy security to nuclear proliferation and terrorism - will be easier to solve if the world's two most important economic powers can act in complementary ways. But these challenges will be almost impossible to address if the U.S. and China work at cross-purposes. If we want to benefit from an expanding global economy, we need the most dynamic growth engines, like China's, to thrive. If we want to prevent the worst climate change outcomes and to preserve our fragile global ecosystems, we need China to solve its massive environmental problems at home and adopt better practices abroad. If we want to keep diseases from our shores, we need China and other countries to use the very best methods to prevent and halt epidemics. If we want to stem the spread of dangerous weapons to those who might harm our citizens, we need nations, including China, to work together to end illicit trafficking. If we want all these things to happen, we must be proactive, frank, and at times forceful with the Chinese while seeking ways to cooperate, to develop complementary policies, and to work to more fully integrate them into a rules-based global order. 
If we attempt to exclude, ignore, or weaken China, we limit our ability to influence choices made by its leaders and risk turning the worst-case scenarios of China skeptics into a self-fulfilling reality.
2/13/22
JF - DA - Chinese Deflection
Tournament: Palm Classic | Round: 1 | Opponent: Harker SY | Judge: Felicity Park China’s space program is key to asteroid mitigation---US efforts fail and the same entities referenced in AC Patel and AC Chow are key. Chen 21 (Chen, S., 2021. How 23 giant Chinese rockets could save world from asteroids. [online] South China Morning Post. Available at: https://www.scmp.com/news/china/science/article/3139914/how-23-giant-chinese-rockets-could-save-world-doomsday-asteroid [Accessed 19 January 2022]. Stephen Chen investigates major research projects in China, a new power house of scientific and technological innovation. He has worked for the Post since 2006. He is an alumnus of Shantou University, the Hong Kong University of Science and Technology, and the Semester at Sea programme which he attended with a full scholarship from the Seawise Foundation.)-rahulpenu How 23 giant Chinese rockets could save the world from ‘doomsday’ asteroid. China can send mammoth machines into space which travel for years then deflect problematic rocks. Same devices have been criticised recently because one plummeted back to Earth in uncontrolled re-entry. China’s space programme could one day save the world, with massive rockets travelling for years to defend the planet from huge asteroids capable of wiping out entire cities, according to a government-backed study. This saviour role is unexpected given these are the same machines seen as a threat by many, including the United States, just weeks ago; the main 20-tonne section from one such rocket fell back to Earth in May in an uncontrolled re-entry. It fell into the sea or burned up beforehand, although last year fragments from another rocket were said to have hit two villages in the West African country, Ivory Coast. Now a new government-funded study says China can launch 23 Long March 5 (CZ-5) rockets – the largest in its fleet, weighing almost 900 tonnes on take-off – to break up the rocky objects in our solar system. 
Some asteroids are as small as pebbles but others are hundreds of kilometres across. An asteroid about 500 metres (1,640 feet) wide could kill millions. Although the chance of one colliding with the Earth is currently low, there is one called Bennu which could hit in about a century. Researcher Li Mingtao and his colleagues at the National Space Science Centre in Beijing have been commissioned to find out how China can step in and try to ensure humans do not go the way of the dinosaurs. The asteroid that led to their extinction was around 10km (6 miles) wide. To change the course of a giant asteroid hurtling towards us at terrifying speeds, a lot of kinetic energy would be needed. Nuclear weapons might do the job but such a blast could break the target into several threatening chunks. In their proposal, the space centre team suggested launching 23 CZ-5 rockets from various sites across China, at the same time. The spacecraft would have to travel for almost three years to reach their target. On top of each rocket would be a deflector, a device designed to avoid breaking up the asteroid. Each rocket would “hit” the asteroid, one after another, by way of a gentle nudge. This would only change the course of a Bennu-sized asteroid slightly, but enough to make it pass safely at a distance about 1.4 times the radius of the Earth and save some cities from annihilation, according to Li’s calculations. “It is possible to defend against large asteroids with a nuclear-free technique within 10 years,” said Li and colleagues in a June paper published in Icarus, an international journal for solar system studies. The CZ-5 is the backbone of China’s space programme, a more-than-handy workhorse used in space station construction and Mars exploration. The problem is its size becomes an issue during free fall back to Earth, travelling at thousands of miles an hour. Western authorities including the US Space Force have said they carefully tracked each CZ-5 after each launch. 
In May US Defence Secretary Lloyd Austin hoped the rocket of concern at the time would “land in a place where it will not harm anyone. Hopefully in the ocean, or someplace like that.” He also said there was a need to make sure “those kinds of things” were taken into consideration when planning and conducting operations. Some Western media warned readers that the debris might hit big cities. That did not happen but led to an increased focus on China’s responsibility as a space power. In the Icarus paper Li and his colleagues said fuel not used during the rocket launch could give extra thrust during the flight towards an asteroid, and the rocket fuselage also increased the total mass of the deflector. They said existing rockets only had to undergo small modifications such as adding a few small thrusters. A similar mission proposed by researchers with Nasa and California’s Lawrence Livermore National Laboratory in 2018 would require the launch of 75 Delta IV heavy rockets, according to the two organisations and mentioned by Li. Known as HAMMER (Hypervelocity Asteroid Mitigation Mission for Emergency Response), the US plan would deliver more than 400 tonnes of deflectors, nearly twice as many as in the Chinese proposal, but with a flight time nearly a year shorter, to achieve similar results. The US mission would be more expensive than the Chinese one, Li said. The Chinese plan also needs less preparation time. While the American approach would need to discover an asteroid 25 years before its potential collision with Earth, China’s plan could cut the lead time to just a decade. Overall, the Chinese approach, involving what is called the Assembled Kinetic Impactor, could greatly improve deflection efficiency and reduce both launch costs and lead time, the paper said. A space scientist at Beijing’s Tsinghua University said competition between China and the US would accelerate the development of space technology. 
“The problem is, when the doomsday threat comes, politics may override science and lots of time may be wasted on debates to decide which country should take the lead,” said the researcher, who did not want to be named because of the sensitivity of the issue. China has been challenging US dominance in space for some time. It already has a rover on Mars, is building a space station, exploring the far side of the moon and studying lunar samples recently retrieved by robots. The US launched its first asteroid defence programme decades ago. It has the only asteroid-warning radar system on Earth and one of its spacecraft is returning home after obtaining samples from Bennu, the asteroid that could hit us in about a century. In 2025 China is expected to launch its own spacecraft to retrieve asteroid samples. China is also building a planetary defence system with what will be the most powerful radar in the world, according to researchers involved in the project. It will be made up of large radio telescopes across the country and be able to track more targets than its US counterpart.
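The deflection figures in the Chen card above can be sanity-checked with a back-of-envelope momentum-transfer sketch. Every number below is an assumption for illustration only (Bennu's mass from published estimates, total impactor mass inferred from the card's HAMMER comparison, impact speed and post-impact lead time guessed), not figures from Li's Icarus paper:

```python
# Rough kinetic-impactor deflection estimate, illustrative only.
M_BENNU = 7.3e10                   # kg, published estimate of Bennu's mass (assumption)
M_IMPACTORS = 2.0e5                # kg, ~200 t of deflectors total, inferred from the HAMMER comparison
V_REL = 1.0e4                      # m/s, assumed impact speed (~10 km/s)
BETA = 1.0                         # momentum-enhancement factor, conservatively set to 1
LEAD_TIME_S = 7 * 365.25 * 86400   # ~7 assumed years between impact and Earth encounter
R_EARTH = 6.371e6                  # m, Earth's mean radius

# Conservation of momentum: velocity change imparted to the asteroid.
dv = BETA * M_IMPACTORS * V_REL / M_BENNU
# Crude along-track displacement accumulated over the lead time.
miss = dv * LEAD_TIME_S
print(f"delta-v ~ {dv * 100:.1f} cm/s, miss distance ~ {miss / R_EARTH:.1f} Earth radii")
```

Under these assumptions the nudge works out to a few centimeters per second, which over several years accumulates to a miss distance on the order of one Earth radius - the same order as the 1.4 Earth radii figure the card reports, which is why a gentle, non-nuclear nudge can suffice given enough lead time.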
Extinction---nuclear winter and global calamity---comprehensive studies Baum 19 - executive director of the Global Catastrophic Risk Institute, Ph.D in Geography, Seth Baum, “Risk-Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, May 31, 2019), https://papers.ssrn.com/abstract=3397559. The most severe asteroid collisions and nuclear wars can cause global environmental effects. The core mechanism is the transport of particulate matter into the stratosphere, where it can spread worldwide and remain aloft for years or decades. Large asteroid collisions create large quantities of dust and large fireballs; the fire heats the dust so that some portion of it rises into the stratosphere. The largest collisions, such as the 10km Chicxulub impactor, can also eject debris from the collision site into space; upon reentry into the atmosphere, the debris heats up enough to spark global fires (Toon, Zahnle, Morrison, Turco, and Covey, 1997). The fires are a major impact in their own right and can send additional smoke into the stratosphere. For nuclear explosions, there is also a fireball and smoke, in this case from the burning of cities or other military targets. While in the stratosphere, the particulate matter blocks sunlight and destroys ozone (Toon et al., 2007). The ozone loss increases the amount of ultraviolet radiation reaching the surface, causing skin cancer and other harms (Mills, Toon, Turco, Kinnison, and Garcia, 2008). The blocked sunlight causes abrupt cooling of Earth’s surface and in turn reduced precipitation due to a weakened hydrological cycle. The cool, dry, and dark conditions reduce plant growth. Recent studies use modern climate and crop models to examine the effects for a hypothetical IndiaPakistan nuclear war scenario with 100 weapons (50 per side) each of 15KT yield. 
The studies find agriculture declines in the range of approximately 2 to 50 percent, depending on the crop and location. Another study compares the crop data to existing poverty and malnourishment and estimates that the crop declines could threaten starvation for two billion people (Helfand, 2013). However, the aforementioned studies do not account for new nuclear explosion fire simulations that find approximately five times less particulate matter reaching the stratosphere, and correspondingly weaker global environmental effects (Reisner et al., 2018). Note also that the 100 weapon scenario used in these studies is not the largest potential scenario. Larger nuclear wars and large asteroid collisions could cause greater harm. The largest asteroid collisions could even reduce sunlight below the minimum needed for vision (Toon et al., 1997). Asteroid risk analyses have proposed that the global environmental disruption from large collisions could cause one billion deaths (NRC, 2010) or the death of 25 percent of all humans (Chapman, 2004; Chapman and Morrison, 1994; Morrison, 1992), though these figures have not been rigorously justified (Baum, 2018a). The harms from asteroid collisions and nuclear wars can also include important secondary effects. The food shortages from severe global environmental disruption could lead to infectious disease outbreaks as public health conditions deteriorate (Helfand, 2013). Law and order could be lost in at least some locations as people struggle for survival (Maher and Baum, 2013). Today’s complex global political-economic system already shows fragility to shocks such as the 2007-2008 financial crisis (Centeno, Nag, Patterson, Shaver, and Windawi, 2015); an asteroid collision or nuclear war could be an extremely large shock. The systemic consequences of a nuclear war would be further worsened by the likely loss of major world cities that serve as important hubs in the global economy. 
Even a single detonation in nuclear terrorism would have ripple effects across the global political-economic system (similar to, but likely larger than, the response prompted by the terrorist attacks of 11 September 2001). It is possible for asteroid collisions to cause nuclear war. An asteroid explosion could be misinterpreted as a nuclear attack, prompting a nuclear attack that is believed to be retaliation. For example, the 2013 Chelyabinsk event occurred near an important Russian military installation, prompting concerns about the event’s interpretation (Harris et al., 2015). The ultimate severity of an asteroid collision or violent nuclear conflict would depend on how human society reacts. Would the reaction be disciplined and constructive: bury the dead, heal the sick, feed the hungry, and rebuild all that has fallen? Or would the reaction be disorderly and destructive: leave the rubble in place, fight for scarce resources, and descend into minimalist tribalism or worse? Prior studies have identified some key issues, including the viability of trade (Cantor, Henry, and Rayner, 1989) and the self-sufficiency of local communities (Maher and Baum, 2013). However, the issue has received little research attention and remains poorly understood. This leaves considerable uncertainty in the total human harm from an asteroid collision or nuclear weapons use. Previously published point estimates of the human consequences of asteroid collisions and nuclear wars (Helfand, 2013) do not account for this uncertainty and are likely to be inaccurate. Of particular importance are the consequences for future generations, which could vastly outnumber the present generation. If an asteroid collision or nuclear war would cause human extinction, then there would be no future generations. Alternatively, if survivors fail to recover a large population and advanced technological civilization, then future generations would be permanently diminished. 
The largest long-term factor is whether future generations would colonize space and benefit from its astronomically large amount of resources (Tonn, 1999). However, it is not presently known which asteroid collisions or nuclear wars (if any) would cause the permanent collapse of human civilization and thus the loss of the large future benefits (Baum et al., 2019). Given the enormous stakes, prudent risk management would aim for very low probabilities of permanent collapse (Tonn, 2009). It should be noted that the severity of violent nuclear conflict could depend on more than just the effects of nuclear explosions, because the overall conflict scenario could include non-nuclear violence. Indeed, it is possible for the nuclear explosions to constitute a relatively small portion of the total severity, as was the case in World War II. 4.4 Risk of Violent Non-Nuclear Conflict Finally, it is necessary to discuss the risk of violent non-nuclear conflict. Only a small portion of violent non-nuclear conflicts are applicable, specifically the portion affected by nuclear weapons. More precisely, this section discusses non-nuclear conflicts involving one or more countries that possess nuclear weapons at some point during the lifetime of a nuclear deflection program. Nuclear deterrence theory predicts that nuclear-armed adversaries will not initiate major wars against each other because both sides could be destroyed in a nuclear war. However, the theory does permit limited, small-scale violent conflicts between nuclear-armed countries. These conflicts likely would not involve nuclear weapons. Indeed, nuclear deterrence may even make small violent conflicts more likely, because the countries know that neither side wants to escalate the conflict into major war. This idea is known as the stability-instability paradox: nuclear deterrence brings stability with respect to major wars but instability with respect to minor conflicts. 
Empirical support for the stability-instability paradox has been found by some research (Rauchhaus, 2009), while other research has found no significant effect of the possession of nuclear weapons on the probability of conflicts of any scale (Bell and Miller, 2015; Gartzke and Jo, 2009). If countries fully disarm their nuclear arsenals, such that they would never have nuclear weapons again, then there would be no nuclear deterrence to prevent the onset of major wars. A simple risk analysis could assume that the risk of major wars would be comparable to the risk prior to the development of nuclear weapons. The two twentieth century World Wars combined for around 100 million deaths in 50 years, suggesting an annualized risk of two million deaths. However, two World Wars do not make for a robust dataset. Indeed, the robustness of these two data points is called into question by historical analysis finding that both world wars might not have occurred in the reasonably plausible event that the 1914 assassination of Archduke Ferdinand had failed (Lebow, 2014). Similarly, another historical analysis finds that the U.S. and Soviet Union would probably not have waged major war against each other even in the absence of nuclear deterrence (Mueller, 1988). Furthermore, these past events are not necessarily applicable to the future conditions of a post-nuclear-disarmament world. To the best of the present author’s knowledge, no studies have analyzed the risk of major wars in a post-nuclear-disarmament world.
2/13/22
JF - DA - Hacking
Tournament: Palm Classic | Round: Triples | Opponent: Catonsville AT | Judge: James Stuckert, Truman Le, John Boals 6 Hacking towards Satellites is coming now – incentives and vulnerabilities align. Culpan 21 Tim Culpan 11-2-2021 "The Next Big Hack Could Come From the Stars" https://archive.is/XElln#selection-3035.0-3040.0 (Bloomberg Opinion Columnist)Elmer “As space becomes more important, there becomes unfortunately even greater incentives for malicious actors to disrupt, deny or alter our space-based assets,” Bob Kolasky, head of the Department of Homeland Security’s National Risk Management Center, told the same conference organized by the National Institute of Standards and Technology. “With space, whatever you put in orbit is what you must live with. Systems must be designed so that they can address threats and hazards throughout their lifespan.” What makes satellites and their associated land-based infrastructure more vulnerable is that the data they transmit can be easily accessed by anyone on Earth with $300 worth of TV reception equipment, allowing you to eavesdrop on unencrypted financial data or download information from Russian and American weather satellites in real time. A nefarious actor with its own satellite could even cause interference or block the signal from these orbiting stations. But among the scariest of scenarios would be for an adversary to break into the control systems of a satellite, redirect its movement or even crash it into another satellite or the planet. That may have already happened. According to one account, a breach at the Goddard Space Flight Center in Washington, D.C., in 1998 led to a U.S.-German satellite called ROSAT being overtaken and turned toward the sun, damaging the ultraviolet filter on its image sensors. 
This allegation has been denied, yet whether real or apocryphal the incident (the filter was indeed destroyed by the sun) shows the challenges of repairing hardware 360 miles above the earth’s surface or even investigating the cause of the malfunction. Megaconstellations solve satellite hacking – multiple warrants. Commercial satellites are key due to production capacity. Hallex and Cottom 20 Hallex, Matthew, and Travis Cottom. "Proliferated commercial satellite constellations: Implications for national security." Joint Forces Quarterly 97.July (2020): 20-29. (Matthew A. Hallex is a Research Staff Member at the Institute for Defense Analyses. Travis S. Cottom is a Research Associate at the Institute for Defense Analyses.)Re-cut by Elmer While potentially threatening the sustainability of safe orbital operations, new proliferated constellations also offer opportunities for the United States to increase the resilience of its national security space architectures. Increasing the resilience of U.S. national security space architectures has strategic implications beyond the space domain. Adversaries such as China and Russia see U.S. dependence on space as a key vulnerability to exploit during a conflict. Resilient, proliferated satellite constellations support deterrence by denying adversaries the space superiority they believe is necessary to initiate and win a war against the United States. Should deterrence fail, these constellations could provide assured space support to U.S. forces in the face of adversary counterspace threats while imposing costs on competitors by rendering their investments in counterspace systems irrelevant. Proliferated constellations can support these goals in four main ways. First, the extreme degree of disaggregation inherent in government and commercial proliferated constellations could make them more resilient to attacks by many adversary counterspace systems. 
A constellation composed of hundreds or thousands of satellites could withstand losing a relatively large number of them before losing significant capability. Conducting such an attack with kinetic antisatellite weapons—like those China and Russia are developing—would require hundreds of costly weapons to destroy satellites that would be relatively inexpensive to replace. Second, proliferated constellations would be more resilient to adversary electronic warfare. Satellites in LEO can emit signals 1,280 times more powerful than signals from satellites in GEO. They also move faster across the sky than satellites in more distant orbits, which, combined with the planned use of small spot beams for communications by proliferated constellations, would shrink the geographic area in which an adversary ground-based jammer could effectively operate, making jammers less effective and easier to geolocate and eliminate. Third, even if the United States chooses not to deploy national security proliferated constellations during peacetime, industrial capacity for mass-producing proliferated constellation satellites could be repurposed during a conflict. Just as Ford production lines shifted from automobiles to tanks and aircraft during World War II, one can easily imagine commercial satellite factories building military reconnaissance or communications satellites during a conflict. Fourth, deploying and maintaining constellations of hundreds or thousands of satellites will drive the development of low-cost launches to a much higher rate than is available today. Inexpensive, high-cadence space launch could provide a commercial solution to the operationally responsive launch needs of the U.S. Government. In a future where space launches occur weekly or more often, the launch capacity needed to augment national security space systems during a crisis or to replace systems lost during a conflict in space would be readily available. Hacking on Satellites goes Nuclear. 
Miller and Fontaine 17 James Miller and Richard Fontaine 11-26-2017 "Cyber and Space Weapons Are Making Nuclear Deterrence Trickier" https://www.defenseone.com/ideas/2017/11/cyber-and-space-weapons-are-making-nuclear-deterrence-trickier/142767/ (James N. Miller, Jr. is a member of the Board of Advisors of the Center for a New American Security. He served as U.S. Under Secretary of Defense for Policy from 2012 to 2014.)Elmer Cyber weapons are not, of course, the sole preserve of Russia. Washington has acknowledged its own development of them, and senior U.S. officials have highlighted their use against ISIS. Their possession by both Russia and the United States complicates traditional notions of strategic stability. Using non-kinetic, non-lethal cyber tools is likely to be very attractive in a crisis, and certainly in a conflict. Yet with both sides possessing the means to disrupt or destroy the other’s military systems and critical infrastructure – both war-supporting infrastructure as well as purely civilian infrastructure - a small “cyber-spark” could prompt rapid escalation. Such an attack could inadvertently “detonate” a cyber weapon that had been intended to lay dormant in the other side’s systems. Or a spark produced by sub-national actors – “patriotic hackers” inside or outside the government – could generate unintended cascading effects. The spark could even come via a false flag attack, with a third-party trying to pit the United States and Russia against one another. A second scenario could appear if armed conflict looks likely. At the outset, there would exist strong incentives to use offensive cyber and counter-space capabilities early, in order to negate the other side’s military. The U.S. and Russian militaries depend (though not equally) on information technology and space assets to collect and disseminate intelligence, as well as for command, control, and communications. 
Hence the incentive to use non-kinetic cyber or space attacks to degrade the other side’s military, with few if any direct casualties. By moving first, the cyber- or space-attacker could gain military and coercive advantage, while putting the onus on the attacked side to dare escalate with “kinetic” lethal attacks. Would the United States or Russia respond with, say, missile strikes or a bombing campaign in response to some fried computers or dead robots in outer space? Given the doubt that they would, large-scale cyber and space attacks – before a kinetic conflict even starts – are likely to be seen as a low-risk, high-payoff move for both sides. A third scenario plays out if one side believes that its critical infrastructure and satellites are far less vulnerable than the other side. In that case, a severe crisis or conflict might prompt the country to threaten (and perhaps provide a limited demonstration of) cyber attacks on civilian critical infrastructure, or non-kinetic attacks on space assets. Such a move would require the attacked side to respond not in kind but by escalating. So far, the three scenarios we have described could well undermine stability between the United States and Russia, but need not implicate nuclear stability. Yet consider this: U.S. and Russian nuclear forces rely on information technology and space assets for warning and communications. Attack the right satellites, or attack the right computers, and one side may disrupt the other’s ability to use nuclear weapons – or at least place doubt in the minds of its commanders. As a result, a major cyber and space attack could put nuclear “use-or-lose” in play early in a crisis. While we are generally accustomed to thinking about nuclear use as the highest rung on the escalatory ladder, such pressures – generated via non-nuclear attacks – could bring the horrors of a nuclear exchange closer rather than substituting for them.
2/13/22
JF - DA - Space Deterrence
Tournament: Lex | Round: 2 | Opponent: Bridgeland PT | Judge: Brett Cryan Space Commercialization is key to Space Deterrence – Commercial Flexibility is key to deterrence by denial. Klein 19, John J. Understanding space strategy: the art of war in space. Routledge, 2019. (a Senior Fellow and Strategist at Falcon Research, Inc. and Adjunct Professor at George Washington University’s Space Policy Institute)Elmer Recent U.S. space policy initiatives underscore the far-reaching benefits of commercial space activities. The White House revived the National Space Council to foster closer coordination, cooperation, and exchange of technology and information among the civil, national security, and commercial space sectors.1 National Space Policy Directive 2 seeks to promote economic growth by streamlining U.S. regulations on the commercial use of space.2 While the defense community generally appreciates the value of services and capabilities derived from the commercial space sector—including space launch, Earth observation, and satellite communications—it often overlooks one area of strategic importance: deterrence. To address the current shortcoming in understanding, this paper first describes the concept of deterrence, along with how space mission assurance and resilience fit into the framework. After explaining how commercial space capabilities may influence the decision calculus of potential adversaries, this study presents actionable recommendations for the U.S. Department of Defense (DoD) to address current problem areas. Ultimately, DoD—including the soon-to-be reestablished U.S. Space Command and possibly a new U.S. Space Force—should incorporate the benefits and capabilities of the commercial space sector into flexible deterrent options and applicable campaign and contingency plans. 
Deterrence, Mission Assurance, and Resilience Thomas Schelling, the dean of modern deterrence theory, held that deterrence refers to persuading a potential enemy that it is in its interest to avoid certain courses of activity.3 One component of deterrence theory lies in an understanding that the threat of credible and potentially overwhelming force or other retaliatory action against any would-be adversary is sufficient to deter most potential aggressors from conducting hostile actions. This idea is also referred to as deterrence by punishment.4 The second salient component of deterrence theory is denial. According to Glenn Snyder’s definition, deterrence by denial is “the capability to deny the other party any gains from the move which is to be deterred.”5 The 2018 U.S. National Defense Strategy (NDS) highlights deterrence, and specifically deterrence by denial, as a vital component of national security. The NDS notes that the primary objectives of the United States include deterring adversaries from pursuing aggression and preventing hostile actions against vital U.S. interests.6 The strategy also observes that deterring conflict necessitates preparing for war during peacetime.7 For the space domain, the peacetime preparedness needed for deterrence by denial occurs in the context of space mission assurance and resilience. 
Mission assurance entails “a process to protect or ensure the continued function and resilience of capabilities and assets—including personnel, equipment, facilities, networks, information and information systems, infrastructure, and supply chains—critical to the performance of DoD mission essential functions in any operating environment or condition.”8 Similar to mission assurance but with a different focus, resilience is an architecture’s ability to support mission success with higher probability; shorter periods of reduced capability; and across a wider range of scenarios, conditions, and threats, despite hostile action or adverse conditions.9 Resilience may leverage cross-domain solutions, along with commercial and international capabilities.10 Space mission assurance and resilience can prevent a potential adversary from achieving its objectives or realizing any benefit from its aggressive action. These facets of U.S. preparedness help convey the futility of conducting a hostile act. Consequently, they enhance deterrence by denial. Commercial Space Enables Deterrence The commercial space sector directly promotes mission assurance and resilience efforts. This is in part due to the distributed and diversified nature of commercial space launch and satellites services. Distribution refers to the use of a number of nodes, working together, to perform the same mission or functions as a single node; diversification describes contributing to the same mission in multiple ways, using different platforms, orbits, or systems and capabilities.11 The 2017 U.S. National Security Strategy, in noting the benefits derived from the commercial space industry, states that DoD partners with the commercial sector’s capabilities to improve the U.S. space architecture’s resilience.12 Although U.S. 
policy and joint doctrine frequently acknowledge the role of the commercial space sector in space mission assurance and resilience, there is little recognition that day-to-day contributions from the commercial industry assist in deterring would-be adversaries. The commercial space sector contributes to deterrence by denial through multi-domain solutions that are distributed and diversified. These can deter potential adversaries from pursuing offensive actions against space-related systems. Commercial launch providers enhance deterrence by providing options for getting payloads into orbit. These include diverse space launch capabilities such as small and responsive launch vehicles, along with larger, reusable launch vehicles; launch rideshares for secondary payloads; and government payloads on commercial satellites. Various on-orbit systems also promote deterrence. For example, if an aggressor damages a commercial remote sensing satellite during hostilities, similar commercial satellites in a different orbital regime, or those of the same constellation, may provide the needed imagery. If satellite communications are jammed or degraded, commercial service providers can reroute satellite communications through their own networks, or potentially through the networks of another company using a different portion of the frequency spectrum. Regarding deterrence by punishment efforts, the commercial space sector can play a role, albeit an indirect one, through improved space situational awareness (SSA) and space forensics (including digital forensics and multispectral imagery). The commercial industry may support the attribution process following a hostile or illegal act in space through its increasingly proliferating network of SSA ground telescopes and other terrestrial tracking systems. The DoD may also leverage the commercial space sector’s cyber expertise to support digital forensic efforts to help determine the source of an attack. 
By supporting a credible and transparent attribution process, commercial partners may cause a would-be adversary to act differently if it perceives that its aggressive, illegal, or otherwise nefarious actions will be disclosed. Doing so can help bolster the perceived ability to conduct a legitimate response following a hostile attack, which may improve deterrence by punishment efforts. Commercial space capabilities may also facilitate the application of force to punish a potential aggressor. In addition to traditional military space systems, commercial satellite imagery and communication capabilities may be used in cueing and targeting for punitive strikes against an aggressor. Although the commercial space sector is not expected to be involved directly in the use of retaliatory force following a hostile act, commercial partners may help in providing the information used to identify those responsible and to facilitate any consequent targeting efforts. Space Deterrence Breakdowns cause War and Extinction. Parker 17 Clifton Parker 1-24-2017 “Deterrence in space key to U.S. security” https://cisac.fsi.stanford.edu/news/deterrence-space-key-us-security (Policy Analyst at the Stanford Center for International Security and Cooperation)Elmer Space is more important than ever for the security of the United States, but it’s almost like the Wild West in terms of behavior, a top general said today. Air Force Gen. John Hyten, commander of the U.S. Strategic Command, spoke Jan. 24 at Stanford’s Center for International Security and Cooperation. His talk was titled, “U.S. Strategic Command Perspectives on Deterrence and Assurance.” Hyten said, “Space is fundamental to every single military operation that occurs on the planet today.” He added that “there is no such thing as a war in space,” because it would affect all realms of human existence, due to the satellite systems. 
Hyten advocates “strategic deterrence” and “norms of behavior” across space as well as land, water and cyberspace. Otherwise, rivals like China and Russia will only threaten U.S. interests in space and wreak havoc for humanity below, he said. Most of contemporary life depends on systems connected to space. Hyten also addressed other topics, including recent proposals by some to upgrade the country’s missile defense systems. “You just don’t snap your fingers and build a state-of-the-art anything overnight,” Hyten said, adding that he has not yet spoken to Trump administration officials about the issue. “We need a powerful military,” but a severe budget crunch makes “reasonable solutions” more likely than expensive and unrealistic ones. On the upgrade front, Hyten said he favors a long-range strike missile system to replace existing cruise missiles; a better air-to-air missile for the Air Force; and an improved missile defense ground base interceptor. ‘Critically dependent’ From satellites to global-positioning systems (GPS), space has transformed human life – and the military – in the 21st century, Hyten said. In terms of defining "space," the U.S. designates people who travel above an altitude of 50 miles as astronauts. As the commander of the U.S. Strategic Command, Hyten oversees the control of U.S. strategic forces, providing options for the president and secretary of defense. In particular, this command is charged with space operations (such as military satellites), information operations (such as information warfare), missile defense, global command and control, intelligence, surveillance, and reconnaissance, global strike and strategic deterrence (the U.S. nuclear arsenal), and combating weapons of mass destruction. Hyten explained that every drone, fighter jet, bomber, ship and soldier is critically dependent on space to conduct their own operations. All cell phones use space, and the GPS command systems overall are managed at Strategic Command, he said. 
“No soldier has to worry about what’s over the next hill,” he said, describing GPS capabilities, which have fundamentally transformed humanity’s way of life. Space needs to be available for exploration, he said. “I watch what goes on in space, and I worry about us destroying that environment for future generations.” He said that too many drifting objects and debris exist – about 22,000 right now. A recent Chinese satellite interception created a couple thousand more debris objects that now circle about the Earth at various altitudes and pose the risk of striking satellites. “We track every object in space” now, Hyten said, urging “international norms of behavior in space.” He added, “We have to deter bad behavior on space. We have to deter war in space. It’s bad for everybody. We could trash that forever.” But now rivals like China and Russia are building weapons to deploy in the lower levels of space. “How do we prevent this? It’s bigger than a space problem,” he said. Deterring conflict in the cyber, nuclear and space realms is the strategic deterrence goal of the 21st century, Hyten said. “The best way to prevent war is to be prepared for war,” he said. Hyten believes the U.S. needs a fundamentally different debate about deterrence. And it all starts with nuclear weapons. “In my deepest heart, I wish I didn’t have to worry about nuclear weapons,” he said. Hyten described his job as “pretty sobering, it’s not easy.” But he also noted the mass violence of the world prior to 1945 when the first atomic bomb was used. Roughly 80 million people died from 1939 to 1945 during World War II. Consider that in the 10-plus years of the Vietnam War, 58,000 Americans were killed. That’s equivalent to two days of deaths in WWII, he said. In a world without nuclear weapons, a rise in conventional warfare would produce great numbers of mass casualties, Hyten said. 
About war, he said, “Once you see it up close, no human will ever want to experience it.” Though America has “crazy enemies” right now, in many ways the world is more safe than during WWII, Hyten said. The irony is that nuclear weapons deterrence has kept us from the type of mass killings known in events like WWII. But the U.S. must know how to use its nuclear deterrence effectively. Looking ahead, Hyten said the U.S. needs to think about space as a potential war environment. An attack in space might not mean a response in space, but on the Earth. Hyten describes space as the domain that people look up at and still dream about. “I love to look at the stars,” he said, but he wants to make sure he’s not looking up at junk orbiting in the atmosphere.
1/18/22
JF - DA - Space Innovation
Tournament: Harvard | Round: 5 | Opponent: Hunter AH | Judge: Henry Eberhart 4 Space Commercialization drives Tech Innovation in the Status Quo – it provides a unique impetus. Hampson 17 Joshua Hampson 1-25-2017 “The Future of Space Commercialization” https://republicans-science.house.gov/sites/republicans.science.house.gov/files/documents/TheFutureofSpaceCommercializationFinal.pdf (Security Studies Fellow at the Niskanen Center)Elmer The size of the space economy is far larger than many may think. In 2015 alone, the global market amounted to $323 billion. Commercial infrastructure and systems accounted for 76 percent of that total, with satellite television the largest subsection at $95 billion. The global space launch market’s share of that total came in at $6 billion dollars. It can be hard to disaggregate how space benefits particular national economies, but in 2009 (the last available report), the Federal Aviation Administration (FAA) estimated that commercial space transportation and enabled industries generated $208.3 billion in economic activity in the United States alone. Space is not just about satellite television and global transportation; while not commercial, GPS satellites also underpin personal navigation, such as smartphone GPS use, and timing data used for Internet coordination.14 Without that data, there could be problems for a range of Internet and cloud-based services.15 There is also room for growth. The FAA has noted that while the commercial launch sector has not grown dramatically in the last decade, there are indications that there is latent demand. This demand may catalyze an increase in launches and growth of the wider space economy in the next decade. The Satellite Industry Association’s 2015 report highlighted that their section of the space economy outgrew both the American and global economies. 
The FAA anticipates that growth to continue, with expectations that small payload launch will be a particular industry driver.18 In the future, emerging space industries may contribute even more to the American economy. Space tourism and resource recovery—e.g., mining on planets, moons, and asteroids—in particular may become large parts of that industry. Of course, their viability rests on a range of factors, including costs, future regulation, international problems, and assumptions about technological development. However, there is increasing optimism in these areas of economic production. But the space economy is not just about what happens in orbit, or how that alters life on the ground. The growth of this economy can also contribute to new innovations across all walks of life. Technological Innovation Innovation is generally hard to predict; some new technologies seem to come out of nowhere and others only take off when paired with a new application. It is difficult to predict the future, but it is reasonable to expect that a growing space economy would open opportunities for technological and organizational innovation. In terms of technology, the difficult environment of outer space helps incentivize progress along the margins. Because each object launched into orbit costs a significant amount of money—at the moment between $27,000 and $43,000 per pound, though that will likely drop in the future—each reduction in payload size saves money or means more can be launched. At the same time, the ability to fit more capability into a smaller satellite opens outer space to actors that previously were priced out of the market. This is one of the reasons why small, affordable satellites are increasingly pursued by companies or organizations that cannot afford to launch larger traditional satellites. 
These small satellites also provide non-traditional launchers, such as engineering students or prototypers, the opportunity to learn about satellite production and test new technologies before working on a full-sized satellite. That expansion of developers, experimenters, and testers cannot but help increase innovation opportunities. Technological developments from outer space have been applied to terrestrial life since the earliest days of space exploration. The National Aeronautics and Space Administration (NASA) maintains a website that lists technologies that have spun off from such research projects. Lightweight nanotubes, useful in protecting astronauts during space exploration, are now being tested for applications in emergency response gear and electrical insulation. The need for certainty about the resiliency of materials used in space led to the development of an analytics tool useful across a range of industries. Temper foam, the material used in memory-foam pillows, was developed for NASA for seat covers. As more companies pursue their own space goals, more innovations will likely come from the commercial sector. Outer space is not just a catalyst for technological development. Satellite constellations and their unique line-of-sight vantage point can provide new perspectives to old industries. Deploying satellites into low-Earth orbit, as Facebook wants to do, can connect large, previously-unreached swathes of humanity to the Internet. Remote sensing technology could change how whole industries operate, such as crop monitoring, herd management, crisis response, and land evaluation, among others. While satellites cannot provide all essential information for some of these industries, they can fill in some useful gaps and work as part of a wider system of tools. Space infrastructure, in helping to change how people connect and perceive Earth, could help spark innovations on the ground as well. 
These innovations, changes to global networks, and new opportunities could lead to wider economic growth. Strong Innovation solves Extinction. Matthews 18 Dylan Matthews 10-26-2018 “How to help people millions of years from now” https://www.vox.com/future-perfect/2018/10/26/18023366/far-future-effective-altruism-existential-risk-doing-good (Co-founder of Vox, citing Nick Beckstead @ Rutgers University)Re-cut by Elmer If you care about improving human lives, you should overwhelmingly care about those quadrillions of lives rather than the comparatively small number of people alive today. The 7.6 billion people now living, after all, amount to less than 0.003 percent of the population that will live in the future. It’s reasonable to suggest that those quadrillions of future people have, accordingly, hundreds of thousands of times more moral weight than those of us living here today do. That’s the basic argument behind Nick Beckstead’s 2013 Rutgers philosophy dissertation, “On the overwhelming importance of shaping the far future.” It’s a glorious mindfuck of a thesis, not least because Beckstead shows very convincingly that this is a conclusion any plausible moral view would reach. It’s not just something that weird utilitarians have to deal with. And Beckstead, to his considerable credit, walks the walk on this. He works at the Open Philanthropy Project on grants relating to the far future and runs a charitable fund for donors who want to prioritize the far future. And arguments from him and others have turned “long-termism” into a very vibrant, important strand of the effective altruism community. But what does prioritizing the far future even mean? The most literal thing it could mean is preventing human extinction, to ensure that the species persists as long as possible. 
For the long-term-focused effective altruists I know, that typically means identifying concrete threats to humanity’s continued existence — like unfriendly artificial intelligence, or a pandemic, or global warming/out of control geoengineering — and engaging in activities to prevent that specific eventuality. But in a set of slides he made in 2013, Beckstead makes a compelling case that while that’s certainly part of what caring about the far future entails, approaches that address specific threats to humanity (which he calls “targeted” approaches to the far future) have to complement “broad” approaches, where instead of trying to predict what’s going to kill us all, you just generally try to keep civilization running as best it can, so that it is, as a whole, well-equipped to deal with potential extinction events in the future, not just in 2030 or 2040 but in 3500 or 95000 or even 37 million. In other words, caring about the far future doesn’t mean just paying attention to low-probability risks of total annihilation; it also means acting on pressing needs now. For example: We’re going to be better prepared to prevent extinction from AI or a supervirus or global warming if society as a whole makes a lot of scientific progress. And a significant bottleneck there is that the vast majority of humanity doesn’t get high-enough-quality education to engage in scientific research, if they want to, which reduces the odds that we have enough trained scientists to come up with the breakthroughs we need as a civilization to survive and thrive. So maybe one of the best things we can do for the far future is to improve school systems — here and now — to harness the group economist Raj Chetty calls “lost Einsteins” (potential innovators who are thwarted by poverty and inequality in rich countries) and, more importantly, the hundreds of millions of kids in developing countries dealing with even worse education systems than those in depressed communities in the rich world. 
What if living ethically for the far future means living ethically now? Beckstead mentions some other broad, or very broad, ideas (these are all his descriptions): Help make computers faster so that people everywhere can work more efficiently Change intellectual property law so that technological innovation can happen more quickly Advocate for open borders so that people from poorly governed countries can move to better-governed countries and be more productive Meta-research: improve incentives and norms in academic work to better advance human knowledge Improve education Advocate for political party X to make future people have values more like political party X ”If you look at these areas (economic growth and technological progress, access to information, individual capability, social coordination, motives) a lot of everyday good works contribute,” Beckstead writes. “An implication of this is that a lot of everyday good works are good from a broad perspective, even though hardly anyone thinks explicitly in terms of far future standards.” Look at those examples again: It’s just a list of what normal altruistically motivated people, not effective altruism folks, generally do. Charities in the US love talking about the lost opportunities for innovation that poverty creates. Lots of smart people who want to make a difference become scientists, or try to work as teachers or on improving education policy, and lord knows there are plenty of people who become political party operatives out of a conviction that the moral consequences of the party’s platform are good. All of which is to say: Maybe effective altruists aren’t that special, or at least maybe we don’t have access to that many specific and weird conclusions about how best to help the world. If the far future is what matters, and generally trying to make the world work better is among the best ways to help the far future, then effective altruism just becomes plain ol’ do-goodery.
2/20/22
JF - DA - Space Innovation
Tournament: Harvard | Round: 5 | Opponent: Hunter AH | Judge: Henry Eberhart 4 Space Commercialization drives Tech Innovation in the Status Quo – it provides a unique impetus. Hampson 17 Joshua Hampson 1-25-2017 “The Future of Space Commercialization” https://republicans-science.house.gov/sites/republicans.science.house.gov/files/documents/TheFutureofSpaceCommercializationFinal.pdf (Security Studies Fellow at the Niskanen Center)Elmer The size of the space economy is far larger than many may think. In 2015 alone, the global market amounted to $323 billion. Commercial infrastructure and systems accounted for 76 percent of that 9 total, with satellite television the largest subsection at $95 billion. The global space launch market’s 10 11 share of that total came in at $6 billion dollars. It can be hard to disaggregate how space benefits 12 particular national economies, but in 2009 (the last available report), the Federal Aviation Administration (FAA) estimated that commercial space transportation and enabled industries generated $208.3 billion in economic activity in the United States alone. Space is not just about 13 satellite television and global transportation; while not commercial, GPS satellites also underpin personal navigation, such as smartphone GPS use, and timing data used for Internet coordination.14 Without that data, there could be problems for a range of Internet and cloud-based services.15 There is also room for growth. The FAA has noted that while the commercial launch sector has not grown dramatically in the last decade, there are indications that there is latent demand. This 16 demand may catalyze an increase in launches and growth of the wider space economy in the next decade. The Satellite Industry Association’s 2015 report highlighted that their section of the space economy outgrew both the American and global economies. 
The FAA anticipates that growth to 17 continue, with expectations that small payload launch will be a particular industry driver.18 In the future, emerging space industries may contribute even more the American economy. Space tourism and resource recovery—e.g., mining on planets, moons , and asteroids—in particular may become large parts of that industry. Of course, their viability rests on a range of factors, including costs, future regulation, international problems, and assumptions about technological development. However, there is increasing optimism in these areas of economic production. But the space economy is not just about what happens in orbit, or how that alters life on the ground. The growth of this economy can also contribute to new innovations across all walks of life. Technological Innovation Innovation is generally hard to predict; some new technologies seem to come out of nowhere and others only take off when paired with a new application. It is difficult to predict the future, but it is reasonable to expect that a growing space economy would open opportunities for technological and organizational innovation. In terms of technology, the difficult environment of outer space helps incentivize progress along the margins. Because each object launched into orbit costs a significant amount of money—at the moment between $27,000 and $43,000 per pound, though that will likely drop in the future —each 19 reduction in payload size saves money or means more can be launched. At the same time, the ability to fit more capability into a smaller satellite opens outer space to actors that previously were priced out of the market. This is one of the reasons why small, affordable satellites are increasingly pursued by companies or organizations that cannot afford to launch larger traditional satellites. 
These small satellites also provide non-traditional launchers, such as engineering students or prototypers, the opportunity to learn about satellite production and test new technologies before working on a full-sized satellite. That expansion of developers, experimenters, and testers cannot help but increase innovation opportunities. Technological developments from outer space have been applied to terrestrial life since the earliest days of space exploration. The National Aeronautics and Space Administration (NASA) maintains a website that lists technologies that have spun off from such research projects. Lightweight nanotubes, useful in protecting astronauts during space exploration, are now being tested for applications in emergency response gear and electrical insulation. The need for certainty about the resiliency of materials used in space led to the development of an analytics tool useful across a range of industries. Temper foam, the material used in memory-foam pillows, was developed for NASA for seat covers. As more companies pursue their own space goals, more innovations will likely come from the commercial sector. Outer space is not just a catalyst for technological development. Satellite constellations and their unique line-of-sight vantage point can provide new perspectives to old industries. Deploying satellites into low-Earth orbit, as Facebook wants to do, can connect large, previously-unreached swathes of humanity to the Internet. Remote sensing technology could change how whole industries operate, such as crop monitoring, herd management, crisis response, and land evaluation, among others. While satellites cannot provide all essential information for some of these industries, they can fill in some useful gaps and work as part of a wider system of tools. Space infrastructure, in helping to change how people connect and perceive Earth, could help spark innovations on the ground as well. 
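The Hampson card's launch-cost figures imply a simple back-of-the-envelope calculation: at the quoted $27,000–$43,000 per pound to orbit, every pound shaved off a payload saves that much per launch. A minimal sketch (the per-pound range is the card's; the 100 lb mass reduction is a purely hypothetical example):

```python
# Savings from reducing payload mass, at the launch costs quoted in the card.
# The $/lb range comes from the card; the 100 lb reduction is hypothetical.
LOW_COST_PER_LB = 27_000   # USD per pound (card's low estimate)
HIGH_COST_PER_LB = 43_000  # USD per pound (card's high estimate)

def launch_savings(pounds_saved: float) -> tuple[float, float]:
    """Return the (low, high) launch-cost savings for a given mass reduction."""
    return (pounds_saved * LOW_COST_PER_LB, pounds_saved * HIGH_COST_PER_LB)

low, high = launch_savings(100)  # hypothetical 100 lb smallsat mass reduction
print(f"${low:,.0f} to ${high:,.0f} saved per launch")
```

This is why the card treats even marginal miniaturization as economically significant: a triple-digit-pound reduction translates to millions of dollars per launch.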
These innovations, changes to global networks, and new opportunities could lead to wider economic growth. Strong Innovation solves Extinction. Matthews 18 Dylan Matthews 10-26-2018 “How to help people millions of years from now” https://www.vox.com/future-perfect/2018/10/26/18023366/far-future-effective-altruism-existential-risk-doing-good (Co-founder of Vox, citing Nick Beckstead @ Rutgers University)Re-cut by Elmer If you care about improving human lives, you should overwhelmingly care about those quadrillions of lives rather than the comparatively small number of people alive today. The 7.6 billion people now living, after all, amount to less than 0.003 percent of the population that will live in the future. It’s reasonable to suggest that those quadrillions of future people have, accordingly, hundreds of thousands of times more moral weight than those of us living here today do. That’s the basic argument behind Nick Beckstead’s 2013 Rutgers philosophy dissertation, “On the overwhelming importance of shaping the far future.” It’s a glorious mindfuck of a thesis, not least because Beckstead shows very convincingly that this is a conclusion any plausible moral view would reach. It’s not just something that weird utilitarians have to deal with. And Beckstead, to his considerable credit, walks the walk on this. He works at the Open Philanthropy Project on grants relating to the far future and runs a charitable fund for donors who want to prioritize the far future. And arguments from him and others have turned “long-termism” into a very vibrant, important strand of the effective altruism community. But what does prioritizing the far future even mean? The most literal thing it could mean is preventing human extinction, to ensure that the species persists as long as possible. 
For the long-term-focused effective altruists I know, that typically means identifying concrete threats to humanity’s continued existence — like unfriendly artificial intelligence, or a pandemic, or global warming/out of control geoengineering — and engaging in activities to prevent that specific eventuality. But in a set of slides he made in 2013, Beckstead makes a compelling case that while that’s certainly part of what caring about the far future entails, approaches that address specific threats to humanity (which he calls “targeted” approaches to the far future) have to complement “broad” approaches, where instead of trying to predict what’s going to kill us all, you just generally try to keep civilization running as best it can, so that it is, as a whole, well-equipped to deal with potential extinction events in the future, not just in 2030 or 2040 but in 3500 or 95000 or even 37 million. In other words, caring about the far future doesn’t mean just paying attention to low-probability risks of total annihilation; it also means acting on pressing needs now. For example: We’re going to be better prepared to prevent extinction from AI or a supervirus or global warming if society as a whole makes a lot of scientific progress. And a significant bottleneck there is that the vast majority of humanity doesn’t get high-enough-quality education to engage in scientific research, if they want to, which reduces the odds that we have enough trained scientists to come up with the breakthroughs we need as a civilization to survive and thrive. So maybe one of the best things we can do for the far future is to improve school systems — here and now — to harness the group economist Raj Chetty calls “lost Einsteins” (potential innovators who are thwarted by poverty and inequality in rich countries) and, more importantly, the hundreds of millions of kids in developing countries dealing with even worse education systems than those in depressed communities in the rich world. 
What if living ethically for the far future means living ethically now? Beckstead mentions some other broad, or very broad, ideas (these are all his descriptions):
- Help make computers faster so that people everywhere can work more efficiently
- Change intellectual property law so that technological innovation can happen more quickly
- Advocate for open borders so that people from poorly governed countries can move to better-governed countries and be more productive
- Meta-research: improve incentives and norms in academic work to better advance human knowledge
- Improve education
- Advocate for political party X to make future people have values more like political party X
“If you look at these areas (economic growth and technological progress, access to information, individual capability, social coordination, motives) a lot of everyday good works contribute,” Beckstead writes. “An implication of this is that a lot of everyday good works are good from a broad perspective, even though hardly anyone thinks explicitly in terms of far future standards.” Look at those examples again: It’s just a list of what normal altruistically motivated people, not effective altruism folks, generally do. Charities in the US love talking about the lost opportunities for innovation that poverty creates. Lots of smart people who want to make a difference become scientists, or try to work as teachers or on improving education policy, and lord knows there are plenty of people who become political party operatives out of a conviction that the moral consequences of the party’s platform are good. All of which is to say: Maybe effective altruists aren’t that special, or at least maybe we don’t have access to that many specific and weird conclusions about how best to help the world. If the far future is what matters, and generally trying to make the world work better is among the best ways to help the far future, then effective altruism just becomes plain ol’ do-goodery.
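The "less than 0.003 percent" figure in the Matthews card is checkable arithmetic: 7.6 billion is 0.003 percent of roughly 250 trillion, so any future population in the quadrillions puts today's share even lower. A quick sanity check (the one-quadrillion figure below is an assumed stand-in for the card's unspecified "quadrillions"):

```python
# Sanity-check the card's claim that the 7.6 billion people alive today
# are "less than 0.003 percent" of the total future population.
# One quadrillion is an assumed stand-in for the card's "quadrillions".
current_population = 7.6e9   # people alive today, per the card
future_population = 1e15     # one quadrillion (assumption)

share_percent = current_population / future_population * 100
print(f"today's share: {share_percent:.5f}% of all future people")
```

At one quadrillion future people the share is 0.00076 percent, comfortably under the card's 0.003 percent ceiling; larger future populations only strengthen the claim.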
2/20/22
JF - DA - Starlink Ag
Tournament: Palm Classic | Round: Triples | Opponent: Catonsville AT | Judge: James Stuckert, Truman Le, John Boals 3 Starlink is key to Precision Ag – key to food sustainability and increasing food supply to account for exponential population growth. Greensight 21 3-15-2021 "Can Starlink Save the World by Connecting Farms?" https://www.greensightag.com/logbook/can-starlink-save-the-world-by-connecting-farms/ (Data Management Consulting Firm)Elmer GreenSight innovates in a number of different areas, but one of the areas we are most passionate about is in agriculture. We’ve deployed our drone intelligence systems all over the world at all sorts of different facilities. One of the most challenging has been deployments at farms, and one of the biggest challenges has been connectivity. Connected farms are a requirement to feed the world, and Starlink will make that happen. Most urban and suburban households in the United States have had easy and reasonably inexpensive access to high speed internet access for 20 years. It is easy to forget that the situation is not the same for rural areas of the country. Many areas have no access to high speed, “broadband”, internet access, with some having only dialup internet access in their homes. According to the 2015 FCC broadband report, only 53 percent of rural households have access to high speed internet, even using low standards for “high” speed. On average farms have even less access, and that doesn’t even include high speed connectivity out in their fields. Cellular service is spotty especially on large farms in primarily agricultural areas, and legacy satellite systems provide slow upload speeds at expensive prices. Utilizing modern internet connected technologies and cloud based systems that require constant, high speed access can be a challenge at best and potentially impossible. 
A 2016 research study by Goldman Sachs projected that by 2050, the world’s food production efficiency needs to increase by 50 percent to support our growing population. This paper backs up this conclusion with a lot of research, but the fundamental conclusion is that farming land area is unlikely to increase nor will the number of farmers. Increases in global food production must come from productivity boosts. Researchers feel that productivity improvements from chemistry and genomics are unlikely to yield significant increases as they have in the past. They predict that the most likely area for these improvements are with precision farming techniques, notably precision planting and precision application of chemicals and water. The term “Precision Agriculture” was coined in the late 1960s and 1970s in seminal research that projected that in the future farming would be driven by data with inputs and practices varied and optimized based on weather, measurements from the field, and accurate year over year yield measurements. Since then, many tools and technologies have been developed that have made true precision agriculture more and more practical. Precision RTK GPS can guide equipment with precision better than an inch. Drones and satellite mapping of fields using remote sensing can map out health and detect problems with the crops. In-field IoT sensors will stream live data (such as our partners Soil Scout). Soil genomics and analysis can analyze macro and micro nutrient content of the soil and track the genetics of the soil microbiome (like our friends at Trace Genomics). Robotic and automated farming equipment (like our partners at Monarch Tractor and Husqvarna are building) can vary applications and planting according to precomputed variable rate application maps. Despite all these breakthroughs, precision farming techniques still have a low penetration. There are many reasons for this (more than could be discussed in this article!) 
but one of them is inadequate connectivity. Most of these modern technologies rely on access to the internet and in many cases it just isn’t possible. For decades subsidies and programs have been rolled out to improve rural connectivity but the reality is that connecting up far flung areas is expensive, often labor intensive, and consequently from a pure business standpoint does not make sense for the connectivity providers. Even as infrastructure expands to more remote areas, there will always remain large swaths of rural America where conventional connectivity infrastructure is highly impractical. Most of GreenSight’s data processing is done in the cloud. Several gigabytes of imagery data are uploaded from our aircraft after every flight to be processed and delivered to our customers. Our custom artificial intelligence analyses the data and alerts farmers to problem areas. From many remote farm fields, uploading can be a slow process. We’ve invested heavily in the portability of our systems and our upcoming next generation aircraft will be capable of onboard processing, but despite this, connectivity will still be needed to make data available for farmers and other automated agriculture systems. Advanced sensing systems like ours have to be able to integrate with connected robotic sprayers, harvesters and tractors, unlocking the productivity potential of precision agriculture. Humanity needs precision agriculture, and connected data-driven systems will be a big part of that revolution. Beyond the global necessity, the economics for farmers work too! A 2018 USDA study indicates that connecting US farmland will unlock $50B in industry revenue. We are extremely excited about Starlink and its potential to bring cost effective internet connectivity to farms and rural areas. Starlink levels the playing field for rural areas, enabling high speed connectivity everywhere. 
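The Goldman Sachs projection quoted in this card (50 percent more food production efficiency by 2050) can be restated as an implied annual productivity target. A rough sketch, assuming roughly a 30-year horizon (an assumption; the card does not state the projection's baseline year):

```python
# Convert the card's "+50% food production efficiency by 2050" target into
# an implied compound annual productivity growth rate. The 30-year horizon
# is an assumption; the card gives no baseline year.
target_growth = 1.50   # 50% total increase by 2050
years = 30             # assumed horizon

annual_rate = target_growth ** (1 / years) - 1
print(f"~{annual_rate * 100:.2f}% productivity growth needed per year")
```

Under those assumptions the target works out to roughly 1.4 percent compounded yearly growth, which is the gap precision agriculture is being asked to close once land area and farmer numbers are held flat.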
No longer will farmers have to wait for high speed wired connectivity to come to their area or install a complex mesh network on their property. IoT data can be streamed from fields as easily as it now streams from urban homes. Starlink will be a catalyzing force for change, advancing access to precision agriculture globally and contributing to solving global food challenges. Food Insecurity goes nuclear – escalates multiple hotspots. Cribb 19 Julian Cribb 8-23-2019 “Food or War” https://www.cambridge.org/core/books/abs/food-or-war/hotspots-for-food-conflict-in-the-twentyfirst-century/1CD674412E09B8E6F325C9C0A0A6778A (principal of Julian Cribb and Associates, who provide specialist consultancy in the communication of science, agriculture, food, mining, energy and the environment. His published work includes over 8,000 articles, 3,000 media releases and eight books. He has received 32 awards for journalism.)Elmer Future Food Wars The mounting threat to world peace posed by food, climate and ecosystems increasingly compromised and unstable was emphasised by the US Director of National Intelligence, Dan Coats, in a briefing to the US Senate in early 2019. 'Global environmental and ecological degradation, as well as climate change, are likely to fuel competition for resources, economic distress, and social discontent through 2019 and beyond', he said. 'Climate hazards such as extreme weather, higher temperatures, droughts, floods, wildfires, storms, sea level rise, soil degradation, and acidifying oceans are intensifying, threatening infrastructure, health, and water and food security. Irreversible damage to ecosystems and habitats will undermine the economic benefits they provide, worsened by air, soil, water, and marine pollution.' Boldly, Coats delivered his warning at a time when the US President, Trump, was attempting to expunge all reference to climate from government documents. 
Based upon these recent cases of food conflicts, and upon the lessons gleaned from the longer history of the interaction between food and war, several regions of the planet face a greatly heightened risk of conflict towards the mid twenty-first century. Food wars often start out small, as mere quarrels over grazing rights, access to wells or as one faction trying to control food supplies and markets. However, if not resolved quickly these disputes can quickly escalate into violence, then into civil conflagrations which, if not quelled, can in turn explode into crises that reverberate around the planet in the form of soaring prices, floods of refugees and the involvement of major powers — which in turn carries the risk of transnational war. The danger is magnified by swollen populations, the effects of climate change, depletion of key resources such as water, topsoil and nutrients, the collapse of ecosystem services that support agriculture and fisheries, universal pollution, a widening gap between rich and poor, and the rise of vast megacities unable to feed themselves (Figure 5.3). Each of the world's food 'powderkeg regions' is described below, in ascending order of risk. United States In one sense, food wars have already broken out in the United States, the most overfed country on Earth. Here the issue is chiefly the growing depletion of the nation's mighty groundwater resources, especially in states using it for food production, and the contest over what remains between competing users — farmers, ranchers and Native Americans on the one hand and the oil, gas and mining industry on the other. Concern about the future of US water supplies was aggravated by a series of savage droughts in the early twenty-first century in the west, south and midwest linked to global climate change and declining snowpack in the Rocky Mountains, both of which affect not only agriculture but also the rate at which the nation's groundwater reserves recharge. 
'Groundwater depletion has been a concern in the Southwest and High Plains for many years, but increased demands on our groundwater resources have overstressed aquifers in many areas of the Nation, not just in arid regions', notes the US Geological Survey. Nine US states depend on groundwater for between 50 per cent and 80 per cent of their total freshwater supplies, and five states account for nearly half of the nation's groundwater use. Major US water resources, such as the High Plains aquifers and the Pacific Northwest aquifers, have sunk by 30–50 metres (100–150 feet) since exploitation began, imperilling the agricultural industries that rely on them. In the arid south-west, aquifer declines of 100–150 metres have been recorded (Figure 5.4). To take but one case, the famed Ogallala Aquifer in the High Plains region supports cropping industries worth more than US $20 billion a year and was in such a depleted state it would take more than 6,000 years to replace by natural infiltration the water drawn from it by farmers in the past 150 years. As it dwindles, some farmers have tried to kick their dependence on groundwater; other users, including the growing cities and towns of the region, proceeded to mine it as if there was no tomorrow. A study by Kansas State University concluded that so far, 30 per cent of the local groundwater had been extracted and another 39 per cent would be depleted by the mid century on existing trends in withdrawal and recharge. Over half the US population relies on groundwater for drinking; both rural and urban America are at risk. Cities such as New Orleans, Houston and Miami face not only rising sea levels — but also sinking land, due to the extraction of underlying groundwater. In Memphis, Tennessee, the aquifer that supplies the city's drinking water has dropped by 20 metres. 
Growing awareness of the risk of a nation, even one as large and technologically adept as the USA, having insufficient water to grow its food, generate its exports and supply its urban homes has fuelled tensions leading to the eruption of nationwide protests over 'fracking' for oil and gas — a process that can deplete or poison groundwater — and the building of oil pipelines, which have a habit of rupturing and also polluting water resources. The boom in fracking and piping is part of a deliberate US policy to become more self-reliant in fossil fuels. Thus, in its anxiety to be independent of overseas energy suppliers, the USA in effect decided to barter away its future food security for current oil security — and the price of this has been a lot of angry farmers, Native Americans and concerned citizens. The depletion of US groundwater coincides with accelerating climate risk, which may raise US temperatures by as much as 4–5°C by 2100, leading to major losses in soil moisture throughout the US grain belt, and the spread of deserts in the south and west. Food production will also be affected by fiercer storms, bigger floods, more heatwaves, an increase in drought frequency and greater impacts from crop and livestock diseases. In such a context, it is no time to be wasting stored water. The case of the USA is included in the list of world 'hot spots' for future food conflict, not because there is danger of a serious shooting war erupting over water in America in the foreseeable future, but to illustrate that even in technologically advanced countries unforeseen social tensions and crises are on the rise over basic resources like food, land and water and their depletion. This doesn't just happen in Africa or the Middle East. It's a global phenomenon. Furthermore, the USA is the world's largest food exporter and any retreat on its part will have a disproportionate effect on world food price and supply. 
There is still plenty of time to replan America's food systems and water usage — but, as in the case of fossil fuels and climate, rear-guard action mounted by corporate vested interests and their hired politicians may well paralyse the national will to do it. That is when the US food system could find itself at serious risk, losing access to water in a time of growing climatic disruption, caused by exactly the same forces as those depleting the groundwater: the fossil fuels sector and its political stooges. The probable effect of this will, in the first instance, be a decline in US meat and dairy production accompanied by rising prices and a fall in its feedgrain exports, with domino effects on livestock industries worldwide. The flip-side to this issue is that America's old rival, Russia, is likely to gain in both farmland and water availability as the planet warms through the twenty-first century — and likewise Canada. Both these countries stand to prosper from a US withdrawal from world food markets, and together they may negate the effects of any US food export shortfalls. Central and South America South America is one of the world's most bountiful continents in terms of food production — but, after decades of improvement, malnutrition is once more on the rise, reaching a new peak of 42.5 million people affected in 2016. 'Latin America and the Caribbean used to be a worldwide example in the fight against hunger. We are now following the worrisome global trend', said regional FAO representative Julio Berdegué. Paradoxically, obesity is increasing among Latin American adults, while malnutrition is rising among children. 'Although Latin America and the Caribbean produce enough food to meet the needs of their population, this does not ensure healthy and nutritious diets', the FAO explains. 
Worsening income inequality, poor access to food and persistent poverty are contributing to the rise in hunger and bad diets, it adds. 'The impact of climate change in Latin America and the Caribbean will be considerable because of its economic dependence on agriculture, the low adaptive capacity of its population and the geographical location of some of its countries', an FAO report warned. Emerging food insecurity in Central and Latin America is being driven by a toxic mixture of failing water supplies, drying farmlands, poverty, maladministration, incompetence and corruption. These issues are exacerbated by climate change, which is making the water supply issue worse for farmers and city people alike in several countries and delivering more weather disasters to agriculture. Mexico has for centuries faced periodic food scarcity, with a tenth of its people today suffering under-nutrition. In 2008 this rose to 18 per cent, leading to outbreaks of political violence. In 2013, 52 million Mexicans were suffering poverty and seven million more faced extreme hunger, despite the attempts of successive governments to remedy the situation. By 2100 northern Mexico is expected to warm by 4–5°C and southern Mexico by 1.5–2.5°C. Large parts of the country, including Mexico City, face critical water scarcity. Mexico's cropped area could fall by 40–70 per cent by the 2030s and disappear completely by the end of the century, making it one of the world's countries most at risk from catastrophic climate change and a major potential source of climate refugees. The vanishing lakes and glaciers of the high Andes confront montane nations — Bolivia, Peru and Chile especially — with the spectre of growing water scarcity and declining food security. 
The volume of many glaciers, which provide meltwater to the region's rivers, which in turn irrigate farmland, has halved since 1975. Bolivia's second largest water body, the 2,000-square-kilometre Lake Poopó, dried out completely. The loss of water is attributed partly to El Niño droughts, partly to global warming and partly to over-extraction by the mining industries of the region. Chile, with 24,000 glaciers (80 per cent of all those in Latin America) is feeling the effects of their retreat and shrinkage especially, both in large cities such as the capital Santiago, and in irrigation agriculture and energy supply. Chile is rated by the World Resources Institute among the countries most likely to experience extreme water stress by 2040. Climate change is producing growing water and food insecurity in the 'dry corridor' of Central America, in countries such as El Salvador, Guatemala and Honduras. Here a combination of drought, major floods and soil erosion is undermining efforts to raise food production and stabilise nutrition. Food production in Venezuela began falling in the 1990s, and by the late 2010s two thirds of the population were malnourished; there was a growing flood of refugees into Colombia and other neighbouring countries. The food crisis has been variously blamed on the Venezuelan government's 'Great Leap Forward' (modelled on that of China — which also caused widespread starvation), a halving in Venezuela's oil export earnings, economic sanctions by the USA, and corruption. However, local scientists such as Nobel Laureate Professor Juan Carlos Sanchez warn that climate impacts are already striking the densely populated coastal regions with increased torrential rains, flooding and mudslides, droughts and hurricanes, while inland areas are drying out and desertifying, leading to crop failures, water scarcity and a tide of climate refugees. These factors will tend to deepen food insecurity towards the mid century. 
Venezuela's climate refugees are already making life more difficult for neighbouring countries such as Colombia. Deforestation in the Brazilian Amazon has, in recent decades, removed around 20 per cent of its total tree cover, replacing it with dry savannah and farmland. At 40 per cent clearance and with continued global warming, scientists anticipate profound changes in the local climate, towards a drying trend, which will hammer the agriculture that has replaced the forest. Brazil has already wiped out the once-vast Mata Atlantica forest along its eastern coastline, and this region is now drying, with resultant water stress for both farming and major cities like São Paulo. Brazil's outlook for 2100 is for further drying — tied to forest loss as well as global climate change — increased frequency of drought and heatwaves, major fires and acute water scarcity in some regions. Moreover, as the Amazon basin dries out, it will release vast quantities of CO2 from its peat swamps and rainforest soils. These are thought to contain in excess of three billion tonnes of carbon and could cause a significant acceleration in global warming, affecting everyone on Earth. Latin America is the world capital of private armies, with as many as 50 major guerrilla groups, paramilitaries, terrorist, indigenous and criminal insurgencies over the past half century, exemplified in familiar names like the Sandinistas (Nicaragua), FARC (Colombia) and Shining Path (Peru). Many of these drew their initial inspiration from the international communist movement of the mid twentieth century, while others are right-wing groups set up in opposition to them or else represent land rights movements of disadvantaged groups. However, all these movements rely for oxygen on simmering public discontent with ineffectual or corrupt governments and lack of fair access to food, land and water generally. 
In other words, the tendency of South and Central America towards internal armed conflict is supercharged significantly by failings in the food system which generate public anger, leading to sympathy and support for anyone seen to be challenging the incumbent regimes. This is not to suggest that feeding every person well would end all insurgencies — but it would certainly take the wind of popular support out of a lot of their sails. In that sense the revolutionary tendency of South America echoes the preconditions for revolution in France and Russia in the eighteenth and twentieth centuries. Central Asia The risk of wars breaking out over water, energy and food insecurity in Central Asia is high. Here, the five main players — Kazakhstan, Uzbekistan, Turkmenistan, Tajikistan and Kyrgyzstan — face swelling populations, crumbling Soviet-era infrastructure, flagging resource cooperation, a degrading landscape, deteriorating food availability and a changing climate. At the heart of the issue and the region's increasingly volatile politics is water: 'Without water in the region's two great rivers — the Syr Darya and the Amu Darya — vital crops in the downstream agricultural powerhouses would die. Without power, life in the upstream countries would be unbearable in the freezing winters', wrote Rustam Qobil. Central Asia's water crisis first exploded onto the global consciousness with the drying of the Aral Sea — the world's fourth largest lake — from the mid 1960s, following the damming and draining of major rivers such as the Amu Darya, Syr Darya and Naryn. It was hastened by a major drought in 2008, exacerbated by climate change, which is melting the 'water tower' of glacial ice stored in the Tien Shan, Pamir and Hindu Kush mountain ranges that feed the region's rivers. The Tien Shan alone holds 10,000 glaciers, all of them in retreat, losing an estimated 223 million cubic metres a year. 
At such a rate of loss the region's rivers will run dry within a generation. Lack of water has already delivered a body blow to Central Asia's efforts to modernise its agriculture, adding further tension to regional disputes over food, land and water. 'Water has always been a major cause of wars and border conflicts in the Central Asian region', policy analyst Fuad Shahbazov warned. This potential for conflict over water has been exacerbated by disputes over the Fergana valley, the region's greatest foodbowl, which underwent a 32 per cent surge in population in barely ten years — while more and more of it turned to desert. The Central Asian region is ranked by the World Resources Institute as one of the world's most perilously water-stressed regions to 2040 (Figure 5.6). With their economies hitting rock bottom, corrupt and autocratic governments that prefer to blame others for their problems and growing quarrels over food, land, energy and water, the 'Stans' face 'a perfect storm', Nate Schenkkan wrote in the journal Foreign Policy. Increased meddling by Russia and China is augmenting the explosive mix: China regards Central Asia as a key component of its 'Belt and Road' initiative intended to expand its global influence, whereas Russia hopes to lure the region back into its own economic sphere. Their rival investments may help limit some of the problems faced by Central Asia — or they may unlock a fresh cycle of political feuding, turmoil and regime change. A 2017 FAO report found 14.3 million people — one in every five — in Central Asia did not have enough to eat and a million faced actual starvation, children especially. It noted that after years of steady improvement, the situation was deteriorating. This combination of intractable and deteriorating factors makes Central Asia a serious internal war risk towards the mid twenty-first century, with involvement by superpowers raising the danger of international conflict and mass refugee flight. 
The Middle East

The Middle East is the most water-stressed region on Earth (see Figure 5.5 above). It is 'particularly vulnerable to climate change. It is one of the world's most water-scarce and dry regions, with a high dependency on climate-sensitive agriculture and a large share of its population and economic activity in flood-prone urban coastal zones', according to the World Bank.49 The Middle East — consisting of the 22 countries of the Arab League, Turkey and Iran — has very low levels of natural rainfall to begin with. Most of it has 600 millimetres or less per year and is classed as arid. 'The Middle East and North Africa (MENA) is a global hotspot of unsustainable water use, especially of groundwater. In some countries, more than half of current water withdrawals exceed what is naturally available', the Bank said in a separate report on water scarcity.50 'The climate is predicted to become even hotter and drier in most of the MENA region. Higher temperatures and reduced precipitation will increase the occurrence of droughts. It is further estimated that an additional 80–100 million people will be exposed by 2025 to water stress', the Bank added. The region's population of 300 million in the late 2010s is forecast to double to 600 million by 2050. Average temperatures are expected to rise by 3–5°C and rainfall will decrease by around 20 per cent. The result will be vastly increased water stress, accelerated desertification, growing food insecurity and a rise in sea levels displacing tens of millions from densely populated, low-lying areas like the Nile delta.51 The region is deemed highly vulnerable to climate impacts, warns a report by the UN Development Programme. 'Current climate change projections show that by the year 2025, the water supply in the Arab region will be only 15 per cent of levels in 1960. With population growth around 3 per cent annually and deforestation spiking to 4 per cent annually...
the region now includes 14 of the world's 20 most water-stressed countries.'52 The Middle East/North Africa (MENA) region has 6 per cent of the world's population with only 1.5 per cent of the world's fresh water reserves to share among them. This means that the average citizen already has about a third less water than the minimum necessary for a reasonable existence — many have less than half, and populations are growing rapidly. Coupled with political chaos and ill governance in many countries, growing religious and ethnic tensions between different groups — often based on centuries-old disputes — a widening gap between rich and poor and foreign meddling by the USA, Russia and China, shortages of food, land and water make the Middle East an evident cauldron for conflict in the twenty-first century. Growing awareness of their food risk has impelled some oil-rich Arab states into an international farm buying spree, purchasing farming, fishing and food processing companies in countries as assorted as South Sudan, Ethiopia, the Philippines, Ukraine, the USA, Poland, Argentina, Australia, Brazil and Morocco. In some food-stressed countries these acquisitions have already led to riots and killings.53 The risk is high that, by exporting its own food–land–water problems worldwide, especially to regions already facing scarcity, the Middle East could propagate conflicts and government collapses around the globe. This is despite the fact that high-tech solar desalination, green energy, hydroponics, aquaponics and other intensive urban food production technologies make it possible for the region to produce far more of its own food locally, if not to be entirely self-sufficient. Dimensions of the growing crisis in the Middle East include the following. Wars have already broken out in Syria and Yemen in which scarcity of food, land and water were prominent among the tensions that led to conflict between competing groups.
- Food, land and water issues feed into and exacerbate already volatile sentiment over religion, politics, corruption, mismanagement and foreign interference by the USA, China and Russia.
- The introduction of cheap solar-powered and diesel pumps has accelerated the unsustainable extraction of groundwater throughout the region, notably in countries like Libya, Egypt, Saudi Arabia and Morocco.54
- Turkish building of new dams to monopolise waters flowing across its borders is igniting scarcity and potential for conflict with downstream nations, including Iraq, Iran and Syria.55
- Egypt's lifeline, the Nile, is threatened by Ethiopian plans to dam the Blue Nile, with tensions that some observers consider could lead to a shooting war.56
- There are very low levels of water recycling throughout the region, while water use productivity is about half that of the world as a whole.
- There is a lack of a sense of citizen responsibility for water and food scarcity throughout the region.
- Land grabs around the world by oil-rich states are threatening to destabilise food, land and water in other countries and regions, causing conflict.
- A decline in oil prices and the displacement of oil by the global renewables revolution may leave the region with fewer economic options for solving its problems.
- There is a risk that acquisition of a nuclear weapon by Iran may set off a nuclear arms race in the region, with countries such as Saudi Arabia, Syria and possibly Turkey following suit and Israel rearming to stay in the lead. This would translate potential food, land and water conflicts into the atomic realm.

Together these issues, and failure to address their root causes, make the Middle East a fizzing powder keg in the twenty-first century. The question is when and where, not whether, it explodes — and whether the resulting conflict will involve the use of weapons of mass destruction, including nuclear, thus affecting the entire world.
China

China is the world's biggest producer, importer and consumer of food. Much of the landmass of the People's Republic of China (PRC) is too mountainous or too arid for farming, but the rich soils of its eastern and southern regions are highly productive provided sufficient water is available and climate impacts are mild. Those, however, are very big 'ifs'. In 1995, American environmentalist Lester R. Brown both irked and aroused the PRC Communist Party bosses with a small, hard-hitting book entitled Who Will Feed China? Wake-Up Call for a Small Planet.57 In it he posited that Chinese population growth was so far out of control that the then-agricultural system could not keep up, and China would be forced to import vast amounts of grain, to the detriment of food prices and availability worldwide. His fears, so far, have not been realised — not because they were unsoundly based, but because China managed — just — to stay abreast of rising food demand by stabilising and subsidising grain prices, restoring degraded lands, boosting agricultural science and technology, piping water from south to north, developing high-intensity urban farms, buying up foreign farmland worldwide and encouraging young Chinese to leave the country. What Brown didn't anticipate was the economic miracle that made China rich enough to afford all this. However, his essential thesis remains valid: China's food supply will remain on a knife-edge for the entire twenty-first century, vulnerable especially to water scarcity and climate impacts. If the nation outruns its domestic resources yet still has to eat, it may well be at the expense of others globally. Some western commentators were puzzled when China scrapped its 35-year 'One Child Policy' in 2015, but in fact the policy had done its job, shaving around 300 million people off the projected peak of Chinese population. It was also causing serious imbalances, such as China's huge unmarried male surplus.
Furthermore, rising urbanisation and household incomes meant Chinese parents no longer wanted large families, as in the past. Policy or no policy, China's birthrate has continued to fall and by 2018 was 1.6 babies per woman — well below replacement, lower than the USA and nearly as low as Germany. Its population was 1.4 billion, but this was growing at barely 0.4 per cent a year, with the growth due at least in part to lengthening life expectancy.58 For China, female fertility is no longer the key issue. The critical issue is water. And the critical region is the north, where 41 per cent of the population reside. Here surface and groundwaters — which support not only the vast grain and vegetable farming industries of the North China Plain but also burgeoning megacities like Beijing, Tianjin and Shenyang — have been vanishing at an alarming rate. 'In the past 25 years, 28,000 rivers have disappeared. Groundwater has fallen by up to 1–3 metres a year. One consequence: parts of Beijing are subsiding by 11 cm a year. The flow of the Yellow River, water supply to millions, is a tenth of what it was in the 1940s; it often fails to reach the sea. Pollution further curtails supply: in 2017, 8.8 per cent of water was unfit even for agricultural or industrial use', the Financial Times reported.59 On the North China Plain, annual consumption of water for all uses, including food production, is about 27 billion cubic metres a year — compared with an annual water availability of 22 billion cubic metres, a deficit that is made up by the short-term expedient of mining the region's groundwater.60 To stave off disaster, the PRC has built a prodigious network of canals and pipelines from the Yangtse River in the water-rich south, to Beijing in the water-starved north.
Hailed as a 'lifeline', the South—North Water Transfer Project had two drawbacks: first, the fossil energy required to pump millions of tonnes of water over a thousand kilometres and, second, the fact that while the volume was sufficient to satisfy the burgeoning cities for a time, it could not supply and distribute enough clean water to meet the needs of irrigated farming over so vast a region in the long run, nor meet those of its planned industrial growth.61 Oft-mouthed 'solutions' like desalination or the piping of water from Tibet or Russia face similar drawbacks: demand is too great for the potential supply and the costs, both financial and environmental, prohibitive. China is already among the world's most water-stressed nations. The typical Chinese citizen has a 'water footprint' of 1071 cubic metres a year — three quarters of the world average (1385 cubic metres), and scarcely a third that of the average American (2842 cubic metres).62 Of this water, 62 per cent is used to grow food to feed the Chinese population — and 90 per cent is so polluted it is unfit to drink or use in food processing. Despite massive investment in water infrastructure and new technology, many experts doubt that China can keep pace with the growth in its demand for food, at least within its own borders, chiefly because of water scarcity.63 Adding to the pressure is that China's national five-year plans for industrialisation demand massive amounts more water — demands that may confront China with a stark choice between food and economic growth. 'The Chinese government is moving too slowly towards the Camel Economy. It has plans, incentives for officials; it invests in recycling, irrigation, pollution, drought resistant crops; it leads the world in high voltage transmission (to get hydro, wind and solar energy from the west of China). None of this is sufficient or likely to be in time', the Financial Times opined. 
As the world's leading carbon emitter, China is more responsible for climate change than any other country. It is also, potentially, more at risk. The main reason, quite simply, is the impact of a warming world on China's water supply — in the form of disappearing rivers, lakes, groundwater and mountain glaciers along with rising sea levels. To this is coupled the threat to agriculture from increasing weather disasters and the loss of ecosystem services from a damaged landscape.65 China is thus impaled on the horns of a classic dilemma. Without more water it cannot grow its economy sufficiently to pay for the water-conserving and food-producing technologies and infrastructure it needs to feed its people. Having inadvertently unleashed a population explosion with its highly successful conversion to modern farming systems, the challenge for China now is to somehow sustain its food supply through the population peak of the mid twenty-first century, followed by a managed decline to maybe half of today's numbers by the early twenty-second century. It is far from clear whether the present approach — improving market efficiency, continuing to modernise agricultural production systems, pumping water, trying to control soil and water losses and importing more food from overseas — will work.66 China has pinned its main hopes on technology to boost farm yields and improve water distribution and management. Unfortunately, it has selected the unsustainable American industrial farming model to do this — which involves the massive use of water, toxic chemicals, fertilisers, fossil fuels and machines. This in turn is having dreadful consequences for China's soils, waters, landscapes, food supply, air, climate and consumer health. Serious questions are now being asked about whether such an approach is simply digging the hole China is in even deeper.
Furthermore, some western analysts are sceptical whether the heavy hand of state control is up to the task of generating the levels of innovation required to feed China sustainably.67 Plan B, which is to purchase food from other countries, or import it from Chinese-owned farming and food ventures around the world, faces similar difficulties. Many of the countries where China is investing in food production themselves face a slow-burning crisis of land degradation, water scarcity, surging populations and swelling local food demand. By exporting its own problems, China is adding to their difficulties. While there may be some truth to the claim that China is helping to modernise food systems in Africa, for example, it is equally clear that the export of food at a time of local shortages could have dire consequences for Africans, leading to wars in Africa and elsewhere. How countries will react to Chinese pressure to export food in the face of their own domestic shortages is, as yet, unclear. If they permit exports, it could prove catastrophic for their own people and governments — but if they cut them off, it could be equally catastrophic for China. Such a situation cannot be regarded as anything other than a menace to world peace. Around 1640, a series of intense droughts caused widespread crop failures in China, leading to unrest and uprisings which, in 1644, brought down the Ming Dynasty. A serious domestic Chinese food and water crisis today — driven by drought, degradation of land and water and climate change in northern China coupled with failure in food imports — could cause a re-run of history: 'The forthcoming water crisis may impact China's social, economic, and political stability to a great extent', a US Intelligence Assessment found. 'The adverse impacts of climate change will add extra pressure to existing social and resource stresses.'68 Such events have the potential to precipitate tens, even hundreds, of millions of emigrants and refugees into countries all over the world, with domino consequences for those countries that receive them. Strategic analysts have speculated that tens of millions of desperate Chinese flooding into eastern Russia, or even India, could lead to war, including the risk of international nuclear exchange.69 Against such a scenario are the plain facts that China is a technologically advanced society, with the foresight, wealth and capacity to plan and implement nationwide changes and the will, if necessary, to enforce them. Its leaders are clearly alert to the food and water challenge — and its resolution may well depend on the extent of water recycling they are able to achieve. As to whether the PRC can afford the cost of transitioning from an unsustainable to a sustainable food system, all countries have a choice between unproductive military spending and feeding their populace. A choice between food or war. It remains to be seen which investment China favours. However, it is vital to understand that the problem of whether China can feed itself through the twenty-first century is not purely a Chinese problem. It's a problem, both economic and physical, for the entire planet — and it is thus in everyone's best interest to help solve it. For this reason, China is rated number 3 on this list of potential food war hotspots.

Africa

Food wars — that is, wars in which food, land and water play a significant contributing role — have been a constant in the story of Africa since the mid twentieth century, indeed, far longer. In a sense, the continent is already a microcosm of the world of the twenty-first century as climate change and resource scarcity combine with rapid population growth to ratchet up the tensions that lead competing groups to fight, whether the superficial distinctions between them are ethnic, religious, social or political.
We have examined the particular cases of Rwanda, South Sudan and the Horn of Africa — but there are numerous other African conflicts, insurgencies and ongoing disturbances in which food, land and water are primary or secondary triggers and where famine is often the outcome: Nigeria, Congo, Egypt, Tunisia, Libya, Mali, Chad, the Central African Republic, the Maghreb region of the Sahara, Mozambique, Côte d'Ivoire and Zimbabwe have all experienced conflicts in which issues of access to food, land and water were important drivers and consequences. The trajectory of Africa's population in the first two decades of the twenty-first century implies that the number of its people could quadruple from 1.2 billion in 2017 to 4.5 billion by 2100 (Figure 5.6). If fulfilled, this would make Africans 41 per cent of the world population by the end of the century. The UN Population Division's nearer projections are for Africans to outnumber Chinese or Indians at 1.7 billion by 2030, and reach 2.5 billion in 2050, which represents a doubling in the continent's inhabitants in barely 30 years.70 While African fertility rates (babies per woman) remain high by world standards — 4.5 compared with a global average of 2.4 — they have also fallen steeply, from a peak of 8.5 babies in the 1970s. Furthermore, the picture is uneven, with birthrates in most Sub-Saharan countries remaining high (around five to six babies/woman), while those of eight, mainly southern, countries have dropped to replacement or below (i.e. under 2.1). As has been the case around the world, birth rates tend to drop rapidly with the spread of urbanisation, education and economic growth — whereas countries which slide back into poverty tend to experience rising birthrates.
Food access is a vital ingredient in this dynamic: it has been widely observed that better-fed countries tend to have much lower rates of birth and population growth, possibly because people who are food secure lose fewer infants and children in early life and thus are more open to family planning. So, in a real sense, food sufficiency holds one of the keys to limiting the human population to a level sustainable both for Africa and the planet in general. Forecasting the future of Africa is not easy, given the complexity of the interwoven climatic, social, technological and political issues — and many do not attempt it. However, the relentless optimism of the UN and its food agency, the FAO, is probably not justified by the facts as they are known to science — and may have more to do with not wishing to give offence to African governments or discourage donors than with attempting to accurately analyse what may occur. Even the FAO acknowledges, however, that food insecurity is rising across Sub-Saharan Africa as well as other parts. 'In 2017, conflict and insecurity were the major drivers of acute food insecurity in 18 countries and territories where almost 74 million food-insecure people were in need of urgent assistance. Eleven of these countries were in Africa and accounted for 37 million acutely food insecure people; the largest numbers were in northern Nigeria, Democratic Republic of Congo, Somalia and South Sudan', the agency said in its Global Report on Food Crises 2018.71 The FAO also noted that almost one in four Africans was undernourished in 2016 — a total of nearly a quarter of a billion people. The rise in undernourishment and food insecurity was linked to the effects of climate change, natural disasters and conflict, according to Bukar Tijani, the FAO's assistant director general for Africa.72 Even the comparatively prosperous nation of South Africa sits on a conflict knife-edge, according to a scientific study: 'Results indicate that the country exceeds its environmental boundaries for biodiversity loss, marine harvesting, freshwater use, and climate change, and that social deprivation was most severe in the areas of safety, income, and employment, which are significant factors in conflict risk', Megan Cole and colleagues found.73 In the Congo, home to the world's second largest tropical forest, 20 years of civil war had not only slain five million civilians but also decimated the forests and their ecological services on which the nation depended. Researchers found evidence that reducing conflict can also help to reduce environmental destruction: 'Peace-building can potentially be a win for nature as well, and ... conservation organizations and governments should be ready to seize conservation opportunities'.74 As the African population doubles toward the mid century, as its water, soils, forests and economic wealth per capita dwindle, as foreign corporations plunder its riches, as a turbulent climate hammers its herders and farmers — both industrial and traditional — the prospect of Africa resolving existing conflicts and avoiding new ones is receding. The mistake most of the world is making is to imagine this only affects the Africans. The consequences will impact everyone on the planet.
2/13/22
JF - DA - Xi Lashout
Tournament: CPS | Round: 5 | Opponent: Harker AS | Judge: Vanessa Nguyen Cites are broken - check open source
12/20/21
JF - NC - Kant
Tournament: Lex | Round: 4 | Opponent: Ridge SN | Judge: Daniel Shahab Diaz 3 Framework The meta-ethic is procedural moral realism. This entails that moral facts stem from procedures, while substantive realism holds that moral truths exist independently of such procedures, in the empirical world. Prefer procedural realism – 1 Collapses – the only way to verify whether something is a moral fact is by using procedures to warrant it. 2 Uncertainty – our experiences are inaccessible to others, which allows people to say they don't experience the same things; however, a priori principles are universally applied to all agents. 3 Is/Ought Gap – we can only perceive what is, not what ought to be. It's impossible to derive an ought statement from descriptive facts about the world, necessitating a priori premises. Moral law must be universal — our judgements can't only apply to ourselves any more than 2+2=4 can be true only for me – any non-universalizable norm justifies someone's ability to impede on your ends. Prefer – 1 Performativity — freedom is the key to the process of justification of arguments. Willing that we should abide by their ethical theory presupposes that we own ourselves in the first place. 2 All other frameworks collapse — non-Kantian theories source obligations in extrinsically good objects, but that presupposes the goodness of the rational will.
No aff analytics – unpredictable because there are infinite analytical arguments that they can read. 3 TJFs and they outweigh since they preclude engagement on the framework layer – prefer for Resource disparities – our framework ensures big squads don't have a comparative advantage since debates become about quality of arguments rather than quantity – their model crowds out small schools because they have to prep for every unique advantage under each aff, every counterplan, and every disad with carded responses to each of them. Offense Acquisition of property can never be unjust – to create rights violations, there must already be an owner of the property being violated, but that presupposes its appropriation by another entity. Feser, (Edward Feser, 1-1-2005, accessed on 12-15-2021, Cambridge University Press, "THERE IS NO SUCH THING AS AN UNJUST INITIAL ACQUISITION | Social Philosophy and Policy | Cambridge Core", Edward C. Feser is an American philosopher. He is an Associate Professor of Philosophy at Pasadena City College in Pasadena, California. https://www.cambridge.org/core/journals/social-philosophy-and-policy/article/abs/there-is-no-such-thing-as-an-unjust-initial-acquisition/5C744D6D5C525E711EC75F75BF7109D1) brackets for gen lang phs st There is a serious difficulty with this criticism of Nozick, however. It is just this: There is no such thing as an unjust initial acquisition of resources; therefore, there is no case to be made for redistributive taxation on the basis of alleged injustices in initial acquisition. This is, to be sure, a bold claim. Moreover, in making it, I contradict not only Nozick's critics, but Nozick himself, who clearly thinks it is at least possible for there to be injustices in acquisition, whether or not there have in fact been any (or, more realistically, whether or not there have been enough such injustices to justify continual redistributive taxation for the purposes of rectifying them).
But here is a case where Nozick has, I think, been too generous to the other side. Rather than attempt — unsatisfactorily, in the view of his critics — to meet the challenge to show that initial acquisition has not in general been unjust, he ought instead to have insisted that there is no such challenge to be met in the first place. Giving what I shall call "the basic argument" for this audacious claim will be the task of Section II of this essay. The argument is, I think, compelling, but by itself it leaves unexplained some widespread intuitions to the effect that certain specific instances of initial acquisition are unjust and call forth as their remedy the application of a Lockean proviso, or are otherwise problematic. (A "Lockean proviso," of course, is one that forbids initial acquisitions of resources when these acquisitions do not leave "enough and as good" in common for others.) Thus, Section III focuses on various considerations that tend to show how those intuitions are best explained in a way consistent with the argument of Section II. Section IV completes the task of accounting for the intuitions in question by considering how the thesis of self-ownership itself bears on the acquisition and use of property. Section V shows how the results of the previous sections add up to a more satisfying defense of Nozickian property rights than the one given by Nozick himself, and considers some of the implications of this revised conception of initial acquisition for our understanding of Nozick's principles of transfer and rectification. II. The Basic Argument The reason there is no such thing as an unjust initial acquisition of resources is that there is no such thing as either a just or an unjust initial acquisition of resources. The concept of justice, that is to say, simply does not apply to initial acquisition. It applies only after initial acquisition has already taken place.
In particular, it applies only to transfers of property (and derivatively, to the rectification of injustices in transfer). This, it seems to me, is a clear implication of the assumption (rightly) made by Nozick that external resources are initially unowned. Consider the following example. Suppose an individual A seeks to acquire some previously unowned resource R. For it to be the case that A commits an injustice in acquiring R, it would also have to be the case that there is some individual B (or perhaps a group of individuals) against whom A commits the injustice. But for B to have been wronged by A's acquisition of R, B would have to have had a rightful claim over R, a right to R. By hypothesis, however, B did not have a right to R, because no one had a right to it—it was unowned, after all. So B was not wronged and could not have been. In fact, the very first person who could conceivably be wronged by anyone's use of R would be, not B, but A himself, since A is the first one to own R. Such a wrong would in the nature of the case be an injustice in transfer—in unjustly taking from A what is rightfully his—not in initial acquisition. The same thing, by extension, will be true of all unowned resources: it is only after someone has initially acquired them that anyone could unjustly come to possess them, via unjust transfer. It is impossible, then, for there to be any injustices in initial acquisition.7
1/15/22
JF - NC - Kant v2
Tournament: Palm Classic | Round: 4 | Opponent: Immaculate Heart BC | Judge: Joseph Barquin 3 Framework Permissibility and presumption negate 1 Obligations – the resolution indicates the affirmative has to prove an obligation, and permissibility would deny the existence of an obligation 2 Falsity – statements are more often false than true because proving one part of the statement false disproves the entire statement. Presuming all statements are true creates contradictions, which would be ethically bankrupt. 3 Negating is harder – A Aff gets the first and last speeches, which control the direction of the debate B Affirmatives can strategically uplayer in the 1ar, giving them a 7-6 time skew advantage, splitting the 2nr C They get infinite prep time The meta-ethic is procedural moral realism. This entails that moral facts stem from procedures, while substantive realism holds that moral truths exist independently of such procedures, in the empirical world. Prefer procedural realism – 1 Collapses – the only way to verify whether something is a moral fact is by using procedures to warrant it. 2 Uncertainty – our experiences are inaccessible to others, which allows people to say they don't experience the same things; however, a priori principles are universally applied to all agents. 3 Is/Ought Gap – we can only perceive what is, not what ought to be. It's impossible to derive an ought statement from descriptive facts about the world, necessitating a priori premises. Practical Reason is that procedure. To ask why we ought to be reasoners concedes its authority, since the asking itself uses reason – anything else is nonbinding and arbitrary. That hijacks their framework, since you need reason to evaluate any relevant consequences. Moral law must be universal — our judgements can't only apply to ourselves any more than 2+2=4 can be true only for me – any non-universalizable norm justifies someone's ability to impede on your ends. Thus, the standard is consistency with the categorical imperative.
Prefer – 1 Performativity—freedom is the key to the process of justification of arguments. Willing that we ought abide by their ethical theory presupposes that we own ourselves in the first place. 2 All other frameworks collapse—non-Kantian theories source obligations in extrinsically good objects, but that presupposes the goodness of the rational will. 3 TJFs and they outweigh since it precludes engagement on the framework layer – prefer for Resource disparities- Our framework ensures big squads don’t have a comparative advantage since debates become about quality of arguments rather than quantity - their model crowds out small schools because they have to prep for every unique advantage under each aff, every counterplan, and every disad with carded responses to each of them Offense 1 A model of freedom mandates a market-oriented approach to space—that negates Broker 20 (Tyler, work has been published in the Gonzaga Law Review, the Albany Law Review and the University of Memphis Law Review.) “Space Law Can Only Be Libertarian Minded,” Above the Law, 1-14-20, https://abovethelaw.com/2020/01/space-law-can-only-be-libertarian-minded/ TDI The impact on human daily life from a transition to the virtually unlimited resource reality of space cannot be overstated. However, when it comes to the law, a minimalist, dare I say libertarian, approach appears as the only applicable system. 
In the words of NASA, “2020 promises to be a big year for space exploration.” Yet, as Rand Simberg points out in Reason magazine, it is actually private American investment that is currently moving space exploration to “a pace unseen since the 1960s.” According to Simberg, due to this increase in private investment “We are now on the verge of getting affordable private access to orbit for large masses of payload and people.” The impact of that type of affordable travel into space might sound sensational to some, but in reality the benefits that space can offer are far greater than any benefit currently attributed to any major policy proposal being discussed at the national level. The sheer amount of resources available within our current reach/capabilities simply speaks for itself. However, although those new realities will, as Simberg says, “bring to the fore a lot of ideological issues that up to now were just theoretical,” I believe it will also eliminate many economic and legal distinctions we currently utilize today. For example, the sheer number of resources we can already obtain in space means that in the rapidly near future, the distinction between a nonpublic good or a public good will be rendered meaningless. In other words, because the resources available within our solar system exist in such quantities, all goods will become nonrivalrous in their consumption and nonexcludable in their distribution. This would mean government engagement in the public provision of a nonpublic good, even at the trivial level, or what Kevin Williamson defines as socialism, is rendered meaningless or impossible. In fact, in space, I fail to see how any government could even try to legally compel collectivism in the way Simberg fears. Similar to many economic distinctions, however, it appears that many laws, both the good and the bad, will also be rendered meaningless as soon as we begin to utilize the resources within our solar system. 
For example, if every human being is given access to the resources that allows them to replicate anything anyone else has, or replace anything “taken” from them instantly, what would be the point of theft laws? If you had virtually infinite space in which you can build what we would now call luxurious livable quarters, all without exploiting human labor or fragile Earth ecosystems when you do it, what sense would most property, employment, or commercial law make? Again, this is not a pipe dream, no matter how much our population grows for the next several millennia, the amount of resources within our solar system can sustain such an existence for every human being. Rather than panicking about the future, we should try embracing it, or at least meaningfully preparing for it. Currently, the Outer Space Treaty, or as some call it “the Magna Carta of Space,” is silent on the issue of whether private individuals or corporate entities can own territory in space. Regardless of whether governments allow it, however, private citizens are currently obtaining the ability to travel there, and if human history is any indicator, private homesteading will follow, flag or no flag. We Americans know this is how a Wild West starts, where most regulation becomes the impractical pipe dream. But again, this would be a Wild West where the exploitation of human labor and fragile Earth ecosystem makes no economic sense, where every single human can be granted access to resources that even the wealthiest among us now would envy, and where innovation and imagination become the only things we would recognize as currency. Only a libertarian-type system, that guarantees basic individual rights to life, liberty, and the pursuit of happiness could be valued and therefore human fidelity to a set of laws made possible, in such an existence.
2 Banning private space appropriation inhibits the sale and use of spacecraft and fuel- that’s a form of restricting the free economic choices of individuals Richman 12, Sheldon. “The free market doesn’t need government regulation.” Reason, August 5, 2012. AHS RG Order grows from market forces. But where do market forces come from? They are the result of human action. Individuals select ends and act to achieve them by adopting suitable means. Since means are scarce and ends are abundant, individuals economize in order to accomplish more rather than less. And they always seek to exchange lower values for higher values (as they see them) and never the other way around. In a world of scarcity, tradeoffs are unavoidable, so one aims to trade up rather than down. (One’s trading partner does the same.) The result of this, along with other features of human action, and the world at large is what we call market forces. But really, it is just men and women acting rationally in the world.
2/13/22
JF - NC - Nibble
Tournament: Lex | Round: 2 | Opponent: Bridgeland PT | Judge: Brett Cryan I negate – 1 the is “denoting a disease or affliction” but appropriation isn’t a disease 2 of means “expressing an age” but the rez doesn’t delineate a length of time 3 private describes “belonging to or for the use of one particular person or group of people only” and an entity is a thing with “independent, separate, or self-contained existence.”
1/18/22
JF - T - Appropriation
Tournament: Palm Classic | Round: Triples | Opponent: Catonsville AT | Judge: James Stuckert, Truman Le, John Boals 2 Interpretation: Appropriation means use, exploitation, or occupation that is permanent and to the exclusion of others Babcock 19 Professor of Law, Georgetown University Law Center. Babcock, Hope M. "The Public Trust Doctrine, Outer Space, and the Global Commons: Time to Call Home ET." Syracuse L. Rev. 69 (2019): 191. Article II is one of those succeeding provisions that curtails “the freedom of use outlined in Article I by declaring that outer space, including the moon and other celestial bodies, is not subject to national appropriation.”147 It flatly prohibits national appropriation of any celestial body in outer space “by means of use or occupation, or by any other means.”148 However, “many types of ‘use’ or ‘exploitation’. . . are inconceivable without appropriation of some degree at least of any materials taken,” like ore or water.149 If this view of Article II’s prohibitory language is correct, then “it is not at all farfetched to say that the OST actually installs a blanket prohibition on many beneficial forms of development.”150 However, the OST only prohibits an appropriation that constitutes a “long-term use and permanent occupation, to the exclusion of all others.”151 Violation: Constellations do not appropriate – reject non-legal interpretations Johnson 20 Chris Johnson is the Space Law Advisor for Secure World Foundation and has nine years of professional experience in international space law and policy. He has authored and co-authored publications on international space law, national space legislation, international cooperation in space, human-robotic cooperative space exploration, and on the societal benefits of space technology for Africa. "The Legal Status of MegaLEO Constellations and Concerns About Appropriation of Large Swaths of Earth Orbit." 
https://swfound.org/media/206951/johnson2020_referenceworkentry_thelegalstatusofmegaleoconstel.pdf No, This Is Not Impermissible Appropriation An opposite conclusion can also be reasonably arrived at when approached along the following lines. The counter argument would assert that the deployment and operation of these global constellations, such as SpaceX’s Starlink, OneWeb, Kepler, etc., are aligned with and in full conformity with the laws applicable to outer space. These constellations are merely the exercise and enjoyment of the freedom of exploration and use of outer space and do not constitute any impermissible appropriation of the orbits that they transit. Freedom of Access and Use Permits Constellations Rather than being a violation of other’s rights to access and explore outer space, the deployment of these constellations is more correctly viewed as the exercise and enjoyment of the right to access and use outer space. Article I of the Outer Space Treaty establishes a right to access and use space without discrimination. Not allowing an actor to deploy spacecraft, regardless of their number or destination, would be infringing with the exercise of their freedom. It would be discriminatory. Additionally, actors do not need permission from any other State, or group of States, to access and explore outer space. Aligned with the Intentions of the Outer Space Treaty This use of outer space by constellations in LEO, while not explicitly mentioned by the drafters of the Outer Space Treaty or other space law, actually is the fulfillment of their visions for the use of outer space. The preamble to the Outer Space Treaty (which contains the subject matter and purpose of the treaty and can be used for interpreting the operative articles of the treaty) speaks of the aspirations of humanity in exploring and using outer space. 
It is easy to see constellations that will provide Internet access to the world as fulfilling the visions of the drafters: The States Parties to this Treaty, Inspired by the great prospects opening up before mankind as a result of man’s entry into outer space, Recognizing the common interest of all mankind in the progress of the exploration and use of outer space for peaceful purposes, Believing that the exploration and use of outer space should be carried on for the benefit of all peoples irrespective of the degree of their economic or scientific development, Desiring to contribute to broad international cooperation in the scientific as well as the legal aspects of the exploration and use of outer space for peaceful purposes, Believing that such cooperation will contribute to the development of mutual understanding and to the strengthening of friendly relations between States and peoples, As such, subsequent article of the Outer Space Treaty should be read in a permissive light, as permitting constellations, rather than a restrictive light which only sees potential negative aspects of constellations. Due Regard and Harmful Contamination Will be Addressed Operators in LEO are well aware of the challenges to space sustainability that their constellations will pose and will be taking efforts to mitigate the creation of debris. OneWeb is keenly focused on space sustainability and has even argued that the current norm, whereby spacecraft are not in space for longer than 25 years and are deorbited from lower orbits at the end of their lifetime (aka post mission disposal), is not sufficient to keep outer space clean and that shorter lifespan limits should be imposed on operators, especially operators in LEO, and operators of small satellites. Additionally, these systems will be able to cooperate with emerging space safety and space traffic management plans and can operate in ways that do not restrict or impinge on other users of the space domain. 
Because due regard is therefore displayed for the space domain, and to the interests of others, these constellations do not prejudice or infringe upon the freedoms of use and exploration of the space domain and are therefore not occupation, or possession, much less appropriation. This Does Not Constitute Possession, or Ownership, or Occupation The use of LEO by satellite constellations is substantially similar to the use of GSO, and therefore permissible. In each region, individual actors are given permission - either from a national administrator or from an international governing body (the ITU) via a national administrator – to use precoordinated subsections of space. In a way that is overwhelmingly similar to the use of orbital slots in GSO, the placement of spacecraft into orbits in LEO or higher orbits does not constitute possession, ownership, or occupation of those orbits. This is because States (and their companies) have been occupying orbital slots in GSO for decades, and these uses of GSO have never been accused of “appropriating” GSO. The users have never claimed to be appropriating GSO, and their exercising of rights to use GSO is respected by other actors in the space domain. This is the same situation for other orbits, including LEO and other non-Geostationary orbits. And while GSO locations are relatively stable (subject to space weather and other perturbations, and require stationkeeping), spacecraft in LEO are actually moving through space and are not stationary, so it is even more difficult to see this use by constellations as occupation, much less appropriation. Moreover, Space Situational Awareness (SSA) and Space Traffic Management (STM) will allow other users to use these orbits, and nothing about the use of any one user necessarily precludes others. Lastly, there is no intention by operators of constellations to exclusively occupy, much less possess or appropriate, these orbits. 
Would not the appropriation of outer space be an intentional, volitional act? No such intention can be found in the operators of global constellations. 1 Precision – The resolution is the only predictable stasis point for dividing ground—any deviation justifies the aff arbitrarily jettisoning words in the resolution at their whim, which decks negative ground and preparation because the aff is no longer bounded by the resolution. 2 Predictable limits—including satellite slots causes a huge explosion of the topic, since they get permutations of different satellite systems – LEO, MEO, and HEO – plus different companies, plus sizes of constellations, et cetera. Letting temporary occupation count as appropriation is a limits disaster - any aff about a single spaceship, satellite, or weapon would be T because they temporarily occupy space
2/13/22
JF - T - Appropriation v2
Tournament: Palm Classic | Round: Doubles | Opponent: Sequoia AS | Judge: Annabelle Long, Spencer Paul, Jonathan Meza 1 Interpretation: Appropriation means controlling property rights in the context of space law. - The definition is from Black’s Law Dictionary Su 17 Jinyuan Su, Professor and Assistant Dean at Xi'an Jiaotong University School of Law, China, “Legality of unilateral exploitation of space resources under international law,” 2017, International and Comparative Law Quarterly, Vol. 66, Issue 4, pp. 991-1008, https://doi.org/10.1017/S0020589317000367, EA The Outer Space Treaty does not prohibit expressis verbis the extraction of space resources. However, there exists a possibility that the recognition of property rights by a State, which is a party to the Outer Space Treaty, over resources extracted in outer space may conflict with its international obligations under Article II of the treaty, which proscribes the national appropriation of outer space 'by claim of sovereignty, by means of use or occupation, or by any other means'.26 The term 'appropriation' means 'the exercise of control over property; a taking of possession'.27 Violation: Satellite positioning is not appropriation proper – violating the principle alone is not sufficient. Matignon 19 Louis de Gouyon Matignon, PhD in space law from Georgetown University, “ORBITAL SLOTS AND SPACE CONGESTION,” 06/03/19, Space Legal Issues, https://www.spacelegalissues.com/orbital-slots-and-space-congestion/, EA Near-Earth space is formed of different orbital layers. Terrestrial orbits are limited common resources and inherently repugnant to any appropriation: they are not property in the sense of law. Orbits and frequencies are res communis (a Latin term derived from Roman law that preceded today’s concepts of the commons and common heritage of mankind; it has relevance in international law and common law). 
It’s the first-come, first-served principle that applies to orbital positioning, which without any formal acquisition of sovereignty, records a promptness behaviour to which it grants an exclusive grabbing effect of the space concerned. Geostationary orbit is a limited but permanent resource: this de facto appropriation by the first-comers – the developed countries – of the orbit and the frequencies is protected by Space Law and the International Telecommunications Law. The challenge by developing countries of grabbing these resources is therefore unjustified on the basis of existing law. Denying new entrants geostationary-access or making access more difficult does not constitute appropriation; it simply results from the traditional system of distribution of access rights. The practice of developed States is based on free access and priority given to the first satellites placed in geostationary orbit. Vote neg – 1 – Limits – they allow banning practices that don’t constitute taking property rights but do preclude other actors from using the same space – that opens the door to almost any practice, like mining, because those practices stop other actors from doing the same thing. 2 – Ground – basing appropriation off use instead of ownership kills our DA links based off property rights – think mining good and other private sector good turns. Key on a topic with zero neg generics. Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate DTD – it’s key to norm set and deter future abuse Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm No RVIs – A – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly B – it’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments
2/18/22
JF - T - FW
Tournament: Lex | Round: 6 | Opponent: Summit MR | Judge: Neda Bahrani Interpretation: Affs may only generate offense from an action that makes the appropriation of outer space by private entities illegal.
Resolved means a policy Words and Phrases 64 Words and Phrases Permanent Edition. “Resolved”. 1964. Definition of the word “resolve,” given by Webster is “to express an opinion or determination by resolution or vote; as ‘it was resolved by the legislature;” It is of similar force to the word “enact,” which is defined by Bouvier as meaning “to establish by law”.
Outer space means anything above Earth’s Karman line Dunnett 21 (Oliver Tristan, lecturer in geography at Queen’s University Belfast). Earth, Cosmos and Culture: Geographies of Outer Space in Britain, 1900–2020 (1st ed.). Routledge. 2021. https://doi.org/10.4324/9780815356301 EE In such ways, this book argues that Britain became a home to rich discourses of outer space, both feeding from and contributing to iconic achievements in space exploration, while also embracing the cosmos in imaginative and philosophical ways.2 INSERT FOOTNOTE 2 2 This book primarily uses the term ‘outer space’ to describe the realm beyond the Earth’s atmosphere, conventionally accepted as beginning at the Kármán line of 100km above sea level. Other terms such as ‘interplanetary space’, ‘interstellar space’, ‘cosmos’, and ‘the heavens’ are used in specific contexts. END FOOTNOTE 2 Cognisant of this spatial context, a central aim is to demonstrate how contemporary geographical enquiry can provide specific and valuable perspectives from which to understand outer space. This is an argument that was initiated by Denis Cosgrove, and his critique of Alexander von Humboldt’s seminal work Cosmos helped to demonstrate geography’s special relevance to thinking about outer space.3 The key thematic areas which provide the interface for this book’s research, therefore, are the cultural, political and scientific understandings of outer space; the context of the United Kingdom since the start of the last century; and the geographical underpinnings of their relationship. “Appropriation” means to take as property – prefer our definition since it’s contextual to space Leon 18 (Amanda M., Associate, Caplin and Drysdale, JD UVA Law) "Mining for Meaning: An Examination of the Legality of Property Rights in Space Resources." Virginia Law Review, vol. 104, no. 3, May 2018, p. 497-547. HeinOnline. Appropriation. The term "appropriation" also remains ambiguous. 
Webster's defines the verb "appropriate" as "to take to oneself in exclusion of others; to claim or use as by an exclusive or pre-eminent right; as, let no man appropriate a common benefit."16 5 Similarly, Black's Law Dictionary describes "appropriate" as an act "to make a thing one's own; to make a thing the subject of property; to exercise dominion over an object to the extent, and for the purpose, of making it subserve one's own proper use or pleasure."166 Oftentimes, appropriation refers to the setting aside of government funds, the taking of land for public purposes, or a tort of wrongfully taking another's property as one's own. The term appropriation is often used not only with respect to real property but also with water. According to U.S. case law, a person completes an appropriation of water by diversion of the water and an application of the water to beneficial use.167 This common use of the term "appropriation" with respect to water illustrates two key points: (1) the term applies to natural resources-e.g., water or minerals-not just real property, and (2) mining space resources and putting them to beneficial use-e.g., selling or manufacturing the mined resources could reasonably be interpreted as an "appropriation" of outer space. While the ordinary meaning of "appropriation" reasonably includes the taking of natural resources as well as land, whether the drafters and parties to the OST envisioned such a broad meaning of the term remains difficult to determine with any certainty. The prohibition against appropriation "by any other means" supports such a reading, though, by expanding the prohibition to other types not explicitly described.168 As illustrated by this analysis, considerable ambiguity remains after this ordinary-meaning analysis and thus, the question of Treaty obligations and property rights remains unresolved. In order to resolve these ambiguities, an analysis of preparatory materials, historical context, and state practice follows. 2. 
Preparatory Materials A review of meeting reports of the Committee on the Peaceful Uses of Outer Space and its Legal Sub-Committee regarding the Treaty reveals little to clear up the ambiguities of Articles I and II of the OST. In fact, the reports indicate that, despite several negotiating states expressing concern about the lack of clarity with respect to the meaning of "use" and the scope of the non-appropriation principle, no meaningful discussion occurred and no consensus was reached.16 9 Some commentators still conclude that the preparatory work does in fact confirm the drafters' intent for "use" to include exploitation. 170 These commentators do admit, however, that discussions of the term "exploitation" supporting their conclusion focused on remote sensing and communications satellites rather than on resource extraction.17 1 Further skepticism about such an intent for "use" to include "exploitation" also arises given the uncertainty amongst negotiating states about the meaning of these terms. A mere few months before the Treaty opened for signature in January 1967, negotiators were still asking questions about the meaning of "use" during the last few Legal Sub-Committee meetings. For example, in July 1966, the representative of France inquired: "Did the latter term "use" imply use for exploration purposes, such as the launching of satellites, or did it mean use in the sense of exploitation, which would involve far more complex issues?" 
172 The representative noted that while some activities such as extraction of minerals were difficult to imagine presently, "it was important for all States, and not only those engaged in space exploration, to know exactly what was meant by the term 'use.'173 In the same meeting, the representative from the USSR offered an interesting response to the question posed by the representative of France: Adequate clarification was to be found in article II of the USSR draft, which specified that outer space and celestial bodies should not be subject to national appropriation by means of use or occupation, or by any other means. In other words no human activity on the moon or any other celestial body could be taken as justification for national appropriation. 174 This response implies that Article II acts as a qualification on Article I's broad provision for free exploration and use of outer space by all. Activity such as resource extraction would be viewed as national appropriation and such activity cannot be justified given Article II's prohibition, not even by falling within the ordinary meaning of "use." Despite this clarification, uncertainty appears to have remained, as lingering concerns were communicated in subsequent meetings by several other states, including Australia, Austria, and France."' Nevertheless, the committee put the Treaty in front of the General Assembly two months later without final resolution of the ambiguities regarding property rights arising from Articles I and II176 The preparatory materials ultimately fail to fully clarify the ambiguities of the meanings of "use" and "appropriation." The statement of the representative of the Soviet Union, one of the two main drafting parties, does, however, help push back on the interpretation of some academics that the nonappropriation principle fails to overcome the presumption of freedom of use.7 3. 
Historical Context Two interrelated, major historical events cannot be ignored when considering the meaning of the OST: (1) the Cold War and (2) the Space Race. The success of Sputnik I in 1957 showed space travel and exploration no longer to be a dream, but a reality. While exciting, this news also brought fear in light of the world's fragile balance of power and tensions between the United States and the Soviet Union.179 What if the Soviet Union managed to launch a nuclear weapon into space? What if the United States greedily claimed the Moon as the fifty-first state? To many, the combination of the Cold War and Space Race made the late 1950s and the 1960s a perilous time. When viewed as a response to this perilous era, the OST begins to look much more like a nuclear arms treaty and an attempt to ease Cold War tensions than a treaty concerned with the issue of property rights in space. The Treaty's emphasis on "peaceful purposes" supports this contextual interpretation.182 On the one hand, as many suggest, this context leads to the conclusion that the vague nonappropriation principle of Article II does not prevent private property rights in space resources and the presumption of broad "use" prevails.183 Private property rights were simply not a concern of the Treaty drafters and therefore, the Treaty does not address-nor prohibit-such claims. On the other hand, the context surrounding the treaty's drafting does not necessarily lead to this conclusion. In fact, the emphasis on "peaceful purposes" and reducing international tension might instead suggest a stricter reading of Articles I and II. If things were so unstable and tense on Earth, the drafters may have instead intended Article II as a qualification on the general right to explore and use outer space in Article I, recognizing the simple fact that disputes over property, both land and minerals, have sparked some of history's bloodiest conflicts. 
The Antarctic treaty experience evidences Cold War concern over potential resource rights disputes. Leading up to the finalization of the Antarctic Treaty of 1959,184 seven nations had already made official territorial claims over varying portions of the frozen landscape in hopes of laying claim to the plethora of resources thought to be located within the subsurface."' Although the Treaty itself did not directly address rights to mineral resources in the Antarctic,186 the treaty is interpreted to have frozen these claims in the interest of "freedom of scientific investigation in Antarctica and cooperation toward that end.""' In a manner notably similar to the terms of Articles XI and XII of the OST, the Treaty promotes scientific exploration by encouraging information sharing of scientific program plans, personnel, and observations' and inspection of stations on a reciprocal basis.189 This Treaty along with several later treaties and protocols constitute the "Antarctic Treaty System," which as a whole manages the governance of Antarctica.1 9 0 In 1991, the Protocol on Environmental Protection to the Antarctic Treaty 91 ("Madrid Protocol") settled the question of property rights for the fifty years following the Protocol's entry into force. 192 The Madrid Protocol provides for "the comprehensive protection of the Antarctic environment ... and designates Antarctica as a natural reserve, devoted to peace and science."193 Article 7 explicitly-and simplystates "any activity relating to mineral resources, other than scientific research, shall be prohibited."1 94 Though Article 25 allows for the creation of a binding legal regime to determine whether and under what conditions mineral resource activity be allowed, no such international legal regime has been created to date. 195 The ban on mineral resource exploitation may only be amended by unanimous consent of the parties. 
19 6 The United States signed and ratified both the Antarctic Treaty of 1959 and the Madrid Protocol. 197 The freezing of territorial claims in the Antarctic 98 by the Antarctica Treaty of 1959199 illustrates the existence of true concern over potential resource dispute and conflict during the Cold War, in addition to the major concerns posed by nuclear weapons.2 00 The drafting states also recognized the potential for conflict over property in outer space and drew on the language of the Antarctic Treaty of 1959 to draft the OST.2 01 Given these driving concerns, Article II could be reasonably read as qualifying Article I's general rule. Under this reading, Article II serves the same qualifying purpose as Article IV regarding military and nuclear weapon use in space. Some might push back on this interpretation by claiming that the drafters could have used language such as that in the Madrid Protocol to explicitly prohibit mining in space. However, this argument is flawed. The Madrid Protocol was not written until well after both the original Antarctic Treaty of 1959 and the OST. Furthermore, the timing of the Madrid Protocol perhaps provides further evidence that resources in space are not to be harvested until a subsequent agreement regarding rights over them can be agreed upon internationally. While the historical context does leave some ambiguity as to whether the OST permits property rights over space resources, the Antarctic experience provides a compelling analogy and suggests that the OST does not allow for property rights in space resources. 4. State Practice In its Frequently Asked Questions released about the SREU Act, the House Committee on Science, Space, and Technology forcefully asserted that the Act does not violate international law.20 2 in fact, according to the committee, the Act's provision of property rights "is affirmed by State practice and by the U.S. 
State Department in congressional testimony and written correspondence."2 03 Proponents of this view base their beliefs on several examples. One, "no serious objection" arose to the United States and the Soviet Union bringing samples of rocks and other materials from the Moon back by manned and robotic missions in the late 1960s, nor to Japan successfully collecting a small asteroid sample in 2010.204 Two, a practice of respecting ownership over such retrieved samples and a terrestrial market for such items exists, as illustrated by the fact that no one doubts that the American Museum of Natural History "owns" three asteroids found in Greenland by arctic explorer Robert E. Peary that are now part of the museum's Arthur Ross Hall of Meteorites. 205 Three, Congressmen also cite to a federal district court case, United States v. One Lucite Ball Containing Lunar Material,2 06 to illustrate state practice in favor of ownership over spaces resources. The case involved an Apollo lunar sample gifted to Honduras by the United States. The sample was stolen and sold to an individual in the United States.2 07 When caught during a sting operation intended to uncover illegal sales of imposter samples, the buyer was forced to forfeit the lunar sample after the court concluded the moon rocks had in fact been stolen, basing its decision in part on its recognition of Honduras having national property ownership over the sample. 208 These examples appear overwhelming, but they are not actually examples of activities of the same "form and content" that the SREU Act approves. 2 09 These examples all involve collection of samples in limited amounts and for scientific purposes, while the SREU Act approves large-scale collection and for commercial exploitation. The OST explicitly emphasizes a "freedom of scientific investigation in outer space," and the collection of scientific samples reasonably fall under this enumerated right. 
210 Alternatively, the OST says nothing with respect to commercial exploitation, only discussing "benefits" of space in terms of sharing those benefits with all mankind.211 Furthermore, the American Museum of Natural History and Lucite Ball examples relied upon are misleading because they suggest that types of celestial artifacts found or gifted on Earth are subject to the same legal regime as resources mined or collected in space, which may not necessarily be true. The analogy of ownership over fish extracted from the high seas is also often cited in response to this pushback. Much like outer space, the high seas are open to all participants, yet the law of the seas still recognizes the right to title over fish extracted on the high seas by fishermen, who can then sell the fish.212 But again, this analogy has limited import because both the 1958 Geneva Convention on the High Seas and the United Nations Convention on the Law of the Sea ("UNCLOS") explicitly recognize the right to fish, while the OST grants no such right to exploit space resources.213 Furthermore, state practice relevant to the question of property rights under the OST goes beyond these examples and analogies of ownership of resources taken from commons. State practice regarding property rights in general must be considered. For example, Professor Fabio Tronchetti disagrees with the oft-cited notion that state practice affirms the SREU Act.214 According to the professor, "under international law, property rights require a superior authority, a State, entitled to attribute and enforce them."215 By granting property rights in the SREU Act, the United States impliedly claims that it has the authority to confer property rights over space resources, an authority traditionally reserved for the owner of a resource. This notion clashes with the nonappropriation principles of the OST. 
Though there is no consensus regarding whether the nonappropriation principle prohibits claims of sovereignty over resources, a strong consensus at least exists that the principle prohibits states from claiming sovereignty over real property in space.216 In some traditional systems of mineral ownership, however, ownership over resources ran with ownership over land.217 For example, under Roman law, property rights over subsurface minerals belonged to the landowner.218 Thus, if the United States cannot have title in space lands under the nonappropriation principle, it cannot have title to the space resources in those lands either. Without title to the resources, the United States cannot bestow such title to its citizens under traditional international property law; by claiming that it can bestow such title, the United States is abrogating Article II of the OST. One could also argue that the in situ resources the Act grants rights in are actually still part of the celestial bodies; thus, the resources are real property prior to their removal, and are off limits under the Treaty.219 Given the limited import of the cited examples of state practice (limited quantity and scientific versus large-scale and commercial), the traditional practice of property rights being conferred from a sovereign to a citizen becomes incredibly compelling and suggests the SREU Act may abrogate the United States' treaty obligations. A final piece of evidence, however, again inserts ambiguity into the interpretation: the sweeping rejection of the Moon Agreement and its limitations on property rights by the international community discussed supra Part III.A.2. On the one hand, the rejection may imply that the international community approved of property rights. On the other hand, however, there were other reasons for the sweeping rejection. For example, Professors Francis Lyall and Paul B. 
Larsen claim the "main area of controversy"220 actually surrounded the Agreement's proclamation of the Moon and celestial bodies and their natural resources as the "common heritage of mankind" in Article 11.1,221 rather than the Agreement's general property-right provisions. Many believed the invocation of the "common heritage of mankind" language would impart actual obligations upon parties to share extracted resources, whereas the "province of all mankind" and "for the benefit and interest of all" language of the OST did not.222 As with ordinary meaning, preparatory materials, and historical context, state practice leaves some ambiguities, and state interpretations should also be considered. 5. State Interpretations Much like the preparatory materials discussed supra Part IV.A.1, subsequent state interpretation of the OST fails to fully address the question of the legality of property rights in space resources. On the one hand, the Senate Committee on Foreign Relations found that the drafters intended Articles I, II, and III of the Treaty to be general in nature when reviewing the Treaty,223 which perhaps suggests Article II's nonappropriation principle does not qualify Article I's general right to use or act as an exception. Yet, the committee also found the Treaty to be in response to the "potential for international competition and conflict in outer space."224 To the committee, Articles I, II, and III stressed the importance of free scientific investigation, guaranteed free access to all areas of celestial bodies, and prohibited claims of sovereignty.225 Not only would property rights in natural resources potentially ignite and exacerbate conflict in space, but they also seemed somewhat incompatible with scientific investigation, free access, and the prohibition on sovereignty. 
During its hearing on the Treaty, the Senate Committee on Foreign Relations focused a majority of its discussion of Article I on whether or not the language "province of all mankind" imparted strict obligations, while devoting little to no time to the issue of the meaning of "use."226 Former Justice Arthur Goldberg, then U.S. ambassador to the United Nations, did note the goal of the article was to "not subject space to exclusive appropriation by any particular power."227 Nevertheless, this statement fails to resolve whether natural resources may be exploited, as such exploitation could be carried out in an inclusive manner. The committee's review of Article II consumes only eight lines of the hearing transcript, merely adding that the Article is complementary to Article I and that space cannot be claimed for the country (likely referring to land rather than resources).228 A different exchange between Ambassador Goldberg, Senator Lausche, and the Chairman leaves further ambiguity regarding the use of natural resources in space:
Mr. Goldberg: We wanted to establish our right to explore and use outer space.
Senator Lausche: Yes. That is, any one of the signatory nations shall have the right to the use of whatever might be found in one of the space bodies.
Mr. Goldberg: No, no. It doesn't mean that. It means that they shall be free on their own to explore outer space.
The Chairman: Or to use it.
Mr. Goldberg: To use it.
The Chairman: But not on an exclusive basis.
Mr. Goldberg: Everyone is free.229
At first, Ambassador Goldberg appears to have refuted the notion that a signatory could simply "use" anything found in one of the space bodies, such as a mineral, implying Senator Lausche's example exceeded the scope of Article I. He then went on to emphasize exploratory activities. But then, Ambassador Goldberg backtracked and reasserted the right to use without clarifying his initial qualification. 
This sense of ambiguity remains today despite Congress signing off on the SREU Act. While sponsors of the bill and statements from resource extraction companies emphasized the broad scope of the right to "use" outer space and state practice in support of the legality of property rights,230 several expert witnesses expressed genuine concern that obligations under the Treaty remain unclear and require additional analysis.231 B. Compatibility Employing the treaty interpretation tools of ordinary meaning, preparatory materials, historical context, state practice, and state interpretation offers many possible understandings of the obligations imparted by Articles I and II of the OST. For example, while the ordinary meaning of "use" could reasonably include the exploitation of materials, the meeting summaries of the Fifth Session of the U.N. Committee on the Peaceful Uses of Outer Space Legal Sub-Committee make clear that no consensus was ever reached regarding whether "use" includes large-scale exploitation of space resources, let alone fee-simple ownership and the ability to sell commercially. State practice dealing with extraterrestrial samples also sheds little light on the confusion, as the examples cited all deal instead with scientific samples of limited quantity. The international community's rejection of the Moon Agreement also fails to bring clarity. While on the one hand the rejection could be read as a rejection of the idea that the OST prohibits private property rights, it could also be read as a rejection of the common heritage of mankind doctrine. Finally, the prospect of private-venture space mining and extraterrestrial resource extraction remained far off and futuristic at the time of the Treaty's negotiation, making drawing legal conclusions about the legality of these revolutionary activities extremely difficult. 
Overall, however, the Treaty's structure and its purposes (preserving peace and avoiding international conflict in outer space) ultimately indicate that private property rights in space resources are prohibited by Article II's non-appropriation principle, at least until future international delegation determines otherwise (like in the Antarctic). The Treaty's structure confirms this interpretation. Article I lays down a general rule for activity in space. Subsequent articles of the Treaty then lay out more specific requirements of and qualifications to this general rule. Much like Article IV restricts the use of nuclear weapons in space, Article II restricts the use of space in ways that might result in potentially controversial property claims. Historically, claims to mineral rights have resulted in conflict just as contentious as that over sovereign lands. Treaty efforts to avoid conflicts in Antarctica and the high seas reflect similar sentiments. The Soviet Union's representative even hinted at this structural relationship between Articles I and II during Treaty negotiations.232 In light of the imminent need to ease Cold War tensions, the potential for conflict over property, and the final structure of the Treaty, this Note concludes that the large-scale extraction of space resources is incompatible with the non-appropriation principle of Article II of the OST.233 As a result, the United States' provision of property rights to its citizens to possess, own, transport, use, and sell space and asteroid resources extracted through the SREU Act contravenes its international obligations established by the OST. Private entity = majority nonstate Warners 20 (Bill, JD Candidate, May 2021, at UIC John Marshall Law School) "Patents 254 Miles up: Jurisdictional Issues Onboard the International Space Station." UIC Review of Intellectual Property Law, vol. 19, no. 4, 2020, p. 365-380. HeinOnline. 
To satisfy these three necessary requirements for a new patent regime, the ISS IGA must add an additional clause ("Clause 7") in Article 21 specifically establishing a patent regime for private nonstate third parties onboard the ISS. First, Clause 7 would define the term "private entity" as an individual, organization, or business which is primarily privately owned and/or managed by nonstate affiliates. Specifically defining the term "private entity" prevents confusion as to what entities qualify under the agreement and the difference between "public" and "private."99 This definition would also support the connection of Clause 1 in Article 21 to "Article 2 of the Convention Establishing the World Intellectual Property Organization."100 A succinct definition also alleviates international concerns that the changes to the ISS IGA push out Partner State influence.101 Some in the international community may still point out that Clause 7 still pushes towards a trend of outer space privatization. However, this argument fails to consider that private entities in outer space have operated in space almost as comprehensively as national organizations.102 Violation: They don't defend a private entity or the appropriation of something and are not doing a policy action – don't let them shift in the 1AR because cx proves they aren't topical
Vote neg: 1 Fairness – post facto topic adjustment and debates about scholarship breed reactionary generics and allow the aff to cement their infinite prep advantage. They can specialize in 1 area of literature for 4 years which gives them a huge edge over people switching topics every 2 months – this crushes clash because all neg prep is based on the rez as a stable stasis point and they create a structural disincentive to do research – we lose 90% of negative ground while the aff still gets the perm which makes being neg impossible. 2 SSD is good – it forces debaters to consider a controversial issue from multiple perspectives. Non-T affs allow individuals to establish their own metrics for what they want to debate, leading to ideological dogmatism. Even if they prove the topic is bad, our argument is that the process of preparing and defending proposals is an educational benefit of engaging it. 3 TVA solves – you can read an aff about how the colonization of space represents reproductive futurism since it is backed by motives to keep people alive. Disads to the TVA prove there's negative ground and that it's a contestable stasis point, and if their critique is incompatible with the topic, reading it on the neg solves and is better because it promotes switch-side debate. Winning pessimism doesn't answer T because only through the process of clash can they refine their defense of it—they need an explanation of why we switch sides and why there's a winner and loser under their model. D Fairness is an impact – 1 it's an intrinsic good – some level of competitive equity is necessary to sustain the activity – if it didn't exist, then there wouldn't be value to the game since judges could literally vote whatever way they wanted regardless of the competing arguments made 2 probability – your ballot can't solve their impacts but it can solve mine – debate can't alter subjectivity, but can rectify skews 3 internal link turns every impact – a limited topic promotes 
in-depth research and engagement which is necessary to access all of their education 4 comes before substance – deciding any other argument in this debate cannot be disentangled from our inability to prepare for it – any argument you think they’re winning is a link, not a reason to vote for them, since it’s just as likely that they’re winning it because we weren’t able to effectively prepare to defeat it. This means they don’t get to weigh the aff.
Reject the team—T is a question of models of debate and the damage to our strategy was already done. Competing interps—they have to proactively justify their model, and reasonability links to our offense. No RVIs or impact turns—it's their burden to prove they're topical. Beating back T doesn't prove their advocacy is good
1/16/22
JF - T - Nebel
Tournament: Palm Classic | Round: 1 | Opponent: Harker SY | Judge: Felicity Park 1 Interpretation: “Private entities” is a generic bare plural. The aff may not defend that a subset of nations ban the appropriation of outer space. Nebel 19. Jake Nebel is an assistant professor of philosophy at the University of Southern California and executive director of Victory Briefs. He writes a lot of this stuff lol – duh. “Genericity on the Standardized Tests Resolution.” Vbriefly. August 12, 2019. https://www.vbriefly.com/2019/08/12/genericity-on-the-standardized-tests-resolution/?fbclid=IwAR0hUkKdDzHWrNeqEVI7m59pwsnmqLl490n4uRLQTe7bWmWDO_avWCNzi14 TG Both distinctions are important. Generic resolutions can’t be affirmed by specifying particular instances. But, since generics tolerate exceptions, plan-inclusive counterplans (PICs) do not negate generic resolutions. Bare plurals are typically used to express generic generalizations. But there are two important things to keep in mind. First, generic generalizations are also often expressed via other means (e.g., definite singulars, indefinite singulars, and bare singulars). Second, and more importantly for present purposes, bare plurals can also be used to express existential generalizations. For example, “Birds are singing outside my window” is true just in case there are some birds singing outside my window; it doesn’t require birds in general to be singing outside my window. So, what about “colleges and universities,” “standardized tests,” and “undergraduate admissions decisions”? Are they generic or existential bare plurals? On other topics I have taken great pains to point out that their bare plurals are generic—because, well, they are. On this topic, though, I think the answer is a bit more nuanced. Let’s see why. “Colleges and universities” is a generic bare plural. I don’t think this claim should require any argument, when you think about it, but here are a few reasons. 
First, ask yourself, honestly, whether the following speech sounds good to you: “Eight colleges and universities—namely, those in the Ivy League—ought not consider standardized tests in undergraduate admissions decisions. Maybe other colleges and universities ought to consider them, but not the Ivies. Therefore, in the United States, colleges and universities ought not consider standardized tests in undergraduate admissions decisions.” That is obviously not a valid argument: the conclusion does not follow. Anyone who sincerely believes that it is a valid argument is, to be charitable, deeply confused. But the inference above would be good if “colleges and universities” in the resolution were existential. By way of contrast: “Eight birds are singing outside my window. Maybe lots of birds aren’t singing outside my window, but eight birds are. Therefore, birds are singing outside my window.” Since the bare plural “birds” in the conclusion gets an existential reading, the conclusion follows from the premise that eight birds are singing outside my window: “eight” entails “some.” If the resolution were existential with respect to “colleges and universities,” then the Ivy League argument above would be a valid inference. Since it’s not a valid inference, “colleges and universities” must be a generic bare plural. Second, “colleges and universities” fails the upward-entailment test for existential uses of bare plurals. Consider the sentence, “Lima beans are on my plate.” This sentence expresses an existential statement that is true just in case there are some lima beans on my plate. One test of this is that it entails the more general sentence, “Beans are on my plate.” Now consider the sentence, “Colleges and universities ought not consider the SAT.” (To isolate “colleges and universities,” I’ve eliminated the other bare plurals in the resolution; it cannot plausibly be generic in the isolated case but existential in the resolution.) 
This sentence does not entail the more general statement that educational institutions ought not consider the SAT. This shows that “colleges and universities” is generic, because it fails the upward-entailment test for existential bare plurals. Third, “colleges and universities” fails the adverb of quantification test for existential bare plurals. Consider the sentence, “Dogs are barking outside my window.” This sentence expresses an existential statement that is true just in case there are some dogs barking outside my window. One test of this appeals to the drastic change of meaning caused by inserting any adverb of quantification (e.g., always, sometimes, generally, often, seldom, never, ever). You cannot add any such adverb into the sentence without drastically changing its meaning. To apply this test to the resolution, let’s again isolate the bare plural subject: “Colleges and universities ought not consider the SAT.” Adding generally (“Colleges and universities generally ought not consider the SAT”) or ever (“Colleges and universities ought not ever consider the SAT”) results in comparatively minor changes of meaning. (Note that this test doesn’t require there to be no change of meaning and doesn’t have to work for every adverb of quantification.) This strongly suggests what we already know: that “colleges and universities” is generic rather than existential in the resolution. It applies to “private entities” – adding “generally” to the rez doesn’t substantially change its meaning and the rez doesn’t entail that all entities ought to ban private appropriation. Net benefits - 1 Limits – 195 recognized countries plus combinations and specific entities within countries makes negating impossible especially with no unifying disads against different policies, implementation and regulation procedures
2 Precision outweighs – it determines which interps your ballot can endorse by providing the only salient focal point for debates—if their interp is not premised on the text of the resolution, its benefits are irrelevant to the question of topicality since it fails to interpret the topic. Plan affs just lead to cheatier pics anyways since the neg has to default to generics. Fairness and education are voters – it's how judges evaluate rounds and why schools fund debate. DTD – it’s key to norm set and deter future abuse. Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm. No RVIs – A – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly. B – it's illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments
2/13/22
JF - T - Unjust
Tournament: CPS | Round: 4 | Opponent: Harker SY | Judge: Parth Shah 1 Interp – Unjust refers to a negative action – it means contrary. Black's Law Dictionary, No Date. "What is Unjust?" https://thelawdictionary.org/unjust/ Elmer Contrary to right and justice, or to the enjoyment of his rights by another, or to the standards of conduct furnished by the laws. Violation – The Aff is a positive action – it creates a new concept for Space i.e. the creation of a multilateral agreement Standards – 1 Limits – making the topic bi-directional explodes predictability – it means that Affs can both create non-existent property regimes in space AND decrease appropriation by private actors – makes the topic untenable. 2 Ground – wrecks Neg Generics – we can’t say appropriation good since the 1AC can create new views on Outer Space Property Rights that circumvent our Links 3 TVA – just defend that space appropriation is bad. Drop the Debater – it’s a fundamental baseline for debate-ability. Use Competing Interps – A Topicality is a yes/no question, you can’t be reasonably topical and B Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation. No RVIs - A Forces the 1NC to go all-in on Theory which kills substance education, B Encourages Baiting since the 1AC will purposely be abusive, and C Illogical – you shouldn’t win for not being abusive.
12/20/21
JF - Theory - Appropriation Spec
Tournament: Palm Classic | Round: 4 | Opponent: Immaculate Heart BC | Judge: Joseph Barquin 2 1 Interpretation – the Affirmative must specify what type of Private Actor Appropriation they affect. Appropriation is extremely vague – no legal precedent which means no normal means. Pershing 19, Abigail D. "Interpreting the Outer Space Treaty's Non-Appropriation Principle: Customary International Law from 1967 to Today." Yale J. Int'l L. 44 (2019): 149. (Robina Fellow at the European Court of Human Rights; Yale Law School) Elmer Though the Outer Space Treaty flatly prohibits national appropriation of space,150 it leaves unanswered many questions as to what actually counts as appropriation. As far back as 1969, scholars wondered about the implications of this article.151 While it is clear that a nation may not claim ownership of the moon, other questions are not so clear. Does the prohibition extend to collecting scientific samples?152 Does creating space debris count as appropriation by occupation? While the answers to these questions are most likely no, simply because of the difficulties that would be caused otherwise, there are some questions that are more difficult to answer, and more pressing. As commercial space flight becomes more and more prevalent,153 the question of whether private entities can appropriate property in space becomes very important. Whereas once it took a nation to get into space, it will soon take only a corporation, and scholars have pondered whether these entities will be able to claim property in space.154 Though this seems allowable, since the treaty only prohibits “national appropriation,”155 allowing such appropriation would lead to an absurd result. 
This is because the only value that lies in recognition of a claim is the ability to have that claim enforced.156 If a nation recognized and enforced such a claim, this enforcement would constitute state action.157 It would serve to exclude members of other nations and would thus serve as a form of national appropriation, even though the nation never attempted to directly appropriate the property.158 Furthermore, the Outer Space Treaty also requires that non-governmental entities must be authorized and monitored by the entities’ home countries to operate in space.159 Since a nation cannot authorize its citizens to act in contradiction to international law, a nation would not be allowed to license a private entity to appropriate property in space.160 While this nonappropriation principle is great for allowing free access to space, thereby encouraging research and development in the field, it makes it difficult to create or police a solution to the space debris problem. A viable solution will have to work without becoming an appropriation. There is, however, very little substantive law on what actually counts as appropriation in the context of space.161 So, the best way to see what is and is not allowed is to look both at the general international law regarding appropriations and to look at the past actions of space actors to see what has been allowed (or at least tolerated) and what has been prohibited or rejected. 2 Violation: they don’t 3 Standards a Shiftiness – vague plan wording wrecks Neg Ground since it’s impossible to know which DAs link or which CPs are competitive since there are different types of appropriation like Space Mining, Space Col, and Satellites – absent 1AC specification, the 1AR can squirrel out of links by saying they don’t affect a certain type of appropriation or they don’t reduce private appropriation enough to trigger the link. 
Independently vote Negative on Presumption since the Aff gets struck down for being void-for-vagueness since they don’t have an explanation of what is effected or remaining after the Plan. Singer 10 Bill Singer 9-13-2010 “Yo, Congress, Keep On Truckin' -- Can You Dig It?” http://www.brokeandbroker.com/index.php?a=blogandid=554 (Bill Singer is a lawyer who represents securities-industry firms, individual registered persons, Wall Street whistleblowers, and defrauded public investors. For over three decades, Singer has represented clients before the American Stock Exchange, the New York Stock Exchange, the Financial Industry Regulatory Authority (formerly the NASD), the United States Securities and Exchange Commission, and in criminal investigations brought by various federal, state, and local prosecutors. Before entering the private practice of law, Singer was employed in the Legal Department of Smith Barney, Harris Upham and Co.; as a regulatory attorney with both the American Stock Exchange and the NASD (now FINRA); and as a Legal Counsel to Integrated Resources Asset Management. Singer was formerly Chief Counsel to the Financial Industry Association; General Counsel to the NASD Dissidents' Grassroots Movement; and General Counsel to the Independent Broker-Dealer Association. He was registered for a number of years as a Series 7 and Series 63 stockbroker.)Elmer All of which makes it critical that the laws, rules, and regulations of Wall Street be promulgated in an intelligible manner that clearly sets forth what is allowed and what is prohibited. What a provision was meant to say should be what it says -- there shouldn't be any guessing or uncertainty. Unfortunately, so much of what has been proposed as financial regulatory reform, and so much of what will likely emanate from the various agencies and commissions that will soon embark upon rulemaking, is vague. If there is one thing that courts will not tolerate it is vagueness. 
The law books are filled with agreements, contracts, rules, regulations, and laws that have been struck down as void for vagueness. I fear that much of FINREG may be headed for the same garbage can. b Topic Education – nuanced debates about private property in Outer Space require specification since each form of appropriation has specific issues related to it, so generalization disincentivizes in-depth research. Topic Education is a voter since we only debate the topic for two months. Fairness is a voter since debate is a game, so it’s a jurisdictional question that comes before evaluating any other argument in the debate. Appropriation Spec isn’t regressive – it’s a core discussion central to the literature, we’ve read a card proving predictability, and it’s a floor for topic debates.
2/13/22
JF - Theory - Spec Outer Space
Tournament: Lex | Round: 2 | Opponent: Bridgeland PT | Judge: Brett Cryan Cites are broken - check open source
1/18/22
ND - CP - Consult ICJ
Tournament: Apple Valley | Round: 1 | Opponent: Evergreen Valley Independent SE | Judge: Jeong-Wan Choi Cites are broken - check open source
11/6/21
ND - CP - Suez Canal
Tournament: Princeton | Round: 1 | Opponent: Ardrey Kell RG | Judge: Tara Riggs 4 Counterplan Text – A just government ought to establish an unconditional right for workers to strike except for employees at the Suez Canal Lack of Strike Protection limits Suez Canal Strikes now. - Edited for Ableist Language Cunningham 14 Erin Cunningham 4-11-2014 "From Cairo to Suez, Egypt workers defy government with labor strikes" https://www.washingtonpost.com/world/middle_east/from-cairo-to-suez-egypt-workers-defy-government-with-labor-strikes/2014/04/11/674171d0-a713-494d-a76b-b33f5d4bc505_story.html (American University of Paris, BA in international and comparative politics)Elmer Military involvement In Suez province, a critical industrial center and strategic hub of global maritime trade, the military has been particularly involved in suppressing factory workers’ strikes, labor rights activists say. Those actions could indicate how a military-supported Sissi presidency would deal with the ongoing labor unrest. In August, military police stormed a worker sit-in at the privately owned Suez Steel Company. The workers accused management of failing to honor an agreement that granted them hazard pay, health care and a share of the company’s profits. Last month, a senior army commander in Suez helped eliminate the union leadership at a local factory belonging to international ceramics and porcelain producer Cleopatra Ceramics, according to workers. On March 3, Maj. Gen. Mohamed Shams summoned 23 of the union’s first- and second-tier leaders to the area’s army headquarters and threatened to have Egypt’s secret police investigate them for terrorism if they did not sign resignation letters and leave the company, Cleopatra workers and labor activists said. Factory owner Mohamed Aboul Enein — and former Mubarak heavyweight ally — had been locked in a years-long struggle with workers over a 2012 agreement for better salaries, overtime pay and food allowances. 
In a telephone interview, Enein said he was forced to sign the contract under duress, after employees barricaded him inside the factory overnight. “These people belong to the Muslim Brotherhood,” Enein said of the workers. The Egyptian government has banned the Muslim Brotherhood and declared the group a terrorist organization. But there is no evidence the union was acting on behalf of the Islamist group. “They always ask for money,” Enein said of the workers. “They are criminals.” But company labor leaders said Shams’s and Enein’s close advisers threatened to bring the leaders’ wives and children to the military base until they promised to leave. A spokesman for the Egyptian armed forces did not respond to requests for comment. “They kept saying that if we did not sign, we would go to prison,” said Ayman Nofal, one of the union members who was pushed out. The move has paralyzed worker organizing there, current employees said. “Like any entity in power, the military does not want strikes,” Ramadan said.
Suez Strikes eviscerate Global Trade – expanding the scope and length of Strikes through legal protection makes their impacts unthinkable.
Rohar 11 Evan Rohar 2-10-2011 "Suez Canal Strike Could Rattle Egypt’s Regime" https://labornotes.org/blogs/2011/02/suez-canal-strike-could-rattle-egypt%E2%80%99s-regime (Former Dock Worker at the Suez Canal)Elmer
Workers in the critical Suez Canal Authority have taken perhaps the most important action of all, launching a 6,000-strong sitdown strike that began Tuesday evening. While their demands center on pay and working conditions, the sheer force of their leverage has implications for the entire Egyptian uprising. The action appears to be a wildcat strike. The Suez Canal enables ships to travel from Asia to Europe by way of the Red and Mediterranean Seas, bypassing a journey around the Cape of Good Hope at the tip of Africa that would take more than a week. 
The canal handled 559 million tons of cargo in 2009, nearly three times the tonnage handled by the Port of Los Angeles, the busiest port in the U.S. The canal handles cargo amounting to about 8 percent of global maritime trade. It also transits up to 2.5 million barrels of crude oil each day, with oil-exporting countries using the canal to move their crude to market and to import refined petroleum products. The canal is of further importance for U.S. military interests; the U.S. navy counts on it for rapid deployment of vessels from the Mediterranean to the Persian Gulf. So far, most industry analysts insist that canal traffic is either minimally affected or unaffected by the strike actions and will remain so. Many, such as the Journal of Commerce and Logistics Week, quote Egyptian government officials who have an interest in keeping a lid on the effectiveness of any protest. If canal workers affect traffic, or if the strikes spread, enormous international pressure would come down on the Mubarak regime to get the cargo flowing again. Reports contradicting the official line are starting to appear. Egypt’s state-controlled newspaper Ahram Online reported on Tuesday that “disruptions to shipping movements, as well as disastrous economic losses, are expected if the strike continues.” By Wednesday, the article had been changed to state that no delays are expected. Regardless, the waterway’s strategic and economic significance amounts to a massive bargaining chip for the pro-democracy protesters if leveraged correctly, and its importance won’t end with the uprising. If democracy prevails and the people of Egypt take power, the new regime could use the canal for any number of political and economic purposes. Egyptian authorities are beefing up security around the canal, claiming that Hamas and Hezbollah plan to dispatch saboteurs to aid the rebellion. 
Maybe they're acting on real intelligence, or maybe they're afraid of what the workers could do for themselves and for their revolution. Collapse of Trade causes Hotspot Escalation – goes Nuclear. Kampf 20 David Kampf 6-16-2020 “How COVID-19 Could Increase the Risk of War” https://www.worldpoliticsreview.com/articles/28843/how-covid-19-could-increase-the-risk-of-war (Senior PhD Fellow at the Center for Strategic Studies at The Fletcher School)Elmer But that overlooked the ways in which the risk of interstate war was already rising before COVID-19 began to spread. Civil wars were becoming more numerous, lasting longer and attracting more outside involvement, with dangerous consequences for stability in many regions of the world. And the global dynamics most commonly cited to explain the falling incidence of interstate war—democracy, economic prosperity, international cooperation and others—were being upended. If the spread of democracy kept the peace, then its global decline is unnerving. If globalization and economic interdependence kept the peace, then a looming global depression and the rise of nationalism and protectionism are disconcerting. If regional and global institutions kept the peace, then their degradation is unsettling. If the balance of nuclear weapons kept the peace, then growing risks of proliferation are disquieting. And if America’s preeminent power kept the peace, then its relative decline is troubling. Now, the pandemic, or more specifically the world’s reaction to it, is revealing the extent to which the factors holding major wars in check are withering. The idea that war between nations is a relic of the past no longer seems so convincing. The Pessimists Strike Back More than any other individual, it was cognitive scientist Steven Pinker who popularized the idea that we are living in the most peaceful moment in human history. 
Starting with his 2011 bestseller, “The Better Angels of Our Nature: Why Violence Has Declined,” Pinker argued that the frequency, duration and lethality of wars between great powers have all decreased. In his 2019 book, “Enlightenment Now: The Case for Reason, Science, Humanism, and Progress,” he wrote that war “between the uniformed armies of two nation-states appears to be obsolescent. There have been no more than three in any year since 1945, none in most years since 1989, and none since the American-led invasion of Iraq in 2003.” Optimists like Pinker held that, rather than the world falling apart, as a quick glance at headline news might suggest, the opposite was true: Humanity was flourishing. More regions are characterized by peace; fewer mass killings are occurring; governance and the rule of law are improving; and people are richer, healthier, better educated and happier than ever before. In their book, “Clear and Present Safety: The World Has Never Been Better and Why That Matters to Americans,” Michael A. Cohen and Micah Zenko argued that the evidence is so overwhelming that it is difficult to argue against the idea that wars between great powers, and all other interstate wars, are becoming vanishingly rare. Even when wars do break out, they tend to be shorter and less deadly than they were in the past. John Mueller, a senior fellow at the Cato Institute, also reasoned that the idea of war, like slavery and dueling before it, was in terminal decline, while Joshua Goldstein, an international relations researcher at American University, credited the United Nations and the rise of peacekeeping operations for helping win the “war on war.” But in recent years, a range of critics have begun to poke holes in these arguments. Tanisha M. Fazal, an international relations professor at the University of Minnesota, contends that the decline in war is overstated. 
Major advances in medicine, speedier evacuations of wounded soldiers from the field of battle and better armor have made war less fatal—but not necessarily less frequent. Fazal and Paul Poast, who is at the University of Chicago, further assert that the notion of war between great powers as a thing of the past is based on the assumption that all such conflicts resemble World War I and II—both are historical anomalies—and overlooks the actual wars fought between great powers since 1945, from the Korean War and the Vietnam War to proxy wars from Afghanistan to Ukraine. Meanwhile, Bear F. Braumoeller, an Ohio State political science professor, analyzed the same historical data on conflicts used by Pinker, Mueller and Goldstein, and found no general downward trend in either the initiation or deadliness of warfare over the past two centuries. What’s more, Braumoeller contends that the so-called “long peace”—the 75 years that have passed without systemic war since World War II—is far from invulnerable, and that wars are just as likely to escalate now as they used to be. Just because a major interstate war hasn’t happened for a long time, doesn’t mean it never will again. In all probability, it will. And by focusing solely on interstate wars, the optimists miss half the story, at least. Wars between states have declined, but civil wars never disappeared—and these internal conflicts could easily escalate into regional or global wars. The number of conflicts in the world reached its highest point since World War II in 2016, with 53 state-based armed conflicts in 37 countries. All but two of these conflicts were considered civil wars. To make matters worse, new studies have shown that civil wars are becoming longer, deadlier and harder to conclusively end, and that these internal conflicts are not really internal. Civil wars harm the economies and stability of neighboring countries, since armed groups, refugees, illicit goods and diseases all spill over borders. 
Some 10 million refugees have fled to other countries since 2012. The countries that now host them are more likely to experience war, which means states with huge refugee populations like Lebanon, Jordan and Turkey face legitimate security challenges. Even after the threat of violence has diminished in refugees’ countries of origin, return migration can reignite conflicts, repeating the brutal cycle. Perhaps most importantly, recent research indicates that civil wars increase the risk of interstate war, in large part because they are attracting more and more outside involvement. In a 2008 paper, researchers Kristian Skrede Gleditsch, Idean Salehyan and Kenneth Schultz explained that, in addition to the spillover effects, two other factors in civil wars increase international tensions and could possibly provoke wider interstate wars: external interventions in support of rebel groups and regime attacks on insurgents across international borders. Immediately after the Cold War, none of the ongoing civil wars around the world were internationalized. According to the Uppsala Conflict Data Program, there were 12 full-fledged civil wars in 1991—in Afghanistan, Iraq, Peru, Sri Lanka, Sudan, and elsewhere—and foreign militaries were not active on the ground in any of them. Last year, by contrast, every single full-fledged civil war involved external military participants. This is due, in part, to the huge growth in U.S. military interventions abroad into civil conflicts, but it’s not only the Americans. All of today’s major wars are in essence proxy wars, pitting external rivals against one another. Conflicts in Syria, Yemen and Libya are best understood not as civil wars, but as international warzones, attracting meddlers including the United States, Russia, Saudi Arabia, Turkey, Iran, France and many others, which often intervene not to build peace, but to resolve conflicts in a way that is favorable to their own interests. 
These internationalized wars are more lethal, harder to resolve and possibly more likely to recur than civil wars that remain localized. It is not that difficult to imagine how these conflicts could spark wider international conflagrations. Wars, after all, can quickly spiral out of control. As Risks Increase, Deterrents Decline To make matters worse, most of the global trends that explained why interstate war had decreased in recent decades are now reversing. The theories that democracy, prosperity, cooperation and other factors kept the peace have been much debated—but if there was any truth to them, their reversals are likely to increase the chance of war, irrespective of how long the coronavirus pandemic lasts. Democracy is often considered a prophylactic for war. Fully democratic countries are less likely to experience civil war and rarely, if ever, go to war with other democracies—though, of course, they do still go to war against non-democracies. While this would be great news if democracy and pluralism were spreading, there have now been 14 consecutive years of global democratic decline, and there have been signs of additional authoritarian power grabs in countries like Hungary and Serbia during the pandemic. If democracy backslides far enough, internal conflicts and foreign aggression will become more likely. Other theories posit that economic bonds between countries have limited wars in recent decades. Dale Copeland, a professor of international relations at the University of Virginia, has argued that countries work to preserve ties when there are high expectations for future trade, but war becomes increasingly possible when trade is predicted to fall. If globalization brought peace, the recent wave of far-right nationalism and populism around the world may increase the chances of war, as tariffs and other trade barriers go up—mostly from the United States under President Donald Trump, who has launched trade wars with allies and adversaries alike. 
The coronavirus pandemic immediately elicited further calls to reduce dependence on other countries, with Trump using the opportunity to pressure U.S. companies to reconfigure their supply chains away from China. For its part, China made sure that it had the homemade supplies it needed to fight the virus before exporting extras, while countries like France and Germany barred the export of face masks, even to friendly nations. And widening economic inequalities, a consequence of the pandemic, are not likely to enhance support for free trade. This assault on open trade and globalization is just one aspect of a decaying liberal international order, which, its proponents argue, has largely helped to preserve peace between nations since World War II. But that old order is almost gone, and in all likelihood isn’t coming back. The U.N. Security Council appears increasingly fragmented and dysfunctional. Even before Trump, the world’s most powerful country ratified fewer treaties per year under the Obama administration than at any time since 1945. Trump’s presidency only harms multilateral cooperation further. He has backed out of the Paris Agreement on climate change, reneged on the Iran nuclear deal, picked fights with allies, questioned the value of NATO and defunded the World Health Organization in the middle of a global health crisis. Hyper-nationalism, rather than international collaboration, was the default response to the coronavirus outbreak in the U.S. and many other countries around the world. It’s hard to see the U.S. reluctance to lead as anything other than a sign of its inevitable, if slow, decline. The country’s institutionalized inequalities and systemic racism have been laid bare in recent months, and it no longer looks like a beacon for others to follow. The global balance of power is changing. China is both keen to assert a greater leadership role within traditionally Western-led institutions and to challenge the existing regional order in Asia. 
Between a rising China, revanchist Russia and new global actors, including non-state groups, we may be heading toward an increasingly multipolar or nonpolar world, which could prove destabilizing in its own right. Finally, the pacifying effect of nuclear weapons could be waning. While vast nuclear arsenals once compelled the United States and the Soviet Union to reach arms control agreements, old treaties are expiring and new talks are breaking down. Mistrust is growing, and the chance of an unwanted U.S.-Russia nuclear confrontation is arguably as high as it has been since the Cuban missile crisis. The theory of nuclear peace may no longer hold if more countries are tempted to obtain their own nuclear deterrent. Trump’s decision to abandon the Iran nuclear deal, for one thing, has only increased the chance that Tehran will acquire nuclear weapons. It’s almost easy to forget that, just a few short months ago, the United States and Iran were one miscalculation or dumb mistake away from waging all-out war. And despite Trump’s efforts to negotiate nuclear disarmament with Kim Jong Un’s regime in Pyongyang, it is wishful thinking to believe North Korea will give up its nuclear weapons. At this point, negotiators can only realistically try to ensure that North Korea’s nuclear menace doesn’t get even more potent. In other words, by turning inward, the United States is choosing to leave other countries to fend for themselves. The end result may be a less stable world with more nuclear actors. If leaders are smart, they will take seriously the warning signs exposed by this global emergency and work to reverse the drift toward war. If only one of these theories for peace were worsening, concerns would be easier to dismiss. But together, they are unsettling. While the world is not yet on the brink of World War III and no two countries are destined for war, the odds of avoiding future conflicts don’t look good. 
The pandemic is already degrading democracies, harming economies and curtailing international cooperation, and it also seems to be fostering internal instability within states. Rachel Brown, Heather Hurlburt and Alexandra Stark argue that the coronavirus could in fact sow more civil conflict. If this proves accurate, the increase in civil wars is likely to lead to more external meddling, and these next proxy wars could soon precipitate all-out international conflicts if outsiders aren’t careful. With the usual deterrents to conflict declining around the world, major wars could soon return.
12/4/21
ND - CP - Supertrees
Tournament: Princeton | Round: 1 | Opponent: Ardrey Kell RG | Judge: Tara Riggs
CP Text: A just government should establish a substantial incentive program for artificial tree carbon capture.
The US creating super trees is sufficient to solve international warming- super trees are distinct and better than any other geoengineering method Vince 12 Gaia Vince, BBC News, 4 October 2012, Sucking CO2 from the Skies With Artificial Trees, http://www.bbc.com/future/story/20121004-fake-trees-to-clean-the-skies TR Scientists are looking at ways to modulate the global temperature by removing some of this greenhouse gas from the air. If it works, it would be one of the few ways of geoengineering the planet with multiple benefits, beyond simply cooling the atmosphere. Every time we breathe out, we emit carbon dioxide just like all other metabolic life forms. Meanwhile, photosynthetic organisms like plants and algae take in carbon dioxide and emit oxygen. This balance has kept the planet at a comfortably warm average temperature of 14C (57F), compared with a chilly -18C (0F) if there were no carbon dioxide in the atmosphere. In the Anthropocene (the Age of Man), we have shifted this balance by releasing more carbon dioxide than plants can absorb. Since the industrial revolution, humans have been burning increasing amounts of fossil fuels, releasing stored carbon from millions of years ago. Eventually the atmosphere will reach a new balance at a hotter temperature as a result of the additional carbon dioxide, but getting there is going to be difficult. The carbon dioxide we are releasing is changing the climate, the wind and precipitation patterns, acidifying the oceans, warming the habitats for plants and animals, melting glaciers and ice sheets, increasing the frequency of wildfires and raising sea levels. And we are doing this at such a rapid pace that animals and plants may not have time to evolve to the new conditions. Humans won't have to rely on evolution, but we will have to spend hundreds of billions of dollars on adapting or moving our cities and other infrastructure, and finding ways to grow our food crops under these unfamiliar conditions. 
Even if we stopped burning fossil fuels today, there is enough carbon dioxide in the atmosphere - and it is such a persistent, lasting gas – that temperatures will continue to rise for a few hundred years. We won't stop emitting carbon dioxide today, of course, and it is now very likely that within the lifetime of people born today we will increase the temperature of the planet by at least 3C more than the average temperature before the industrial revolution. Seek and capture Hence, the idea of finding ways of removing carbon dioxide from the atmosphere. One way to do this is to grow plants that absorb a lot of carbon dioxide and store it. But although we can certainly improve tree-planting, we also need land to grow food for an increasing global population, so there's a limit to how much forestry we can fit on the planet. In recent years there have been attempts to remove the carbon dioxide from its source in power plants. Scrubber devices have been fitted to the chimneys in different pilot projects around the world so that the greenhouse gas produced during fossil fuel burning can be removed from the exhaust emissions. The carbon dioxide can then be cooled and pumped for storage in deep underground rock chambers, for example, replacing the fluid in saline aquifers. Another storage option is to use the collected gas to replace crude oil deposits, helping drilling companies to pump out oil from hard to reach places, in a process known as advanced oil recovery. Removing this pollution from power plants – called carbon capture and storage – is a useful way of preventing additional carbon dioxide from entering the atmosphere as we continue to burn fossil fuels. But what about the gas that is already out there? The problem with removing carbon dioxide from the atmosphere is that it’s present at such a low concentration. In a power plant chimney, for instance, carbon dioxide is present at concentrations of 4-12% within a relatively small amount of exhaust air. 
Removing the gas takes a lot of energy, so it is expensive, but it’s feasible. To extract the 0.04% of carbon dioxide in the atmosphere would require enormous volumes of air to be processed. As a result, most scientists have baulked at the idea. Fake plastic trees Klaus Lackner, director of the Lenfest Center for Sustainable Energy at Columbia University, has come up with a technique that he thinks could solve the problem. Lackner has designed an artificial tree that passively soaks up carbon dioxide from the air using “leaves” that are 1,000 times more efficient than true leaves that use photosynthesis. "We don't need to expose the leaves to sunlight for photosynthesis like a real tree does," Lackner explains. "So our leaves can be much more closely spaced and overlapped – even configured in a honeycomb formation to make them more efficient." The leaves look like sheets of papery plastic and are coated in a resin that contains sodium carbonate, which pulls carbon dioxide out of the air and stores it as a bicarbonate (baking soda) on the leaf. To remove the carbon dioxide, the leaves are rinsed in water vapour and can dry naturally in the wind, soaking up more carbon dioxide. Lackner calculates that his tree can remove one tonne of carbon dioxide a day. Ten million of these trees could remove 3.6 billion tonnes of carbon dioxide a year – equivalent to about 10% of our global annual carbon dioxide emissions. "Our total emissions could be removed with 100 million trees," he says, "whereas we would need 1,000 times that in real trees to have the same effect." If the trees were mass produced they would each initially cost around $20,000 (then falling as production takes over), just below the price of the average family car in the United States, he says, pointing out that 70 million cars are produced each year. And each would fit on a truck to be positioned at sites around the world. 
"The great thing about the atmosphere is it's a good mixer, so carbon dioxide produced in an American city can be removed in Oman," he says
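The Lackner numbers in the Vince 12 card above are internally consistent, which can be verified with a quick arithmetic sanity check. This is our own sketch (the variable names and the tonnes/day-to-tonnes/year scaling are our assumptions, not from the article):

```python
# Sanity check of the artificial-tree arithmetic in the Vince 12 card.
# Card claims: 1 tonne CO2 captured per tree per day; 10 million trees
# remove ~3.6 billion tonnes/year, ~10% of global annual emissions;
# therefore ~100 million trees would cover total emissions.

TONNES_PER_TREE_PER_DAY = 1  # Lackner's per-tree capture estimate
trees = 10_000_000           # 10 million trees

# Scale daily capture to a year (our assumption: simple * 365).
annual_capture = TONNES_PER_TREE_PER_DAY * trees * 365
print(f"{annual_capture / 1e9:.2f} billion tonnes/yr")  # 3.65 (card rounds to 3.6)

# If 10M trees cover 10% of emissions, implied global emissions and the
# tree count needed to cover all of them:
implied_global_emissions = annual_capture / 0.10
trees_for_total = implied_global_emissions / (TONNES_PER_TREE_PER_DAY * 365)
print(f"{trees_for_total / 1e6:.0f} million trees")  # 100, matching the card
```

The figures line up: 10 million trees at one tonne a day is 3.65 billion tonnes a year, and ten times that fleet (100 million trees) matches the card's "total emissions" claim.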
12/4/21
ND - CP - Transparency Measures
Tournament: Blue Key | Round: 1 | Opponent: Bentonville JH | Judge: Sanjana Bhatnagar
Cites are broken - check open source
2/19/22
ND - DA - Business Confidence
Tournament: Blue Key | Round: 4 | Opponent: Scarsdale KS | Judge: Samantha McLoughlin
Business Confidence is high now – best surveys.
ICAEW 8-20 8-20-2021 "Business confidence remains at record high as economy gets sales boost" https://www.icaew.com/about-icaew/news/press-release-archive/2021-news-releases/business-confidence-remains-at-record-high-as-economy-gets-sales-boost (Institute of Chartered Accountants in England and Wales)Elmer
Friday 20 August 2021: Business confidence has hit a record high for the second quarter in a row, a survey of chartered accountants published today has found. Business confidence at record high for second consecutive quarter, ICAEW survey finds Strong sales growth projections key to confidence boost Companies face new challenges as economy reopens Business confidence has hit a record high for the second quarter in a row, a survey of chartered accountants published today (FRIDAY 20 AUGUST 2021) has found. Sentiment tracked by ICAEW’s Business Confidence Monitor™ (BCM) found optimism at 47 on the quarterly index, its highest level since the survey was launched in 2004 and surpassing the previous record set last quarter. 1 The optimism was shared by businesses of all sizes across all sectors, nations and regions in the UK. The record reading was a likely reflection of the expectation of strong sales growth in the year ahead, especially in the domestic market where a record rise of 7.4% is predicted over the coming 12 months. Companies also expect a sharp boost in export sales, which will rebound to pre-pandemic rates of increase. 2 However, the likelihood of confidence remaining positive is highly dependent on the COVID-19 situation not deteriorating further, ICAEW said. Decisions on interest rates, the winding down of support schemes, such as furlough, could also have an impact on future business sentiment. 
Office for National Statistics figures published last week showed that Britain’s economy grew 4.8% between April and June, below the 5% that the Bank of England had forecast. Michael Izza, ICAEW Chief Executive, said: “Business confidence has now hit record levels for two quarters in a row - companies are clearly benefitting from rising customer demand as the economy reopens and life begins to return to normal. The high level of optimism is unsurprising but it remains vulnerable to a possible resurgence of COVID-19 as we head into the autumn. “While confidence is high across all sectors, with companies reporting record expectations for domestic sales growth, they also told us they face challenges from skills shortages, wage increases and rising costs. “This is a crucial stage for the economy. Despite having to cope with the winding down of government financial support and possible interest rate rises, businesses are definitely bouncing back, but finances are fragile and any additional costs could threaten the recovery.”
Right to Strike has unintended effects that threaten growth and business confidence.
Tenza 20, Mlungisi. "The effects of violent strikes on the economy of a developing country: a case of South Africa." Obiter 41.3 (2020): 519-537. (lecturer in the field of Labour Law at the School of Law. He holds a LLM Degree.)Elmer
2 BACKGROUND When South Africa obtained democracy in 1994, there was a dream of a better country with a new vision for industrial relations.5 However, the number of violent strikes that have bedevilled this country in recent years seems to have shattered the aspirations of a better South Africa. South Africa recorded 114 strikes in 2013 and 88 strikes in 2014, which cost the country about R6.1 billion according to the Department of Labour.6 The impact of these strikes has been hugely felt by the mining sector, particularly the platinum industry. 
The biggest strike took place in the platinum sector where about 70 000 mineworkers downed tools for better wages. Three major platinum producers (Impala, Anglo American and Lonmin Platinum Mines) were affected. The strike started on 23 January 2014 and ended on 25 June 2014. Business Day reported that “the five-month-long strike in the platinum sector pushed the economy to the brink of recession”. 7 This strike was closely followed by a four-week strike in the metal and engineering sector. All these strikes (and those not mentioned here) were characterised with violence accompanied by damage to property, intimidation, assault and sometimes the killing of people. Statistics from the metal and engineering sector showed that about 246 cases of intimidation were reported, 50 violent incidents occurred, and 85 cases of vandalism were recorded.8 Large-scale unemployment, soaring poverty levels and the dramatic income inequality that characterise the South African labour market provide a broad explanation for strike violence.9 While participating in a strike, workers’ stress levels leave them feeling frustrated at their seeming powerlessness, which in turn provokes further violent behaviour.10 These strikes are not only violent but take long to resolve. Generally, a lengthy strike has a negative effect on employment, reduces business confidence and increases the risk of economic stagflation. In addition, such strikes have a major setback on the growth of the economy and investment opportunities. It is common knowledge that consumer spending is directly linked to economic growth. At the same time, if the economy is not showing signs of growth, employment opportunities are shed, and poverty becomes the end result. The economy of South Africa is in need of rapid growth to enable it to deal with the high levels of unemployment and resultant poverty. 
One of the measures that may boost the country’s economic growth is by attracting potential investors to invest in the country. However, this might be difficult as investors would want to invest in a country where there is a likelihood of getting returns for their investments. The wish of getting returns for investment may not materialise if the labour environment is not fertile for such investments as a result of, for example, unstable labour relations. Therefore, investors may be reluctant to invest where there is an unstable or fragile labour relations environment. 3 THE COMMISSION OF VIOLENCE DURING A STRIKE AND CONSEQUENCES The Constitution guarantees every worker the right to join a trade union, participate in the activities and programmes of a trade union, and to strike. 11 The Constitution grants these rights to a “worker” as an individual.12 However, the right to strike and any other conduct in contemplation or furtherance of a strike such as a picket can only be exercised by workers acting collectively.14 The right to strike and participation in the activities of a trade union were given more effect through the enactment of the Labour Relations Act 66 of 1995 (LRA). The main purpose of the LRA is to “advance economic development, social justice, labour peace and the democratisation of the workplace”. 16 The advancement of social justice means that the exercise of the right to strike must advance the interests of workers and at the same time workers must refrain from any conduct that can affect those who are not on strike as well as members of society. Even though the right to strike and the right to participate in the activities of a trade union that often flow from a strike are guaranteed in the Constitution and specifically regulated by the LRA, it sometimes happens that the right to strike is exercised for purposes not intended by the Constitution and the LRA, generally. 
18 For example, it was not the intention of the Constitutional Assembly and the legislature that violence should be used during strikes or pickets. As the Constitution provides, pickets are meant to be peaceful. 19 Contrary to section 17 of the Constitution, the conduct of workers participating in a strike or picket has changed in recent years, with workers trying to emphasise their grievances by causing disharmony and chaos in public. A media report by the South African Institute of Race Relations pointed out that between 1999 and 2012 there were 181 strike-related deaths, 313 injuries, and 3,058 arrests for public violence associated with strikes.20 The question is whether employers succumb more easily to workers’ demands if a strike is accompanied by violence. In response to this question, one worker remarked as follows: “There is no sweet strike, there is no Christian strike … A strike is a strike. You want to get back what belongs to you ... you won’t win a strike with a Bible. You do not wear high heels and carry an umbrella and say ‘1992 was under apartheid, 2007 is under ANC’. You won’t win a strike like that.” 21 The use of violence during industrial action affects not only the strikers or picketers and the employer and his or her business, but also innocent members of the public, non-striking employees, the environment and the economy at large. In addition, striking workers visit non-striking workers’ homes, often at night, threaten them and, in some cases, assault or even murder workers who are acting as replacement labour. 22 This points to the fact that, for many workers and their families, living conditions remain unsafe and vulnerable to strike-related violence. 
In Security Services Employers Organisation v SA Transport and Allied Workers Union (SATAWU),23 it was reported that about 20 people were thrown out of moving trains in the Gauteng province; most of them were security guards who were not on strike and who were believed to have been targeted by their striking colleagues. Two of them died, while others were admitted to hospital with serious injuries.24 In SA Chemical Catering and Allied Workers Union v Check One (Pty) Ltd,25 striking employees carried various weapons, including sticks, pipes, planks and bottles. One of the strikers, Mr Nqoko, was alleged to have threatened to cut the throats of those employees who had been brought from other branches of the employer’s business to help in the branch where employees were on strike. Such conduct was held to be inconsistent with the proper conduct of a strike.26
Corporate optimism, specifically investment, drives self-sustaining recovery.
Van der Welle 7-7 Peter Van der Welle 7-7-2021 “How capex holds the key to a self-sustaining economic recovery” https://www.robeco.com/latam/en/insights/2021/07/how-capex-holds-the-key-to-a-self-sustaining-economic-recovery.html (Strategist within the Global Macro team, M.A. in Economics from Tilburg University) Elmer
Title: How capex holds the key to a self-sustaining economic recovery. Capital expenditure to fix supply shortages and meet burgeoning demand is seen figuring strongly in the post-Covid recovery. Author and summary omitted. Companies are expected to invest heavily in new equipment and capacity as they seek to meet the pent-up demand released from economic reopening. “The world is emerging from the pandemic, and much of the focus has been on the release of huge pent-up demand for goods and services that have been inaccessible for much of the past year,” says Peter Van der Welle, strategist with Robeco’s multi-asset team. 
“But there is a bigger issue regarding the ability of companies to supply these goods and services, due to the supply side constraints that have emerged through economic reopening. We believe this is powering a resurgence in capital expenditure by companies, and those which are investing in new equipment to meet greater demand will be the more sought-after stocks.”
Capex intentions
Van der Welle says this trend can already be seen in the US Federal Reserve’s Capex Intentions Index, which shows that steep year-on-year increases in capital expenditures are planned. “So, that's promising for a near-term rebound in the capex cycle,” he says. “The market has already picked up on that theme because you can see a clear outperformance of capex-intensive stocks compared to the broader market year to date.”
Fiscal dominance
Van der Welle says five elements support the multi-asset team’s view that capex will rise from here onwards. “The first is the overarching macroeconomic picture in that we are increasingly moving towards an environment of fiscal dominance and away from one that has been monetary-led via quantitative easing,” he says. “Central banks have pursued very easy monetary policies, but they have hit the nominal lower bound with regard to policy rates.” “This is a hard constraint because real rates are difficult for central banks to push even lower than they are nowadays, given the strong consensus among both central bankers and market participants that inflation is transitory.”
Big spending plans
For stimulus, fiscal policy is better suited to address the negative supply shock that Covid-19 has posed. Fiscal dominance can be seen in the huge infrastructure spending planned in the US, with the USD 1.9 trillion American Rescue Plan already in motion and the USD 2 trillion American Jobs Plan going through Congress. In Europe, the disbursement of the EUR 750 billion EU Recovery Fund is due to start later in July. 
“An era of fiscal dominance is able to say goodbye to the secular stagnation thesis, which holds that the economy is suffering from under-investment,” says Van der Welle. “Under-investment due to insufficient demand, which was the biggest problem after the global financial crisis, has become less likely.” “We saw very subdued consumption growth both in the US and elsewhere between 2009 and 2019. That story is reversing in the US. Households’ income has been supported by fiscal policy during the Covid-19 recession, while burgeoning consumer demand in the reopening phase could prove to be more sticky as employment prospects continue to improve in the medium term.”
Tobin’s Q looks good
A third reason to expect higher capex is driven by ‘Tobin’s Q’ – the market value of a company divided by its assets' replacement cost. If this ratio is above one, then corporates have an incentive to invest directly in the underlying assets rather than buying another company at market value to acquire the same assets. The Tobin’s Q ratio is currently at 1.7 for the US. “So it's very expensive to do M&A, and it is wiser for corporates to invest in the underlying capital goods themselves,” Van der Welle says. “We should therefore expect a gradual move away from M&A activity towards companies making direct investments in capital goods.”
Supply-side constraints
The fourth element is the severe supply-side constraints seen in the global economy, as capacity shut down during the pandemic. “This is reflected in the ISM Prices Paid Index, which reached an all-time high in June in reflection of rampant shortages of raw materials and labor,” says Van der Welle. “Clearly the issue today following the pandemic is not demand related, but supply related. 
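The Tobin's Q decision rule described above can be sketched in a few lines. This is an illustrative sketch of the ratio and its investment implication, not Van der Welle's model; the input values are hypothetical apart from the roughly 1.7 US ratio cited in the passage.

```python
# Tobin's Q: market value of a company divided by the replacement cost
# of its assets. Q > 1 means buying another company at market value is
# more expensive than investing directly in the underlying capital goods.

def tobins_q(market_value: float, replacement_cost: float) -> float:
    """Return Tobin's Q, the ratio of market value to asset replacement cost."""
    return market_value / replacement_cost

def prefer_direct_capex(q: float) -> bool:
    """Per the passage's logic: a ratio above one favors direct capex over M&A."""
    return q > 1.0

# Hypothetical figures chosen so the ratio matches the ~1.7 cited for the US.
q = tobins_q(market_value=170.0, replacement_cost=100.0)
print(q)                       # 1.7
print(prefer_direct_capex(q))  # True
```

With the ratio at 1.7, the rule reproduces the article's conclusion: direct investment in capital goods beats acquiring the same assets through M&A.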
This will also trigger more awareness to push the productivity frontier and incentivize capital expenditure.”
Less reliance on labor
The fifth element is the partial substitution from labor to capital in the US against the backdrop of lingering labor shortages. “A decline in the labor force participation rate shows that people are not quickly returning to the labor force, as they have been disincentivized by the subsidies and pay checks they have gained from the stimulus plans, and/or structural changes in their work/life balance due to the pandemic,” says Van der Welle. “When the cost of labor becomes more expensive, substituting labor with capital becomes more attractive for employers. Typically, the inflection point for capex intentions becoming positive is when unit labor costs rise by more than 2% year on year, which is the case today.”
Capex will lengthen the earnings cycle
Regarding earnings, there is a significant relationship between capex intentions and productivity, though the lag from intending to invest to actually getting a realized productivity gain is quite long – up to several years. Higher capex that eventually brings higher productivity growth will sustain the earnings cycle, Van der Welle says. Higher productivity gives corporates more pricing power because it suppresses unit labor costs, and that means profit margins can stay elevated for longer.
Business confidence is the best indicator for growth.
Khan 20, Hashmat, and Santosh Upadhayaya. "Does business confidence matter for investment?" Empirical Economics 59.4 (2020): 1633-1665. (Economics Professor at Carleton University) Elmer
Abstract: Business confidence is a well-known leading indicator of future output. Whether it has information about future investment is, however, unclear. We determine how informative business confidence is for investment growth independently of other variables using US business confidence survey data for 1955Q1–2016Q4. 
Our main findings are: (i) business confidence has predictive ability for investment growth; (ii) remarkably, business confidence has superior forecasting power, relative to conventional predictors, for investment downturns over 1–3-quarter forecast horizons and for the sign of investment growth over a 2-quarter forecast horizon; and (iii) exogenous shifts in business confidence reflect short-lived non-fundamental factors, consistent with the ‘animal spirits’ view of investment. Our findings have implications for improving investment forecasts, developing new business cycle models, and studying the role of social and psychological factors determining investment growth.
Introduction
Business confidence is a well-known leading indicator of future output, especially during economic downturns, and receives attention from the media, policymakers and forecasters. Somewhat surprisingly, the direct link between business confidence and investment has not yet been investigated. Our paper fills this gap. We provide a quantitative assessment of the information in business confidence for future investment growth, after controlling for conventional determinants such as user cost, output, cash flow and stock price. Understanding the predictive power of business confidence is valuable along three dimensions. First, it can help forecasters and policymakers improve their investment forecasts. Second, it can provide a rationale for explicitly including business confidence—either as causal or as anticipatory—in theoretical models of business cycles. 
Third, it can help motivate studies on how investment managers’ social and psychological circumstances influence investment decisions over and beyond rational cost-benefit analyses.Footnote1 We consider the Organization for Economic Co-operation and Development (OECD)’s business confidence index for the USA as a measure of business confidence and ask the following three questions.Footnote2 Does business confidence have independent information about future business investment growth? Does it have forecasting power for investment downturns? Does it help in making directional forecasts—the positive or negative movements in the trajectory of investment growth? Previous literature that used business confidence has primarily studied its predictive properties for variables other than investment. Heye (1993) examines the relationship between business confidence and labour market conditions in the USA and other industrialized countries. Dasgupta and Lahiri (1993) show that business sentiment has explanatory power for forecasting business cycle turning points. Taylor and McNabb (2007) find that business confidence is procyclical and plays an important role in forecasting output downturns. Although we focus on business confidence, our paper is related to a large body of previous research that has studied consumer confidence or sentiment and its ability to forecast macroeconomic variables. Leeper (1992) finds that consumer sentiment does not help predict industrial production and unemployment, especially when financial variables are taken into account. On the other hand, Matsusaka and Sbordone (1995) reject the hypothesis that consumer sentiment does not predict output. Carroll et al. (1994), Fuhrer (1993), Bram and Ludvigson (1998), Ludvigson (2004) and Cotsomitis and Kwan (2006) find that consumer attitudes contain additional information for predicting household spending behaviour. Lahiri et al. 
(2016) employ a large real-time dataset and find that the consumer confidence survey plays an important role in improving the accuracy of consumption forecasts. Christiansen et al. (2014) find that consumer and business sentiments contain independent information for forecasting business cycles. Barsky and Sims (2012) find that consumer confidence reflects news about future fundamentals and that a confidence shock has a persistent effect on the economy. More recently, Angeletos et al. (2018) quantify the role of confidence for business cycles from both theoretical and empirical perspectives. They construct a measure of confidence within a Vector Autoregressive (VAR) framework by taking the linear combination of the VAR residuals that maximizes the sum of the volatilities of hours and investment at frequencies of 6–32 quarters. Their measure likely captures a mixture of consumer and business confidence and is, therefore, distinct from the survey-based measure that we use in our analysis. We find that business confidence leads US business investment growth by one quarter. It leads structures investment, one of the major components of business investment, by two quarters. Our empirical analysis shows that investors’ confidence has statistically significant predictive power for US business investment growth and its components (equipment and non-residential structures) after controlling for other determinants of investment. To better gauge the role of business confidence for investment growth, we also perform an Out-Of-Sample (OOS) test for 1990Q1–2016Q4. Our findings suggest that the OOS test results are similar to the in-sample test results.Footnote3 While, as we found, business confidence has predictive power for total investment, it may also contain additional information on the trajectory of investment as captured by downturns and directional changes. 
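The paper's downturn definition — investment growth below the sample average for more than two consecutive quarters — can be sketched as a simple flagging routine. This is an illustrative reconstruction of the stated rule, not the authors' code, and the growth series below is made up for demonstration.

```python
# Flag "downturn" quarters: growth below the sample mean for a run of
# more than two consecutive quarters (the rule stated in the passage).

def flag_downturns(growth: list) -> list:
    """Return one boolean per quarter marking downturn episodes."""
    avg = sum(growth) / len(growth)
    below = [g < avg for g in growth]
    flags = [False] * len(growth)
    run_start, run_len = 0, 0
    # Append a sentinel False so the final run is closed out.
    for i, b in enumerate(below + [False]):
        if b:
            if run_len == 0:
                run_start = i
            run_len += 1
        else:
            if run_len > 2:  # "more than two consecutive quarters"
                for j in range(run_start, run_start + run_len):
                    flags[j] = True
            run_len = 0
    return flags

# Hypothetical quarterly investment growth rates (percent).
growth = [3.0, 2.5, -1.0, -0.5, 0.0, 4.0, 1.0, 3.5]
print(flag_downturns(growth))  # quarters 3-5 form a flagged 3-quarter run
```

Note that a run of exactly two below-average quarters is not flagged, matching the "more than two" threshold in the definition.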
This information would be of interest to policymakers in assessing the economy’s near-term outlook, over and above the general ability of business confidence to forecast investment. Indeed, we find that the contemporaneous correlation between business confidence and investment growth rises during NBER recession dates. This property of the data suggests that it is worthwhile to explore the forecasting ability of business confidence for investment downturns and directional changes. Towards this end, we define investment downturns as business investment growth below the sample average for more than two consecutive quarters.Footnote4 Using a static probit forecasting model, we assess the OOS forecasting ability of business confidence for investment downturns for 1990Q1–2016Q4. A key finding of this approach in the literature is that the term spread and stock price contain information for forecasting US recessions (Estrella and Mishkin 1998; Nyberg 2010; Kauppi and Saikkonen 2008). We follow a similar approach and find that business confidence has statistically significant forecasting power for investment downturns over 1–4-quarter forecast horizons in the US economy. It has stronger forecasting ability than traditional predictors such as the term spread, credit spread and stock price at 1–3-quarter forecast horizons. We also find strong evidence that business confidence has good incremental predictive power for investment downturns over 1–4-quarter forecast horizons, controlling for other predictors of downturns.
Economic decline results in multilateral breakdown that causes state collapse, conflict, climate change, and Arctic and Space War. 
McLennan 21 – Strategic Partners Marsh McLennan, SK Group, Zurich Insurance Group; Academic Advisers: National University of Singapore, Oxford Martin School, University of Oxford, Wharton Risk Management and Decision Processes Center, University of Pennsylvania. “The Global Risks Report 2021, 16th Edition” http://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2021.pdf Re-cut by Elmer
Forced to choose sides, governments may face economic or diplomatic consequences, as proxy disputes play out in control over economic or geographic resources. The deepening of geopolitical fault lines and the lack of viable middle power alternatives make it harder for countries to cultivate connective tissue with a diverse set of partner countries based on mutual values and maximizing efficiencies. Instead, networks will become thick in some directions and non-existent in others. The COVID-19 crisis has amplified this dynamic, as digital interactions represent a “huge loss in efficiency for diplomacy” compared with face-to-face discussions.23 With some alliances weakening, diplomatic relationships will become more unstable at points where superpower tectonic plates meet or withdraw. At the same time, without superpower referees or middle power enforcement, global norms may no longer govern state behaviour. Some governments will thus see the solidification of rival blocs as an opportunity to engage in regional posturing, which will have destabilizing effects.24 Across societies, domestic discord and economic crises will increase the risk of autocracy, with corresponding censorship, surveillance, restriction of movement and abrogation of rights.25 Economic crises will also amplify the challenges for middle powers
as they navigate geopolitical competition. ASEAN countries, for example, had offered a potential new manufacturing base as the United States and China decouple, but the pandemic has left these countries strapped for cash to invest in the necessary infrastructure and productive capacity.26 Economic fallout is pushing many countries to debt distress (see Chapter 1, Global Risks 2021). While G20 countries are supporting debt restructure for poorer nations,27 larger economies too may be at risk of default in the longer term;28 this would leave them further stranded—and unable to exercise leadership—on the global stage. Multilateral meltdown Middle power weaknesses will be reinforced in weakened institutions, which may translate to more uncertainty and lagging progress on shared global challenges such as climate change, health, poverty reduction and technology governance. In the absence of strong regulating institutions, the Arctic and space represent new realms for potential conflict as the superpowers and middle powers alike compete to extract resources and secure strategic advantage.29 If the global superpowers continue to accumulate economic, military and technological power in a zero-sum playing field, some middle powers could increasingly fall behind. Without cooperation nor access to important innovations, middle powers will struggle to define solutions to the world’s problems. In the long term, GRPS respondents forecasted “weapons of mass destruction” and “state collapse” as the two top critical threats: in the absence of strong institutions or clear rules, clashes— such as those in Nagorno-Karabakh or the Galwan Valley—may more frequently flare into full-fledged interstate conflicts,30 which is particularly worrisome where unresolved tensions among nuclear powers are concerned. These conflicts may lead to state collapse, with weakened middle powers less willing or less able to step in to find a peaceful solution.
10/30/21
ND - DA - Terrorism
Tournament: Blue Key | Round: 1 | Opponent: Bentonville JH | Judge: Sanjana Bhatnagar
Tech can solve infrastructure concerns but needs to be integrated – operators are key.
Jacobs 5/31 Lionel; Senior Security Architect in the Palo Alto Networks ICS and SCADA solutions team. Coming from the asset-owner side, Lionel has spent more than 20 years working in the IT/OT environment, with a focus on ICS systems design, controls, and implementation. He was a pioneer in bridging the IT-OT security gap and implementing next-generation security into performance- and safety-critical process control areas. During his tenure, he successfully deployed a large-scale ICS/SCADA security architecture composed of over 100 next-generation firewalls, hundreds of advanced endpoint protection clients and SIEM, distributed over dozens of remote plants and a centralized core, all based on a "Zero Trust" philosophy. Lionel graduated from Houston Baptist University with a double degree in Physics and Mathematics and has held certifications as a MCSE, CCA, CCNP, CCIP, CCNA, CSSA, and GICSP; “Critical Infrastructure Protection: Physical and Cyber Security Both Matter,” eSecurity Planet; 5/31/21; https://www.esecurityplanet.com/networks/critical-infrastructure-protection-physical-cybersecurity///SJWen
Segmentation based on business criteria
Segmentation is not just breaking apart the network based on the IP-address space. True segmentation requires identifying and grouping devices into Zones or Enclaves based on meaningful business criteria, to better protect vulnerable devices found within the address space. Access to devices in the zone needs to be restricted by users, groups, protocols, networks, and devices. In some instances, you may even consider restricting access by time of day. 
IoT/IIoT is beginning to take hold in the energy industry, which means there are going to be more devices attached to these networks gathering information and possibly running on a vendor’s proprietary software and hardware, which more than likely will not be managed or patchable by the operator of the system. So O&G needs to have a definite plan for how it will address this growing trend, and a zero trust-based strategy offers the best means of doing this integration in a safe, secure, and, most important, reversible manner.
Camera and sensor security
Segmentation will also include the zoning of radio frequency (RF) technologies like Wi-Fi, microwave, satellite, and cellular. ICS and SCADA systems operators must remain mindful of the possibility of an upstream attack by threat actors who have managed to compromise their RF facilities. Remote facilities and devices often have cameras and sensors to alert when a door has been opened. Still, because they are remote, attackers have time to enter the facilities and plant a device that can go completely unnoticed. Another option physical access affords them is the opportunity to compromise the runtime operating systems and/or OS of the devices they find. The only way you will find these would be to do a physical search of the facility or cabinet and run an audit of the OS to ensure nothing has been tainted.
Zoning limits damage
The reason zero trust segmentation (zoning) is so important is that you may not have the time to perform these checks to confirm that a site is not compromised. With proper zoning enforcement, you can limit and isolate the damage to a region or just that location. Zones in a Zero Trust network also serve as an inspection point for traffic entering and exiting the enclave. The enabling of IPS, IDS, and virtual sandboxing technology can be applied on a per-zone basis, allowing for customized protection for the vulnerable devices contained within. 
Implementing these security measures is a best practice even in zones where devices can receive updates and have some form of endpoint protection. With proper design and device consideration, zoning with the different inspection technologies enabled can also be a remediating factor for those devices in your network that cannot be patched or updated, and even those that are end-of-life. In short, zoning with inspection technology enabled helps to ensure IT and OT network systems’ safe operations. In even the most secure environments, it is never safe to assume that data traffic traversing the network is free of a potential threat.
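The zone-based access restrictions the card describes — devices grouped into zones, with access limited by user group, protocol, source network, and optionally time of day — can be sketched as a default-deny policy check. This is a minimal illustrative sketch: the zone name, policy fields, and function names are assumptions for demonstration, not Palo Alto Networks' or any vendor's actual API.

```python
# Default-deny access check against a per-zone policy, mirroring the
# restrictions named in the passage: group, protocol, network, time of day.
from ipaddress import ip_address, ip_network

# Hypothetical policy for one zone; all names and values are made up.
ZONE_POLICY = {
    "scada-plc-zone": {
        "allowed_groups": {"ot-engineers"},
        "allowed_protocols": {"modbus", "dnp3"},
        "allowed_networks": [ip_network("10.10.0.0/24")],
        "allowed_hours": range(6, 18),  # optional time-of-day restriction
    },
}

def access_allowed(zone, user_group, protocol, src_ip, hour):
    """Permit a request only if every policy criterion matches; deny otherwise."""
    policy = ZONE_POLICY.get(zone)
    if policy is None:
        return False  # zero trust: unknown zones get no access
    return (
        user_group in policy["allowed_groups"]
        and protocol in policy["allowed_protocols"]
        and any(ip_address(src_ip) in net for net in policy["allowed_networks"])
        and hour in policy["allowed_hours"]
    )

print(access_allowed("scada-plc-zone", "ot-engineers", "modbus", "10.10.0.5", 9))  # True
print(access_allowed("scada-plc-zone", "contractors", "modbus", "10.10.0.5", 9))   # False
```

The design choice worth noting is the default-deny stance: any criterion that fails, or any zone without an explicit policy, results in refusal, which is the property that lets zoning contain damage to a single enclave.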
Increased strikes send a clear signal to terrorists that critical US infrastructure is vulnerable by weakening organizations.
Davies 6 Ross; George Mason University - Antonin Scalia Law School, Faculty, The Green Bag; “Strike Season: Protecting Labor-Management Conflict in the Age of Terror,” SSRN; 4/12/06; https://papers.ssrn.com/sol3/papers.cfm?abstract_id=896185//SJWen
Strikes (and, to a lesser extent, lockouts) are painful but necessary parts of private-sector American labor-management relations. Even if they weren't - even if sound public policy called for their eradication - we couldn't stop them. They are an inevitable byproduct of the conflicting interests and limited resources of organized workers and their employers. History shows that this is true even in times of warfare overseas or crisis at home: labor-management strife lessens at the beginning of a conflict and then bounces back. Now, however, we are confronted with warfare at home, a phenomenon that the United States has not had to deal with since the Civil War - before the rise of today's unprecedentedly large, complex, and interdependent economy and government. And history is repeating itself again. After a lull at the beginning of the war with terrorists, work stoppages have returned to their pre-war levels. The overall rate of strike activity is substantially lower than it was during previous wars (it has been slowly declining, along with overall union membership in the private sector, for decades). Today's war, however, is being fought in part on American soil, and against enemies who operate worldwide, but whose attacks tend to be small and local, seeking advantage from the unpredictability and brutality of the damage they inflict rather than from its scale. 
Thus, even small, localized, and occasional work stoppages - not just the large-scale strikes that arguably affected the military-industrial complex and thus the war efforts in the past - have the potential to increase risks to critical infrastructure and public safety during the war on terror. In other words, persistent strike activity at current levels poses risks of public harm, albeit risks that are difficult to anticipate with specificity in the absence of much experience or available data. This justifies taking some reasonable precautions, including the proposal made in this Article. By its very nature, a labor strike increases the vulnerability of that employer's operations to a terrorist attack. A strike is an act specifically designed to disrupt and weaken an employer's operations, for the (usually) perfectly lawful purpose of pressing for resolution of a dispute with management. A weakened organization or other entity is, of course, less capable of resisting and surviving exogenous shocks, whether they be commercial competition or terrorist attacks. In the United States, with its fully extended and endlessly interconnected critical infrastructure that touches everything from food processing to energy distribution to water quality, a strike in the wrong place at the wrong time that disrupts and weakens some part of that infrastructure could be decisive in the success or failure of a terrorist attack of the small, local sort described above, on such a weakened link in some infrastructural chain. Of course, none of this is to suggest that any union or its members (or any employer or its managers) would knowingly expose their fellow citizens or their property to a terrorist attack. To the contrary, experience to date suggests that union members are at least as patriotic and conscientious as Americans in general. 
In fact, the effectiveness of the proposal made in this Article is predicated in part on the assumption that neither workers nor their employers will knowingly contribute to the incidence or effectiveness of terrorist attacks. The concern addressed here is, rather, that innocent instigators or perpetuators of a work stoppage might unwittingly facilitate a successful terrorist attack or aggravate its effects.
Attacks on critical infrastructure collapse the economy through multiple avenues.
FAS 6 DCSINT Handbook No. 1.02; Info directly from US Army and Deputy Chief of Staff for Intelligence; “Critical Infrastructure Threats and Terrorism,” DCSINT/FAS; 8/10/6; https://fas.org/irp/threat/terrorism/sup2.pdf//SJWen
Agriculture
In 1984, a cult group poisoned salad bars at several Oregon restaurants with Salmonella bacteria in the first recorded event of bioterrorism in the United States. This resulted in 750 people becoming sick.24 A review of the agriculture infrastructure reveals vulnerable areas such as the high concentration of the livestock industry and the centralized nature of the food processing industry. The farm-to-table chain contains various points at which an attack could be launched. The threat of attack would seriously damage consumer confidence and undermine export markets. Understanding the goal of the threat points to the area most likely to be attacked. If the intent were economic disruption, the target would be livestock and crops, but if the intent were mass casualties, the point of attack would be contamination of finished food products. Damage to livestock could be very swift; the USDA calculated that foot-and-mouth disease could spread to 25 states in 5 days.25 CDC is presently tracking and developing scenarios for the arrival of Avian Flu.
Banking
Prior to the destruction of the Twin Towers, physical attacks against the banking industry, such as the destruction of facilities, were rare. Unfortunately, evidence indicates that may change: in March 2005 three British al-Qa’ida operatives were indicted by a U.S. federal court on charges of conducting detailed reconnaissance of financial targets in lower Manhattan, Newark, New Jersey, and Washington, D.C. 
In addition to videotaping the Citigroup Center and the New York Stock Exchange in New York City, the Prudential Financial building in Newark, and the headquarters of the International Monetary Fund and the World Bank in Washington D.C., the men amassed more than 500 photographs of the sites.26 The banking infrastructure’s primary weakness is along its cyber axis of attack. Through phishing and banking Trojans targeting specific financial institutions, attackers reduce confidence among consumers. Recently American Express posted an alert online, including a screenshot of a pop-up that appeared when users logged in to its secure site.27 The attack not only attempts to obtain personal information that can be used for various operations, but also launches a virus into the user’s computer. CitiBank and Chase Manhattan Bank were both victims during 2005 and 2006 of phishing schemes misrepresenting their services to their clients.
Energy
Recently the oil industry has occupied the headlines, and the criticality of this infrastructure is not lost on terrorists. In mid-December 2004, Arab television aired an alleged audiotape message by Usama bin Laden in which he called upon his followers to wreak havoc on the U.S. and world economy by disrupting oil supplies from the Persian Gulf to the United States.28 The U.S. uses over 20.7 million barrels a day of crude oil and products and imports 58.4% of that requirement.29 On 19 January 2006 al-Qaeda leader Osama bin Laden announced in a video release that “The war against America and its allies will not be confined to Iraq…..”, and since June of 2003 there have been 298 recorded attacks against Iraqi oil facilities.30 Terrorists conduct research as to the easiest point at which to damage the flow of oil or the point where the most damage can be done. 
Scenarios involve the oil fields themselves: a jetliner crashing into the Ras Tanura facility in Saudi Arabia could remove 10 percent of the world’s energy imports in one act.31 Maritime attacks are also an option for terrorists; on October 6, 2002 a French tanker carrying 397,000 barrels of crude oil from Iran to Malaysia was rammed by an explosive-laden boat off of the port of Ash Shihr, 353 miles east of Aden. The double-hulled tanker was breached, and maritime insurers tripled the rates.32 Energy must often travel long distances from the site where it is obtained to the point where it is converted into energy for use; a catastrophic event at any of the sites or along its route can adversely impact the energy infrastructure and cause ripples in other infrastructures. The security of the pipeline in Alaska increases in importance as efforts are made to make America more energy independent. Economy The U.S. economy is the end-state target of several terrorist groups as identified in the introduction quote. The means by which terrorists and other threats attempt to impact the economic infrastructure is through its linkage to the other infrastructures. Attacks are launched at other infrastructures, such as energy or the Defense Industrial Base, in an effort to achieve a “cascading” result that impacts the economy. Cyber attacks on Banking and Finance are another effort to indirectly impact the economy. The short-term impacts of the 9/11 attacks on Lower Manhattan resulted in the loss of 30% of office space, and a number of businesses simply ceased to exist. Close to 200,000 jobs were destroyed or relocated out of New York City. The destruction of physical assets was estimated in the national accounts to amount to $14 billion for private businesses, $1.5 billion for state and local government enterprises and $0.7 billion for federal enterprises.
Rescue, cleanup and related costs are estimated at a minimum of $11 billion, for a total direct cost of $27.2 billion.33 The medium- and long-term effects cannot be accurately estimated but demonstrate the idea of cascading effects. The five main areas affected over a longer period were insurance; airlines; tourism and other service industries; shipping; and security and military spending. At various times terrorist rhetoric has mentioned attacks against Wall Street proper, but the more realistic damage to the economy will come through the indirect approach of cascading effects. Transportation The attack on commuter trains in Madrid in March of 2004 and the London bombings in July of 2005, which together killed 243 people, clearly indicated the threat to the transportation infrastructure. Statistics provided by the Brookings Institution in Washington DC show that between 1991 and 2001, 42% of worldwide terrorist attacks were directed against mass transit. Transportation is viewed by terrorists as a “soft target” and one that will impact the people of a country. Mass Service Transportation (MST) is the likely target of a terrorist attack: MST caters to large volumes of people crammed into narrow, confined spaces; MST is designed to move large numbers of people quickly and efficiently, which often runs counter to protective measures; MST assets are enclosed, serving to amplify explosions; and MST attacks can result in “cascading effects” because communications and power conduits are usually collocated in proximity to their routes. The Department of Homeland Security sent a “public sector notice” in May of 2006 based on two incidents of “suspicious videotaping” of European mass-transit systems.34 The individual had several tapes besides the one in his camera, none of which showed any tourist sites. The tapes focused on the insides of subway cars, the inside and outside of several stations, and exit routes from the stations.
In June of 2003 the FBI arrested Iyman Faris, a 34-year-old naturalized American citizen who had been in contact with Al Qaeda, conducting research and reconnaissance in an effort to destroy the Brooklyn Bridge.35 Mr. Faris had traveled to Afghanistan and Pakistan in 2000, meeting with Osama bin Laden; he returned to the U.S. and began gathering information concerning the Brooklyn Bridge and communicating via coded messages with Al Qaeda leaders. An attack on the bridge would have damaged not only the transportation infrastructure, but also a known American landmark. On 24 May 2006, a Pakistani immigrant was convicted on charges of plotting to blow up one of Manhattan’s busiest subway stations in retaliation for the U.S. actions at the Abu Ghraib prison.36 Terrorist threats to the transportation infrastructure extend beyond land to the sea. Vice Admiral Jonathan Greenert, commander of the U.S. Seventh Fleet, said “one of my nightmares would be a maritime terrorism attack in the Strait of Malacca”.37 “There is a strain of al-Qaida in Southeast Asia, called Jemaah Islamiya. They are actively pursuing a maritime terrorism capability that includes diving and mining training.”38 As for how this might impact the economy, $220 billion in trade comes through the Seventh Fleet area of responsibility and 98% of the commerce is moved by sea. Just as ports can be viewed as a SPOF (single point of failure) within the maritime transport system, there are certain waterway chokepoints or heavily trafficked areas that can be viewed as a high-payoff target to a terrorist or result in catastrophic damage from a natural disaster.
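The 9/11 cost figures in the card above can be tallied quickly. The snippet below is a sanity check only, not part of the evidence; the dollar amounts come straight from the card, and the variable names are mine:

```python
# Sanity check on the 9/11 direct-cost figures quoted in the FAS card.
# All figures are in billions of dollars, taken directly from the evidence.
private_business = 14.0   # destroyed private-business assets
state_local = 1.5         # state and local government enterprises
federal = 0.7             # federal enterprises
rescue_cleanup = 11.0     # rescue, cleanup and related costs

total = private_business + state_local + federal + rescue_cleanup
# total comes to about 27.2, matching the card's
# "total direct cost of $27.2 billion"
```

The itemized components do sum to the card's stated total, so the figure can be defended as internally consistent if it is pressed in cross-ex.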
Extinction. Liu '18 Qian; 11/13/18; Managing Director of Greater China for The Economist Group, previously director of the global economics unit and director of Access China for the Economist Intelligence Unit, PhD in economics from Uppsala University; "The next economic crisis could cause a global conflict. Here's why," https://www.weforum.org/agenda/2018/11/the-next-economic-crisis-could-cause-a-global-conflict-heres-why/ Re-Cut SJWen The next economic crisis is closer than you think. But what you should really worry about is what comes after: in the current social, political, and technological landscape, a prolonged economic crisis, combined with rising income inequality, could well escalate into a major global military conflict. The 2008-09 global financial crisis almost bankrupted governments and caused systemic collapse. Policymakers managed to pull the global economy back from the brink, using massive monetary stimulus, including quantitative easing and near-zero (or even negative) interest rates. But monetary stimulus is like an adrenaline shot to jump-start an arrested heart; it can revive the patient, but it does nothing to cure the disease. Treating a sick economy requires structural reforms, which can cover everything from financial and labor markets to tax systems, fertility patterns, and education policies. Policymakers have utterly failed to pursue such reforms, despite promising to do so. Instead, they have remained preoccupied with politics. From Italy to Germany, forming and sustaining governments now seems to take more time than actual governing. And Greece, for example, has relied on money from international creditors to keep its head (barely) above water, rather than genuinely reforming its pension system or improving its business environment. The lack of structural reform has meant that the unprecedented excess liquidity that central banks injected into their economies was not allocated to its most efficient uses. 
Instead, it raised global asset prices to levels even higher than those prevailing before 2008. In the United States, housing prices are now 8% higher than they were at the peak of the property bubble in 2006, according to the property website Zillow. The cyclically adjusted price-to-earnings (CAPE) ratio, which measures whether stock-market prices are within a reasonable range, is now higher than it was both in 2008 and at the start of the Great Depression in 1929. As monetary tightening reveals the vulnerabilities in the real economy, the collapse of asset-price bubbles will trigger another economic crisis – one that could be even more severe than the last, because we have built up a tolerance to our strongest macroeconomic medications. A decade of regular adrenaline shots, in the form of ultra-low interest rates and unconventional monetary policies, has severely depleted their power to stabilize and stimulate the economy. If history is any guide, the consequences of this mistake could extend far beyond the economy. According to Harvard’s Benjamin Friedman, prolonged periods of economic distress have been characterized also by public antipathy toward minority groups or foreign countries – attitudes that can help to fuel unrest, terrorism, or even war. For example, during the Great Depression, US President Herbert Hoover signed the 1930 Smoot-Hawley Tariff Act, intended to protect American workers and farmers from foreign competition. In the subsequent five years, global trade shrank by two-thirds. Within a decade, World War II had begun. To be sure, WWII, like World War I, was caused by a multitude of factors; there is no standard path to war. But there is reason to believe that high levels of inequality can play a significant role in stoking conflict. According to research by the economist Thomas Piketty, a spike in income inequality is often followed by a great crisis. Income inequality then declines for a while, before rising again, until a new peak – and a new disaster.
Though causality has yet to be proven, given the limited number of data points, this correlation should not be taken lightly, especially with wealth and income inequality at historically high levels. This is all the more worrying in view of the numerous other factors stoking social unrest and diplomatic tension, including technological disruption, a record-breaking migration crisis, anxiety over globalization, political polarization, and rising nationalism. All are symptoms of failed policies that could turn out to be trigger points for a future crisis. Voters have good reason to be frustrated, but the emotionally appealing populists to whom they are increasingly giving their support are offering ill-advised solutions that will only make matters worse. For example, despite the world’s unprecedented interconnectedness, multilateralism is increasingly being eschewed, as countries – most notably, Donald Trump’s US – pursue unilateral, isolationist policies. Meanwhile, proxy wars are raging in Syria and Yemen. Against this background, we must take seriously the possibility that the next economic crisis could lead to a large-scale military confrontation. By the logic of the political scientist Samuel Huntington, considering such a scenario could help us avoid it, because it would force us to take action. In this case, the key will be for policymakers to pursue the structural reforms that they have long promised, while replacing finger-pointing and antagonism with a sensible and respectful global dialogue. The alternative may well be global conflagration.
2/19/22
ND - NC - Kant
Tournament: Apple Valley | Round: 1 | Opponent: Evergreen Valley Independent SE | Judge: Jeong-Wan Choi 3 Framework The meta-ethic is procedural moral realism. This entails that moral facts stem from procedures, while substantive realism holds that moral truths exist independently of any procedure in the empirical world. Prefer procedural realism – 1 Collapses – the only way to verify whether something is a moral fact is by using procedures to warrant it. 2 Uncertainty – our experiences are inaccessible to others, which allows people to say they don’t experience the same thing; however, a priori principles are universally applied to all agents. 3 Is/Ought Gap – we can only perceive what is, not what ought to be. It’s impossible to derive an ought statement from descriptive facts about the world, necessitating a priori premises. Regress – I can keep asking “why should I follow this,” which results in skep since obligations are predicated on ignorantly accepting rules. Only reason solves, since asking “why reason?” requires reason, which is self-justified. That means we must universally will maxims – any non-universalizable norm justifies someone’s ability to impede your ends. Thus, the standard is consistency with the categorical imperative. Prefer – 1 Performativity – freedom is key to the process of justifying arguments. Willing that we should abide by their ethical theory presupposes that we own ourselves in the first place. 2 All other frameworks collapse – non-Kantian theories source obligations in extrinsically good objects, but that presupposes the goodness of the rational will. 3 Necessity – my framework is inherent to the way we set ends. Ethics must be necessary and not contingent, since otherwise its claims could be escapable. Necessary truths outweigh on probability – if a necessary truth is possible, that means it’s true in a possible world, but that implies it’s true in all worlds since that’s what necessity is, so they have to prove there’s 0 risk of my framework.
4 TJFs – and they outweigh, since they preclude engagement on the framework layer. Prefer for resource disparities – our framework ensures big squads don’t have a comparative advantage, since debates become about quality of arguments rather than quantity; their model crowds out small schools because they have to prep for every unique advantage under each aff, every counterplan, and every disad with carded responses to each of them.
Offense 1 The process of striking uses patients or beneficiaries of work as a means to an end. Howard 20 Danielle Howard Mar 2020, "What Should Physicians Consider Prior to Unionizing?," Journal of Ethics | American Medical Association, https://journalofethics.ama-assn.org/article/what-should-physicians-consider-prior-unionizing/2020-03 LEX JB - Written in the context of doctors; the warrant applies to all jobs The possible disadvantage to patients highlights the crux of the moral issue of physician strikes. In Immanuel Kant’s Groundwork for the Metaphysics of Morals, one formulation of the categorical imperative is to “Act in such a way as to treat humanity, whether in your own person or in that of anyone else, always as an end and never merely as a means.”24 When patient care is leveraged by physicians during strikes, patients serve as a means to the union’s ends. Unless physicians act to improve everyone’s care, union action—if it jeopardizes the care of some hospitalized patients, for example—cannot be ethical. It is for this reason that, in the case of physicians looking to form a new union, the argument can be made that unionization should be used only as a last resort. Physician union members must be prepared to utilize collective action and accept its risks to patient care, but every effort should be made to avoid actions that risk harm to patients. 2 Going on strike isn’t universalizable – a) if everyone leaves work, then there will be no concept of a job; b) “everyone” includes the employer leaving as well, which is a contradiction in conception. 3 No aff offense – there is no unique obligation of the state to grant the ability to strike – if a workplace is coercive, you can use legal means or just find another job.
11/6/21
ND - Theory - Spec Jurisdiction
Tournament: Princeton | Round: 1 | Opponent: Ardrey Kell RG | Judge: Tara Riggs 1 Interp: The affirmative must specify the jurisdiction the right to strike is recognized within a delimited text in the 1AC. Jurisdiction is flexible and has too many interps – normal means shows no consensus. Leyton Garcia 17 Jorge Andrés Leyton García (Postgraduate Research Student / Assistant Teacher at University of Bristol). “THE RIGHT TO STRIKE AS A FUNDAMENTAL HUMAN RIGHT: RECOGNITION AND LIMITATIONS IN INTERNATIONAL LAW”. Revista Chilena de Derecho, vol. 44, núm. 3, 2017, pp. 781-804. Accessed 6/24/21. https://www.redalyc.org/pdf/1770/177054481008.pdfXu The field in which these pages will revolve is indeed complex and full of paradoxes. The right to strike has been recognized in diverse forms in different international and national legal systems. In some cases it has been expressly recognized in the text of conventions and treaties (European Social Charter), while in others the recognition has been achieved through the principled work of supervisory or jurisdictional bodies (like it has been the case in the ILO and the ECHR), not without difficulties and doubts, as we shall see in the following pages. The analysis that follows will show, however, that the form of recognition does not necessarily define the scope and extent of the right. 1.1. THE ILO Despite being the most important source of labor standards, there is no definition of the right to strike in any of the ILO binding instruments. The right to strike is not mentioned in the ILO Constitution or in the Declaration of Philadelphia, and Convention N°87 on Freedom of Association and Protection of the Right to Organise contains no specific reference to it. There is no textual recognition and no canonical definition in any of the Conventions and Recommendations that constitute the ILO’s body of norms.
Nevertheless, it is fair to say that throughout the history of the ILO there has been a wide consensus among its members regarding the existence of a right to strike which emanates from the dispositions of Convention N°87 as a fundamental aspect of Freedom of Association. As Janice Bellace has pointed out: “Over the past 60 years the ILO constituents have recognized that there is a positive right to strike that is inextricably linked to – and an inevitable corollary of – the right to freedom of association”3. Violation – you don’t. Prefer – 1 Stable Advocacy – they can redefine in the 1AR to wriggle out of DAs, which kills high-quality engagement. We lose access to Readiness DAs, Unions DAs, basic case turns, and core process counterplans that have different definitions and 1NC pre-round prep. 2 Ground – Policymakers will always define the entity that they are recognizing. Not defining hurts my strategy since they can shift out as I ask DA questions, so I err on the side of caution and read generics, which get destroyed by AC frontlines. 3 JSpec isn’t regressive or arbitrary – it’s core topic lit for what happens when the aff is implemented and cannot be discounted from recognition policies that require enforcement to function.
Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate. DTD – it’s key to norm-set and deter future abuse. Competing interps – reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation; it also collapses since brightlines operate on an offense-defense paradigm. No RVIs – A – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly. B – It’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments. Reject 1AR theory and voting issues – A – 7-6 time skew. B – No 3NR, so the 2AR gets to weigh however they want. C – Judges are more likely to buy 2AR arguments as they are the last speech.
12/4/21
SO - CP - Anonymous Donation
Tournament: Valley RR | Round: 1 | Opponent: Strake Jesuit KS | Judge: Spencer Orlowski, Brixz Gonzaba Text – the United States ought to - anonymously invest $25 billion into 25 production lines dedicated solely to COVID-19 vaccines to boost global vaccine production managed by the Biomedical Advanced Research and Development Authority. - distribute 8 billion doses of COVID vaccines using an equitable distribution framework prioritizing developing countries in the Global South. The CP solves the entirety of the case and does it faster. Stankiewicz 21 Mike Stankiewicz 5-6-2021"Opinion: For just $25 billion, the U.S. could jump-start a project to quickly vaccinate the entire world against COVID" https://www.marketwatch.com/story/for-just-25-billion-the-u-s-could-jump-start-a-project-to-quickly-vaccinate-the-entire-world-against-covid-11614898552 (a press officer in Public Citizen's communication's department, where he focuses on legislative policy and health-orientated advocacy)Elmer Despite wealthy countries such as the U.S. ramping up COVID-19 vaccination efforts, it still may take years to vaccinate the world, especially poorer countries, and the economic and humanitarian impacts could be devastating. But an injection of just $25 billion into global vaccine production efforts by the U.S. government could save millions of lives and help prevent economic disaster. The most up-to-date numbers paint incredibly different futures between wealthy and low-income countries. At the current rate of vaccination, analysts predict that developing countries, including almost all of Southeast Asia, may not reach meaningful vaccine coverage until 2023. Comparatively, President Joe Biden has promised that the U.S. will have enough vaccine doses to inoculate every adult within the next three months. Increased fatalities And as wealthy countries such as the U.S. 
are starting to see lower death, transmission and hospitalization rates, low-income countries are experiencing increased hardship and fatalities. Countries such as Hungary are being forced to tighten restrictions as infection rates increase, and deaths in Africa have spiked by 40% in the past month, according to the World Health Organization (WHO). No country can be left behind in this global pandemic, and the U.S. is in a unique position to make sure every country gets the ample amount of vaccines they need. Public Citizen research has found that just a $25 billion investment in COVID-19 vaccine production by the U.S. government would produce enough vaccine for developing countries, potentially shaving years from the global pandemic. Public Citizen estimates that 8 billion doses of the National Institutes of Health-Moderna vaccine can be produced for just over $3 per dose. To bolster production and supply the necessary 8 billion doses, it would take $1.9 billion to fund the necessary 25 production lines. Another $19 billion would pay for materials and labor, and $3 billion would compensate Moderna for making technology available to manufacturers in other countries. An additional $500 million would cover costs to staff and run a rapid-response federal program that provides technical assistance and facilitates technology transfer to manufacturers and works with the WHO’s technology hub. In total, vaccinating the world would cost less than 1.4% of the total of Biden’s $1.9 trillion COVID relief plan. But such a program also needs to be properly managed to be successful. To help facilitate these efforts, the Biden administration should also designate the government’s Biomedical Advanced Research and Development Authority (BARDA) to lead the world-wide vaccine manufacturing effort.
BARDA has the necessary experience to coordinate an initiative of this scale with the WHO, building on its partnership to build pandemic flu manufacturing capacity in developing countries after the bird-flu scare of 2006. Widespread vaccines would help U.S. economy These efforts would dramatically increase access to vaccines in developing countries and speed up global vaccination by years, saving countless lives. But allowing the current vaccine supply crisis to continue is not just inhumane, it is also not in our own economic interest to do so.
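As a quick check on the Public Citizen budget quoted in the Stankiewicz card, the itemized costs can be tallied. This is my own tally, not part of the evidence; the dollar amounts come from the card and the labels are mine:

```python
# Tally of the itemized costs in the Stankiewicz card, in billions of dollars.
production_lines = 1.9   # 25 dedicated vaccine production lines
materials_labor = 19.0   # materials and labor
moderna_tech = 3.0       # compensation to Moderna for technology transfer
federal_program = 0.5    # rapid-response federal program (BARDA-led)

total = production_lines + materials_labor + moderna_tech + federal_program
per_dose = total / 8.0   # spread across 8 billion doses
# total comes to roughly 24.4, i.e. just under the $25 billion headline ask,
# and per_dose to about 3.05, consistent with the card's "just over $3 per dose"
```

Useful for answering cost-based presses: the components sum slightly under the headline $25 billion, and the implied per-dose cost matches the card's own claim.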
9/24/21
SO - CP - Ban Nukes
Tournament: Jack Howe | Round: 2 | Opponent: Brentwood BB | Judge: Vanessa Nguyen 6 Counterplan Text – States ought to eliminate their nuclear arsenals – solves their terminal impact of nuclear war on the advantage
9/18/21
SO - CP - Consult WHO
Tournament: Jack Howe | Round: 3 | Opponent: Ayala AM | Judge: Srinidhi Yerraguntala cites broken - check open source
9/19/21
SO - CP - Eliminate Nukes
Tournament: Voices | Round: 5 | Opponent: Prospect ST | Judge: Vishan Chaudhary 7 Counterplan Text – States ought to eliminate their nuclear arsenals – solves adv 1 – their only terminal impact is nuke war.
10/9/21
SO - CP - Gene Editing Regulation
Tournament: Voices | Round: 4 | Opponent: Immaculate Heart RR | Judge: Quentin Clark 3 Text – Member nations of the World Trade Organization ought to - Maintain patent protection over Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR). - Purchase CRISPR patents and distribute associated technology developed from CRISPR research. - Intervene and force licenses if market prices of CRISPR technology are too high. - Create an advisory committee for gene-editing patents to review and regulate CRISPR patents in accordance with 1NC Parthasarathy. - Ensure transparency and certainty in the patent-filing and application process. - Invest in university-level research on genome patenting through licenses. - Officially change and announce the WTO stance and rulings on genomic editing to allow patent protection on genomic editing. Patents are necessary to guide what functions and uses CRISPR takes – state control and regulation are key to actively guide development paths. Parthasarathy 18 Shobita Parthasarathy 10-23-2018 "Use the patent system to regulate gene editing" https://www.nature.com/articles/d41586-018-07108-3 (professor and director of the Science, Technology, and Public Policy Program at the Gerald R. Ford School of Public Policy, University of Michigan, Ann Arbor, Michigan, USA.)Elmer Next month, researchers, policymakers, ethicists and social scientists will meet in Hong Kong for the second International Summit on Human Gene Editing. Since the first summit, held in Washington DC nearly three years ago, researchers have continued to apply the versatile gene-editing technology CRISPR–Cas9 to diverse domains — from crop enhancement and pest eradication to human disease. Many have flagged the ethical, economic and environmental concerns raised by manipulating plant and animal genomes, including our own. But, so far, governments have struggled to develop viable approaches to regulation.
A crucial part of the arsenal for shaping the future of gene editing is hiding in plain sight: the patent system. In the past, patents have played an important part in regulating new technologies and research, from the atom bomb to work involving human embryonic stem cells. Some organizations and individual researchers using CRISPR–Cas9 are already creating licensing agreements that reflect their own moral codes. In my view, government-driven efforts centred on national patent systems should be deployed to help regulate gene editing. New laws needed Last year, the US National Academies of Science, Engineering, and Medicine recommended that clinical trials involving gene editing in human eggs, sperm or embryos should be permitted only for the treatment and prevention of serious disease or disability. They also urged that a “stringent oversight” system be developed to limit the use of the technology in this context1. In July, the Nuffield Council on Bioethics, a highly respected bioethics body in the United Kingdom, similarly stated that the use of heritable genome editing “could be ethically acceptable” only after appropriate governance measures are put in place2. These recommendations haven’t yet translated into legal frameworks or formal governance structures. And the history of regulating emerging biotechnologies suggests that such laws could be a long time coming, if they end up being formed at all3. For now, when it comes to editing genes in humans and other organisms, the United States and the United Kingdom — along with many other countries — rely on laws and policies that cover existing genetic-engineering technologies. Or, as in the case of human germline editing in the United States, the government simply bans the use of federal funds for such research. Such policies have been criticized for decades as being inadequate4. 
Their insufficiencies are considerably more problematic in the context of gene editing, which, largely thanks to the development and uptake of CRISPR–Cas9 (see ‘Invention protection’), promises to have much greater societal impact than previous technologies for modifying genomes. In the United States, for instance, the oversight provided by the ‘coordinated framework’ (developed in the 1980s to deal with genetically engineered organisms) handles only immediate risks. The framework covers the management of altered plants and animals that have already been created, and does not consider the socio-economic, ecological or ethical consequences of creating organisms not found in nature. Likewise, since 2016, a condition in the budget for the US Food and Drug Administration has prohibited the agency from authorizing clinical trials in which a “human embryo is intentionally created or modified to include a heritable genetic modification”. But there is nothing to stop US researchers using private funds to edit the genes of human embryos in the lab. Historical precedents How could patents help? These legal instruments — which give inventors the right to prevent others from commercializing their technologies — are usually seen solely as contracts that incentivize innovation. In fact, they can do much more, directly and indirectly. They can lead to higher prices for products, for instance, and reduce people’s access to important technologies if inventors use them to establish and maintain monopolies. Perhaps most importantly, they can shape innovation trajectories. Patent laws were a major factor in the ‘war of the currents’ in the 1880s, driving people to favour engineer George Westinghouse’s alternating current (AC) over the direct-current system invented by Thomas Edison. (Westinghouse licensed the US patents for AC from inventor Nikola Tesla.) 
The decisions that governments make about whether to grant patents implicitly demonstrate their moral approval of an invention and indicate what types of technology are likely to generate exclusive markets5. The idea that governments could use patent systems to shape both the development of a technology and its impact on society is not new. In the 1940s, the US Congress used the patent system to control the development and commercialization of atomic weaponry. To try to reduce the possibility of private actors developing atomic bombs, or of US intelligence leaking, Congress created a three-tier system of non-patentable, government patentable and privately patentable technologies in the Atomic Energy Act of 1954.6 The US Patent and Trademark Office offered standard patents for technologies that fell into the ‘privately patentable’ category. But inventions that would be useful only in the production of fissionable material, or when using such material or atomic energy in a military weapon, were non-patentable. The government (specifically, the Atomic Energy Commission) could also step in and require ‘compulsory licenses’ for technologies deemed to be in the public interest. Even further back, in the nineteenth century, the governments of several European countries, including France, Switzerland and Italy, limited or even banned patents on foods and pharmaceuticals to ensure that people had sufficient access to these products7. Existing frameworks Biotechnology, including gene editing, is already regulated to some degree through patents. In 1998, the European Parliament and Council passed a directive on the legal protection of biotechnological inventions. This harmonized Europe’s approach to patents in the emerging field; it covers all European Union countries and the 38 member countries of the European Patent Office.
It also addressed people’s concerns about the moral and socio-economic implications of individuals being able to obtain patents on living entities, such as human embryos or genetically engineered plants and animals. The directive states that governments can grant patents on animals that have been modified only if the resulting benefit to humankind outweighs the animal’s suffering. It even includes prohibitions on patenting processes that could be used to modify human sperm, eggs or embryos. Moreover, some scientific organizations and researchers who use CRISPR–Cas9 have themselves recognized the power of patents to govern gene editing, and are writing their own licensing agreements. For example, the Broad Institute of MIT and Harvard, in Cambridge, Massachusetts, is a non-profit research institution that holds expansive patents on CRISPR–Cas9 technology. It prohibits its licensees from using CRISPR–Cas9 to modify human embryos, alter ecosystems or modify tobacco plants8. Similarly, Kevin Esvelt at the Massachusetts Institute of Technology (MIT), also in Cambridge, holds a patent on a ‘gene drive’ that could be used to spread a particular genomic alteration throughout an animal population. He requires those who wish to license this patent to disclose their proposed use, and has suggested that other scientists working on gene drives do the same. He argues that this will enable public discussion9. What I’m calling for, however, is different: more-formal, comprehensive, government-driven regulation using the patent system. This would cover all domains of gene editing, not just certain areas of research. It would have more transparency and political legitimacy than individual efforts ever could, by involving government institutions that are explicitly charged with representing the public interest. And it would enable governments to exploit the unique vantage point that patent offices have on the early stages of scientific fields and industries.
(Inventors usually file patent applications before they try to get regulatory approval for new technologies.) In the United States, Congress could authorize a working group to convene an advisory committee for gene-editing patents. The working group could include: individuals from the Environmental Protection Agency, who are trained in assessing ecosystem impacts; staff from the Department of Commerce, which oversees the US Patent and Trademark Office; personnel from the Department of Health and Human Services, who have deep understanding of biomedical research, health-care costs and research ethics; and staff from the Government Accountability Office, which in the past few years has developed expertise in technology assessment. The advisory committee should also comprise scientists, physicians, ethicists, social scientists, historians, lawyers and representatives from the private sector. Building on existing laws such as the 1954 Atomic Energy Act, the committee could put together a regulatory framework for reviewing and awarding patents related to gene editing. It would need to incorporate the perspectives of citizens at every step, and might place inventions into distinct categories. Perhaps the use of CRISPR–Cas9 for editing human embryos would not receive patent protection, for instance, whereas the use of the technology to correct a common mutation that causes heart failure would. Under such a framework, the committee could identify inventions that are likely to be so important to the public interest that the government should monitor closely how associated patents are used and licensed, and step in to force broad licensing if a patent holder charges too high a price for access to their invention. (Currently, the 1980 Bayh–Dole Act gives the US government ‘march-in’ rights in the case of taxpayer-funded research, although it has never been used in this way.)
The EU directive on the legal protection of biotechnological inventions already provides Europe with some guidance on which gene-editing processes and products to exclude from patentability. But 20 years on, additional oversight is needed. To develop a more detailed governance framework, the European Patent Office should convene an advisory committee similar to the one proposed for the United States. The resulting framework could then be adopted by the European Patent Office and EU member countries. Ultimately, patent law will need to be just one of many regulatory schemes. Some developers might still create and use ethically problematic technology, even if they are unable to patent it. But existing approaches, and the entities that are conventionally tasked with overseeing areas of scientific research, seem ill-equipped to address complex societal and value-based concerns in an increasingly privatized world. Patents, which affect the thousands of investigators now using CRISPR–Cas9 in both the private and public sector, should be part of the mix.
Absent government regulation, gene editing is dominated by the market, which reduces genetic diversity – that causes Extinction.
Wolfe 9 Christian Wolfe 7-27-2009 “Human Genetic Diversity and the Threat to the Survivability of Human Populations” https://www.ohio.edu/ethics/2003-conferences/human-genetic-diversity-and-the-threat-to-the-survivability-of-human-populations/ (Associate Editor for American Association of Inside Sales Professionals) re-cut by Elmer
Through advances in reproductive technologies humans will eventually have the ability to utilize nearly fully artificial selection on human populations. These technologies raise many ethical and theological concerns. I will address one of the pragmatic ethical concerns, the potential loss of genetic diversity.
Genetic diversity has a direct relation to the fitness and survivability of various species and populations; as genetic diversity decreases within a population, so does the fitness and survivability of that population. An examination of the genetic diversity argument (GDA) reveals that there is not strongly persuasive evidence regarding the effects on genetic diversity of the reproductive technologies on human populations. The only method available to produce the required evidence is through a very complex form of human experimentation. The type of human experiment that would produce the evidence is incompatible with present ethical codes of conduct. Therefore, any implementation of these technologies on human populations should be banned. There are many emerging technologies that could potentially affect genetic diversity. These include genetic testing and screening, selective breeding, population control, sterilization, selective abortion, embryo testing and selection, sperm donation, egg donation, embryo donation, surrogate pregnancy, fertility drugs, contraception, cloning embryos, and germ line or somatic cell manipulation (Resnik 2000, 454). Each of these reproductive technologies affects the composition of the human gene pool by increasing or decreasing the frequency of different genotypes or combinations of genotypes (Resnik 2000, 454). The germ-cell line, or just germ-line, constitutes a cell line through which genes are passed from generation to generation (World of Genetics 322). Germ-line therapy is often differentiated from somatic cell therapy, which is the alteration of non-reproductive cells. This distinction is not as clear as much of the literature supposes, but the problems with the germ-line/somatic cell distinction are beyond the scope of this paper. 
The focus of this paper includes the screening of embryos with the possibility of destruction of certain embryos, the modification of DNA (deoxyribonucleic acid) of early stage embryos through in-vitro fertilization (IVF), and the modification of parent gametes (Zimmerman 594-5). These technologies pose the clearest threat to genetic diversity of human populations. Genetic testing and screening examines the genetic information contained in a person’s cells to determine whether that person has or will develop a certain disease, is more susceptible to certain environmental risks, or could pass a disease on to his or her offspring (World 305). Parents could subject themselves to testing to determine whether or not to reproduce based on the likelihood of their potential children inheriting their genetic maladies. Also, embryos can be subjected to testing and screening to determine the likelihood that the future individual will develop a genetic disease. From that information, parents can decide to destroy the embryo, alter the embryo, or leave the embryo unmodified and risk that the child will develop a genetic disease. Germ-line gene therapy (GLGT) is germ-line manipulation on the genetic level in order to prevent genetic diseases in future persons (Richter and Bacchetta 304). The goal of GLGT is to treat human diseases by correcting the genetic defects that underlie the genetic disorders (Anderson and Friedmann 907). Therapy presents an alternative to destroying embryos likely to develop genetic disease by actually correcting genetic defects. Also available is the alteration of parent gametes in order to eliminate the possibility of passing on genetic disease to their offspring. GLGT allows for the alteration of either the early stage embryo or the parent gametes to prevent genetic disease. 
By either eliminating those genotypes that are likely to produce genetic disease or by altering the genome to actually prevent the genetic disease from developing, these technologies have great potential to affect the genetic diversity of a population. Genetic diversity is the variety and frequency of different genotypes or combinations of different genotypes within a population. A population is a geographically, socially, or culturally linked group whose reproductive decisions affect those within the group. Genetic diversity is measured by genetic variability, which diminishes in a population when the number of different phenotypes or the number of different combinations of genotypes decreases. Since populations are composed of individuals that carry genotypes, individual reproductive outcomes affect the genetic variability within specific populations (Resnik 2000, 452). Genetic diversity provides the resource for phenotypic variation that is integral in determining the rate of evolutionary change in an environment. A population that lacks genetic diversity will be poorly equipped to meet environmental changes and demands (Resnik 2000, 452). The importance of genetic diversity is undeniable; the survivability of a population is directly related to genetic diversity. While genetic diversity has no intrinsic value, genetic diversity has a clear instrumental value. Humans place positive value in genetic diversity as it promotes the extrinsic value of survivability. There is an ethical duty to prevent decreases in the genetic diversity of populations because of its importance in the survivability of those populations. Decreases in genetic diversity in populations are ethically undesirable because actions that reduce the survivability of the population are unethical. 
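The Wolfe card defines genetic diversity as the variety and frequency of genotypes in a population and says it is "measured by genetic variability," but gives no formula. One standard population-genetics statistic that captures the idea (not named in the card; included here purely as an illustrative sketch) is expected heterozygosity, H = 1 − Σ p_i², the probability that two genotypes drawn at random from the population differ:

```python
def expected_heterozygosity(counts):
    """Expected heterozygosity H = 1 - sum(p_i ** 2), where p_i is the
    frequency of genotype i. H is 0 for a uniform population and
    approaches 1 as diversity increases."""
    total = sum(counts)
    if total == 0:
        raise ValueError("empty population")
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Four genotypes at equal frequency: relatively diverse.
diverse = expected_heterozygosity([25, 25, 25, 25])   # 0.75
# One genotype dominates: diversity collapses.
narrow = expected_heterozygosity([97, 1, 1, 1])       # ~0.059
assert diverse > narrow
```

On this measure, a population spread evenly across many genotypes scores high, while one dominated by a single genotype scores near zero — matching the card's claim that variability diminishes as the number of different combinations of genotypes decreases.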
The genetic diversity argument (GDA) starts from the fact that scientific and technological developments in the realm of genetics and human reproduction will greatly affect the genetic diversity of human populations. There are both pessimistic and optimistic versions of the argument. I will briefly describe both versions of the GDA. The pessimistic version of the argument contends that the increased ability to control human reproduction will result in a loss of genetic diversity that will threaten the health and survivability of human populations (Resnik 2000, 451). This threat to health and survivability is due to a decrease in the populations’ ability to adapt to environmental changes and demands. In effect, these technologies have the potential to make the pool of available phenotypic traits limited enough so that human populations will not be able to respond to changes in environmental demand. This version of the GDA warns that germ-line altering reproductive technologies will reduce populations’ gene pools and eliminate potentially useful genes. Genetic diversity provides a resource of these useful genes. Evolutionary change is blind and has no way to know which genes are useful, therefore it is potentially damaging to population survivability to eliminate genes of any sort.
As Glenn McGee notes, “The point of the GDA is that human beings also have no way of knowing which genes will be useful in the future or in different environments” (cited in Resnik 2000, 456). For instance, genetically homogenous populations of corn face problems with blight due to lack of genetic diversity. Although human populations have an ever-increasing level of control over the environment, the pessimistic response still turns on the inability to determine which genes will be useful in the future. The optimistic version of the genetic diversity argument contends that these reproductive technologies could lead to increases in human health and survivability resulting in an improvement of the well being of populations (Resnik 2000, 457). The basis for this response rests on the historical fact that advances in technology increase humans’ ability to control nature. The ability to control nature often leads to positive changes in the adaptability and survivability of human populations. The optimistic GDA relies on this historical fact and the seemingly obvious inference that the above technologies will increase the ability to affect the genetic diversity of human populations (Resnik 2000, 457). A commonly cited example of how genetic diversity can be increased with the implementation of such technologies is the incredible diversity of canines. Of course, there are important dissimilarities such as the explicit intention to increase phenotypic diversity. A major factor in whether these reproductive technologies will increase or decrease genetic diversity is what model they are implemented under, free market or state control. Each model addresses the concerns and motivations of those affected differently. The free market model is based upon the reproductive decisions of a diverse group of potential parents with separate interests, motivations, and means. The free market is the method by which many consumer decisions are made in the United States. 
This model is fundamentally based on the interaction between supply and demand. If a market demands diversity of a product, then the market will often supply the desired diversity. If the market demands the standardization of goods, such as building supplies, then that homogeneity is likely to be supplied. Also, markets create new preferences and demands by introducing new goods and services to the market. Most often, advancements in technology increase market variability, except of course if that development results in the formation of a monopoly. The diversity of goods in the free market system of America seemingly justifies the inference that a free market model for reproductive technologies would lead to increases, not decreases, in the genetic diversity of human populations. Both J. Glover and W. Gardner’s individual studies conclude, “Increases in our ability to control human reproduction will result in more genetic diversity in the human population because parents will have a variety of preferences and values that they can use in selecting offspring” (cited in Resnik 2000, 458). Just as technological advancements have increased the availability of diverse consumer products, germ-line altering technologies could increase the available options in reproduction and therefore increase the diversity of human populations. Nevertheless, confounding factors such as the homogeneity of desirable characteristics make the above inference much more dubious than it first appears. The major problem with the free market model is the potential emergence of the homogeneity of desirable characteristics. Many characteristics such as intelligence, athleticism, and health, are almost universally accepted as desirable. Other characteristics such as height, eye color, and hair color, also have particular value attached to them. Genetic homogeneity could arise if the consumers of reproductive technologies have similar preferences for traits.
As Resnik states, “If most people want tall, intelligent, healthy children with blonde hair and blue eyes, then parental choices could produce a phenotypically and genetically homogeneous population” (2000, 459). This problem is only exacerbated when one considers the phenomenon of fads. Societal pressures and obligations may also produce conformity. While these social effects may not take hold immediately, it seems possible, if not probable, that these pressures would eventually affect reproductive decisions. Genetic homogeneity may be an unintended consequence of a population sharing common values (Resnik 2000, 459). If most people within a population have similar characteristic preferences and a desire to conform, genetic homogeneity is almost inevitable. Of course much of this line of reasoning depends on genetic determinism, which is incredibly naïve and misinformed. Environmental factors often play a decisive role in which phenotypes are displayed. If certain desirable traits, such as intelligence or health, were strongly linked to environmental factors regardless of genotype, then the inference from individual choices to phenotypic characteristics would be dramatically weakened (Resnik 2000, 465). On the other hand, if certain genes or series of genes are linked to a trait, and that genotype is most frequently selected, it would still pose the potential threat of a genetically homogeneous population, although not phenotypically homogeneous.
Genetic Diversity outweighs – hurts resilience to shocks, which allows exogenous factors to cause Extinction – specifically turning disease.
Becker 17 Rachel Becker 10-9-2017 “Sex, disease, and extinction: what ancient DNA tells us about humans and Neanderthals” https://www.theverge.com/2017/10/9/16448412/neanderthal-stone-age-human-genes-dna-schizophrenia-cholesterol-hair-skin-loneliness (Verge Contributor) Elmer
The findings help explain what exactly Neanderthal DNA is doing in many modern human genomes, and how it affects our health. Piecing together the sex lives of our human ancestors may also help us understand how and when these genes were exchanged. All together, the three studies — published in various journals last week — contribute key clues to the mystery of why humans survived to populate the globe, even as our close cousins, the Neanderthals, died out. Modern humans, or Homo sapiens, and Neanderthals shared a common ancestor roughly half a million years ago. They then split and evolved in parallel: humans in Africa, and Neanderthals on the Eurasian continent. When humans finally ventured to Eurasia, they had sex with Neanderthals, swapping DNA around. Today, people who aren’t of African descent owe roughly 2 percent of their DNA to their Neanderthal ancestors. “The first question that anyone ever asks is ‘Well, what does it do?’” says Janet Kelso, a bioinformatician who studies genome evolution at the Max Planck Institute in Germany. Previous studies have linked Neanderthal DNA to a big range of health conditions in modern-day people, including depression, nicotine addiction, and skin disorders. But it’s not all bad: understanding which stretches of Neanderthal DNA stuck around might also help scientists tease apart which traits might have helped ancient humans survive in Eurasia, like changes to skin and hair, or resistance to certain diseases. There’s also another mystery to solve: Neanderthals went extinct about 40,000 years ago, while Homo sapiens did not. Why?
There are a lot of theories, including that alliances between modern humans and dogs helped humans hunt food better, essentially starving Neanderthals out of Europe. Or, humans might have reproduced faster than Neanderthals, multiplying and edging them out. “It’s still one of those unsolved and really interesting questions,” says Martin Sikora, a geneticist at the University of Copenhagen. “Were we more successful because we had better technology, or was it just a consequence of pure numbers?” To piece the story together, scientists are searching for more Neanderthal genomes locked in ancient bones, and for more Neanderthal DNA hiding in present-day genomes. The studies published last week have uncovered both.
A NEW ANCIENT NEANDERTHAL GENOME
The first study, published in Science, describes a bone fragment called Vindija 33.19, which was found in a Croatian cave of the same name in the 1980s. Now, researchers have finally been able to sequence the DNA locked inside, discovering it belonged to a female Neanderthal who lived 52,000 years ago. Researchers found that the Vindija Neanderthal was very similar genetically to another Neanderthal who died about 122,000 years ago in the Altai mountains of Siberia (dubbed the Altai Neanderthal). The fact that two Neanderthals separated by more than 3,700 miles and 70,000 years were so similar suggests that Neanderthal communities were tiny, with very little genetic diversity. “It’s quite amazing when you think about it,” says study author Kay Pruefer, at the Max Planck Institute. “They are really so closely related that you cannot find any two people on this planet that are this close.” That could support the theory that Neanderthals’ low genetic diversity may have contributed to their extinction. Genetic diversity forms the basis for natural selection. If everyone in a population had the exact same versions of the same genes, then one plague or one hard winter could wipe everyone out.
And then there’d be no survivors to pass on the genes that would give their offspring a chance to survive the next plague or harsh winter. Incest can also lead to genetic abnormalities: the Altai Neanderthal was the daughter of two half-siblings, and while the Vindija Neanderthal’s parents weren’t related, they were very, very genetically similar.
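The mechanism Becker describes — "if everyone in a population had the exact same versions of the same genes, then one plague or one hard winter could wipe everyone out" — can be sketched as a toy simulation. All numbers and allele labels below are hypothetical, chosen only to illustrate the logic:

```python
import random

def survivors_after_shock(population, resistant_alleles):
    """Keep only the members of `population` (a list of allele labels)
    carrying an allele that confers resistance to the shock."""
    return [a for a in population if a in resistant_alleles]

random.seed(0)  # deterministic toy example
alleles = ["A", "B", "C", "D"]
diverse = [random.choice(alleles) for _ in range(1000)]
homogeneous = ["A"] * 1000

# A shock that only carriers of allele "B" survive: the diverse
# population retains survivors to rebuild; the homogeneous one is
# wiped out in a single event, as the card argues.
assert len(survivors_after_shock(diverse, {"B"})) > 0
assert len(survivors_after_shock(homogeneous, {"B"})) == 0
```

The point is not the specific numbers but the asymmetry: a diverse population almost always retains some resistant survivors after a shock, while a uniform one risks total loss from one plague or one hard winter.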
10/9/21
SO - CP - Indian Nukes
Tournament: Valley RR | Round: 4 | Opponent: Lexington BF | Judge: Keshav Dandu, Triniti Krauss
Counterplan Text – The Republic of India ought to eliminate its nuclear arsenals – solves the second advantage
9/25/21
SO - CP - SCOTUS
Tournament: Valley RR | Round: 4 | Opponent: Lexington BF | Judge: Keshav Dandu, Triniti Krauss
States except the United States ought to reduce intellectual property protections for medicines through an IP waiver (to clarify, they do the aff). The United States Federal Judiciary ought to rule that not reducing intellectual property protections for medicines by implementing a one-and-done approach for patent protection is unconstitutional.
Solves and they can do it – empirical influence over medicine
Capone 20 Connie Capone, writer for MDLinx, September 3, 2020. “Court rulings that changed medicine” https://www.mdlinx.com/article/court-rulings-that-changed-medicine/147FEf8WGxGdBQI4b8HG7u Accessed 8/27 gord0
What happens when technology firms, insurance companies, healthcare systems, and even the US government encroach on medical practice? In short, the courts get involved. Court decisions have frequently ruled on medical ethics and shaped healthcare policy. Landmark Supreme Court cases and lower court rulings have set the tone on medical ethics and shaped healthcare policy. Here are five such cases that made their mark on medicine. Vizzoni v. Mulford-Dera, 2019 In this case, the Superior Court of New Jersey Appellate Division upheld a trial court decision to dismiss a malpractice lawsuit after the family of a New Jersey woman who was killed during a car-bicycle accident sued the driver’s psychiatrist for medical negligence. The psychiatrist had been treating the driver, Barbara Mulford-Dera, for psychological conditions, and when Mulford-Dera struck and killed the cyclist, she had been taking a prescription medication that she allegedly did not know made it dangerous to drive. The bicyclist’s family maintained that the psychiatrist should have disclosed the potentially harmful effects of driving while under the influence of the prescribed psychotropic medication. But the trial court dismissed the case, ruling that it was not medical negligence.
In an amicus brief, the American Medical Association warned that expanding physician legal obligations to the general public would have profound negative implications for medical professionals. State of Washington v. US Department of Health and Human Services, 2019 In this case, a federal judge in Washington issued a nationwide injunction blocking a series of proposed abortion restrictions. The restrictions, issued by the Trump administration, would have barred federally funded family planning facilities from advising or assisting patients seeking an abortion. Facilities backed by federal funding under the Title X program, including Planned Parenthood, were already prohibited from using those funds to perform abortions, but under this so-called “gag rule,” they would no longer be able to say or do anything to assist patients who were seeking an abortion, including referring them for abortion procedures. The rule was promulgated in March 2019 by the Department of Health and Human Services, and blocked by a federal judge the following month. In support of the injunction against the proposed plan, Washington state Attorney General Bob Ferguson said that it “ensures that clinics across the nation can remain open and continue to provide quality, unbiased healthcare to women.” National Federation of Independent Business v. Sebelius, 2012 In a Supreme Court ruling, a key provision in the Affordable Care Act (ACA), passed by Congress in 2010, was upheld. The ACA, created during the Obama administration, contained an individual mandate that required all Americans to buy health insurance or pay a tax penalty. It also required states to expand their Medicaid programs or risk losing federal funding. The court upheld the individual mandate on American citizens but rejected the provision to withhold federal funding from states that didn’t expand Medicaid, ruling that state participation in the program would be voluntary. 
“The Affordable Care Act’s requirement that certain individuals pay a financial penalty for not obtaining health insurance may reasonably be characterized as a tax,” Chief Justice John Roberts wrote in the ruling. “Because the Constitution permits such a tax, it is not our role to forbid it, or to pass upon its wisdom or fairness.”
9/25/21
SO - CP - Single Payer
Tournament: Jack Howe | Round: 2 | Opponent: Brentwood BB | Judge: Vanessa Nguyen
Text – Member states of the World Trade Organization ought to establish single-payer national health insurance individually domestically after September 25th, 2021
Single Payer solves High Drug Prices.
Rotolo 19 Shannon Rotolo 11-18-2019 "Letters: ‘Medicare for All’ would drive down drug costs" https://www.chicagotribune.com/opinion/letters/ct-letters-vp-111819-20191118-3q6k5toz6fgmvafspzylpbs2ca-story.html (pharmacist and member of the Illinois Single-Payer Coalition, Chicago) Elmer
In 2017, a study found that more than 15% of people living in the United States went without a needed medication because of its cost. This is significantly higher than the nonadherence in a majority of European countries. While there are multiple bills at the state and federal level aimed at reducing drug prices for single classes of drugs, such as insulin, or targeting high-cost drugs as a category, none of these bills has the potential to make the same impact as a switch to a single-payer system, commonly known as “Medicare for All.” Creation of a single-payer system has the ability to drive down drug prices by consolidating negotiating power. This is something we’ve been told pharmacy benefit managers (PBMs), middlemen in our current system, could achieve. But despite their presence, drug costs have continued to skyrocket. A single-payer system, on the other hand, is projected to reduce brand name drug prices by about 50%. These changes in average wholesale price (AWP) or any other price measures used by the industry or in retail pharmacies aren’t necessarily tied to the copay you see at the pharmacy counter, though. Medicare for All would address that piece as well, with no copays or deductibles in one proposed version, and a maximum of $200 per year on prescriptions in the other. Another unique advantage of Medicare for All is that it would restore patient choice in pharmacy.
Private insurance and PBMs ensure greater profits for themselves by restricting choice, driving prescriptions to the chains they own. When they do permit patients to use alternative pharmacies, the reimbursement to those small businesses can be so low that prescriptions are often filled at a loss. The end result is the pharmacy deserts we see on the South and West sides of Chicago, and closing of independent pharmacies in the Chicago area in general. Medication only helps if you can take it, and you can only take it if you can afford it. Everyone deserves to get the medication they need from a pharmacy they trust. I encourage everyone who takes medication or loves someone who takes medication to learn more about Medicare for All and to support the candidates who will fight for it.
9/18/21
SO - CP - TRIPS Info-Sharing
Tournament: Voices | Round: 2 | Opponent: Notre Dame AR | Judge: Felicity Park
The World Trade Organization ought to increase intellectual property protections for insert aff’s medicine. The United States ought to designate intellectual property protections on insert aff’s medicine as adversely affecting the international transfer of technology.
Member states can waive IP rights if they hamper the international flow of medical technology.
WTO ’21 (World Trade Organization; 2021; “Obligations and exceptions”; World Trade Organization; Accessed: 8-30-2021; exact date not provided, but copyright was updated in 2021)
Article 8 Principles … 2. Appropriate measures, provided that they are consistent with the provisions of this Agreement, may be needed to prevent the abuse of intellectual property rights by right holders or the resort to practices which unreasonably restrain trade or adversely affect the international transfer of technology. SECTION 8: CONTROL OF ANTI-COMPETITIVE PRACTICES IN CONTRACTUAL LICENCES Article 40 1. Members agree that some licensing practices or conditions pertaining to intellectual property rights which restrain competition may have adverse effects on trade and may impede the transfer and dissemination of technology. 2. Nothing in this Agreement shall prevent Members from specifying in their legislation licensing practices or conditions that may in particular cases constitute an abuse of intellectual property rights having an adverse effect on competition in the relevant market. As provided above, a Member may adopt, consistently with the other provisions of this Agreement, appropriate measures to prevent or control such practices, which may include for example exclusive grantback conditions, conditions preventing challenges to validity and coercive package licensing, in the light of the relevant laws and regulations of that Member. …
Designating IP protections as antithetical to the global health system revitalizes info-sharing.
Youde ’16 (Jeremy; writer for World Politics Review; 4-29-2016; “Technology Transfer Is a Weak Link in the Global Health System”; World Politics Review; https://www.worldpoliticsreview.com/articles/18639/technology-transfer-is-a-weak-link-in-the-global-health-system; Accessed: 8-30-2021)
In mid-April, a spokesperson for the Ugandan government admitted that the country’s only functioning cancer treatment machine had broken earlier that month. The radiotherapy machine, donated by China to Uganda in 1995 and housed at Mulago Hospital in Kampala, is now considered beyond repair. While the government did acquire a second radiotherapy machine in 2013, it has not been operational because of delays in allocating 30 billion shillings—just shy of $9 million—to construct a new building to house it. The funding delay has lifted, but the machine won’t be up and running for at least six months. The government has announced plans to airlift some cancer patients to Nairobi for treatment, but that plan will only accommodate 400 of the estimated 17,000 to 33,000 cancer patients who need treatment annually in Uganda. This breakdown of technology is a human tragedy for the cancer patients from Uganda as well as elsewhere in East Africa that the radiotherapy machine helped treat. Beyond the personal level, though, the episode illustrates a larger shortcoming in global health. Total annual development assistance for health is approximately $36 billion, but that funding is overwhelmingly concentrated on specific infectious diseases. Noncommunicable diseases like cancer receive relatively little international funding—only 1.3 percent in 2015, and the dollar amount has declined since 2013. Funds to strengthen health systems, geared toward building and supporting a resilient health care system, are similarly low, making up only 7.3 percent of development assistance in 2015.
Noncommunicable diseases kill more people every year than infectious diseases and accidents do, but this balance is not reflected in global health spending. ... These shortcomings also speak to larger problems in global health around issues of technology transfers and long-term commitments to keep that technology working. It’s one thing to provide necessary medical technologies in the first place; it’s another to ensure that those technologies are accessible and operational going forward. Despite the importance of technology transfers, questions of long-term support for them have received relatively little attention from the global health regime. As noncommunicable diseases like cancer cause an even-higher proportion of deaths each year, it will become all the more imperative that the international community address this gap in sharing and funding crucial health care technology. This does not mean that there are no efforts to facilitate technology transfers around the world. The Fogarty International Center, a part of the U.S. National Institutes of Health, has had an Office of Technology Transfer since 1989 to make medical innovations developed in the United States more widely available. The World Health Organization (WHO) also has a Technology Transfer Initiative to improve access to health care technologies in developing countries. These efforts are laudable, but their interpretation of technology transfer is almost entirely rooted in access to pharmaceuticals and vaccines. To be sure, that is a very important issue—but it only deals with one narrow element of technology transfer. The problems of global health technology transfers illustrated in Uganda underscore a larger issue: the need for a so-called fourth industrial revolution, what has been described as “blurring the real world with the technological world.” This idea gained prominence earlier this year when it served as the theme for the World Economic Forum in Davos. 
For global health, this means embracing technology to find low-cost ways to promote health, spread education, and reach communities whose access to the health care infrastructure is weak. It expands on the notion of telemedicine and eHealth to make it more encompassing. According to health care entrepreneur Jonathan Jackson, the fourth industrial revolution could change global health by encouraging a shift in focus “from healthcare to health promotion.” Moving from high-cost treatment to low-cost prevention, he has argued, will have significant and far-reaching positive economic implications for developing countries around the world. Its inspiring sense of technological optimism notwithstanding, this sort of approach cannot be the sole focus of technology transfers in global health. Prevention is indeed important, but the fact of the matter remains that people will get sick—and those sick people will need treatment. Mobile applications and electronic access to health care providers can be useful, but they cannot replace a radiotherapy machine. Understanding the root causes of noncommunicable diseases goes far beyond individual choices and intersects with the larger political, economic and social context, so we cannot assume that cybertechnology alone can stop cancer. It is also important to remember that the results of greater technological innovation and integration won’t be free. Sub-Saharan African states, on average, spend $200 per person per year on health care. Even if technology allows costs to decline, they are still likely to be out of reach for many people in most of these countries—in the same way that the purchase and maintenance of medical technologies are prohibitively expensive in these same states today. Technology in and of itself is not useful unless it can be maintained over the long term. 
This, then, is a weak link in the larger global health system: How do we ensure access to life-prolonging medical technologies beyond pharmaceuticals and vaccines in a sustainable way? Consider two ideas. First, development assistance for health must orient more of its resources toward treating noncommunicable diseases and strengthening health systems. These are the areas in which these technologies are likely to be used, but are not currently supported by the international system. The changing nature of health and disease will only make them even more important in the years to come. Second, longer-term funding commitments would provide a greater opportunity to incorporate medical technologies into health care systems sustainably. Machines will break down, and technologies will fail. That is inevitable. But the global health regime, from the WHO and its regional organizations like the Regional Office for Africa to major donors like the United States government and the Bill and Melinda Gates Foundation, needs to figure out how to ensure that these problems do not put lives in peril. Technology alone will not improve global health unless it is properly supported and funded.
International collaboration’s key to check future pandemics – otherwise, extinction.
Dulaney ’20 Michael; digital journalist with the ABC; June 2020; "'A question of when, not if': Another pandemic is coming – and sooner than we think", No Publication; https://www.abc.net.au/news/science/2020-06-07/a-matter-of-when-not-if-the-next-pandemic-is-around-the-corner/12313372, accessed 4-12-2021
And as recently as September last year — just a few months before COVID-19 was detected in China — an independent watchdog set up by the WHO warned the world was "grossly" unprepared for the "very real threat" of a pandemic. But even more alarming is what the new coronavirus indicates about the future.
Researchers say human impacts on the natural world are causing new infectious diseases to emerge more frequently than ever before, meaning the next pandemic — one perhaps even worse than COVID-19 — is only a matter of time. "We know that it's a probability, not a possibility," Dr Reid says. "The roulette wheel will start to spin again. "If you don't resolve the conditions that generated the problem, then we sit waiting for the next probability equation to come through. "And it will, and sadly it's possible that it's in our lifetime." The growing threat to human health Nearly all emerging pathogens like COVID-19 come from "zoonotic transfer" — essentially, when a virus present in animals jumps to infect humans. The US Centers for Disease Control and Prevention estimates three out of every four new infectious diseases, and nearly all pandemics, emerge this way. Researchers have counted around 200 infectious diseases that have broken out more than 12,000 times over the past three decades. On average, one new infectious disease jumps to humans every four months. Animal species like civet cats (SARS), camels (MERS), horses (Hendra), pigs (Nipah) and chimpanzees (HIV) have all been implicated in the spread of new viruses at different times.
10/9/21
SO - DA - Climate Patents
Tournament: Loyola | Round: 3 | Opponent: Bishops AC | Judge: Abhishek Rao
Climate Patents and Innovation are high now and solving Warming, but patent waivers set a dangerous precedent for appropriations - the mere threat is sufficient to kill investment.
Brand 5-26, Melissa. “Trips Ip Waiver Could Establish Dangerous Precedent for Climate Change and Other Biotech Sectors.” IPWatchdog.com | Patents and Patent Law, 26 May 2021, www.ipwatchdog.com/2021/05/26/trips-ip-waiver-establish-dangerous-precedent-climate-change-biotech-sectors/id=133964/. sid
The biotech industry is making remarkable advances towards climate change solutions, and it is precisely for this reason that it can expect to be in the crosshairs of potential IP waiver discussions. President Biden is correct to refer to climate change as an existential crisis. Yet it does not take too much effort to connect the dots between President Biden’s focus on climate change and his Administration’s recent commitment to waive global IP rights for Covid vaccines (TRIPS IP Waiver). “This is a global health crisis, and the extraordinary circumstances of the COVID-19 pandemic call for extraordinary measures.” If an IP waiver is purportedly necessary to solve the COVID-19 global health crisis (and of course we dispute this notion), can we really feel confident that this or some future Administration will not apply the same logic to the climate crisis? And, without the confidence in the underlying IP for such solutions, what does this mean for U.S. innovation and economic growth? United States Trade Representative (USTR) Katherine Tai was subject to questioning along this very line during a recent Senate Finance Committee hearing. And while Ambassador Tai did not affirmatively state that an IP waiver would be in the future for climate change technology, she surely did not assuage the concerns of interested parties. The United States has historically supported robust IP protection.
This support is one reason the United States is the center of biotechnology innovation and leading the fight against COVID-19. However, a brief review of the domestic legislation arguably most relevant to this discussion shows just how far the international campaign against IP rights has eroded our normative position. The Clean Air Act, for example, contains a provision allowing for the mandatory licensing of patents covering certain devices for reducing air pollution. Importantly, however, the patent owner is accorded due process and the statute lays out a detailed process regulating the manner in which any such license can be issued, including findings of necessity and that no reasonable alternative method to accomplish the legislated goal exists. Also of critical importance is that the statute requires compensation to the patent holder. Similarly, the Atomic Energy Act contemplates mandatory licensing of patents covering inventions of primary importance in producing or utilizing atomic energy. This statute, too, requires due process, findings of importance to the statutory goals and compensation to the rights holder. A TRIPS IP waiver would operate outside of these types of frameworks. There would be no due process, no particularized findings, no compensation and no recourse. Indeed, the fact that the World Trade Organization (WTO) already has a process under the TRIPS agreement to address public health crises, including the compulsory licensing provisions, with necessary guardrails and compensation, makes quite clear that the waiver would operate as a free for all. Forced Tech Transfer Could Be on The Table When being questioned about the scope of a potential TRIPS IP waiver, Ambassador Tai invoked the proverb “Give a man a fish and you feed him for a day. 
Teach a man to fish and you feed him for a lifetime.” While this answer suggests primarily that, in times of famine, the Administration would rather give away other people’s fishing rods than share its own plentiful supply of fish (here: actual COVID-19 vaccine stocks), it is apparent that in Ambassador Tai’s view waiving patent rights alone would not help lower- and middle-income countries produce their own vaccines. Rather, they would need to be taught how to make the vaccines and given the biotech industry’s manufacturing know-how, sensitive cell lines, and proprietary cell culture media in order to do so. In other words, Ambassador Tai acknowledged that the scope of the current TRIPS IP waiver discussions includes the concept of forced tech transfer. In the context of climate change, the idea would be that companies who develop successful methods for producing new seed technologies and sustainable biomass, reducing greenhouse gases in manufacturing and transportation, capturing and sequestering carbon in soil and products, and more, would be required to turn over their proprietary know-how to global competitors. While it is unclear how this concept would work in practice and under the constitutions of certain countries, the suggestion alone could be devastating to voluntary international collaborations. Even if one could assume that the United States could not implement forced tech transfer on its own soil, what about the governments of our international development partners? It is not hard to understand that a U.S.-based company developing climate change technologies would be unenthusiastic about partnering with a company abroad knowing that the foreign country’s government is on track – with the assent of the U.S. government – to change its laws and seize proprietary materials and know-how that had been voluntarily transferred to the local company. 
Necessary Investment Could Diminish
Developing climate change solutions is not an easy endeavor and bad policy positions threaten the likelihood that they will materialize. These products have long lead times from research and development to market introduction, owing not only to a high rate of failure but also rigorous regulatory oversight. Significant investment is required to sustain and drive these challenging and long-enduring endeavors. For example, synthetic biology companies critical to this area of innovation raised over $1 billion in investment in the second quarter of 2019 alone. If investors cannot be confident that IP will be in place to protect important climate change technologies after their long road from bench to market, it is unlikely they will continue to invest at the current and required levels.
Climate change destroys the world.
Specktor 19 Brandon; writes about the science of everyday life for Live Science, and previously for Reader's Digest magazine, where he served as an editor for five years; 6-4-2019, "Human Civilization Will Crumble by 2050 If We Don't Stop Climate Change Now, New Paper Claims," livescience, https://www.livescience.com/65633-climate-change-dooms-humans-by-2050.html Justin
The current climate crisis, they say, is larger and more complex than any humans have ever dealt with before. General climate models — like the one that the United Nations' Intergovernmental Panel on Climate Change (IPCC) used in 2018 to predict that a global temperature increase of 3.6 degrees Fahrenheit (2 degrees Celsius) could put hundreds of millions of people at risk — fail to account for the sheer complexity of Earth's many interlinked geological processes; as such, they fail to adequately predict the scale of the potential consequences. The truth, the authors wrote, is probably far worse than any models can fathom.
How the world ends
What might an accurate worst-case picture of the planet's climate-addled future actually look like, then?
The authors provide one particularly grim scenario that begins with world governments "politely ignoring" the advice of scientists and the will of the public to decarbonize the economy (finding alternative energy sources), resulting in a global temperature increase of 5.4 F (3 C) by the year 2050. At this point, the world's ice sheets vanish; brutal droughts kill many of the trees in the Amazon rainforest (removing one of the world's largest carbon offsets); and the planet plunges into a feedback loop of ever-hotter, ever-deadlier conditions. "Thirty-five percent of the global land area, and 55 percent of the global population, are subject to more than 20 days a year of lethal heat conditions, beyond the threshold of human survivability," the authors hypothesized. Meanwhile, droughts, floods and wildfires regularly ravage the land. Nearly one-third of the world's land surface turns to desert. Entire ecosystems collapse, beginning with the planet's coral reefs, the rainforest and the Arctic ice sheets. The world's tropics are hit hardest by these new climate extremes, destroying the region's agriculture and turning more than 1 billion people into refugees. This mass movement of refugees — coupled with shrinking coastlines and severe drops in food and water availability — begins to stress the fabric of the world's largest nations, including the United States. Armed conflicts over resources, perhaps culminating in nuclear war, are likely. The result, according to the new paper, is "outright chaos" and perhaps "the end of human global civilization as we know it."
9/25/21
SO - DA - Climate Patents v2
Tournament: Valley RR | Round: 1 | Opponent: Strake Jesuit KS | Judge: Spencer Orlowski, Brixz Gonzaba
4
Climate Patents and Innovation are high now and solving Warming, but the COVID waiver sets a dangerous precedent for appropriations - the mere threat is sufficient to kill investment.
Brand 5-26, Melissa. “Trips Ip Waiver Could Establish Dangerous Precedent for Climate Change and Other Biotech Sectors.” IPWatchdog.com | Patents and Patent Law, 26 May 2021, www.ipwatchdog.com/2021/05/26/trips-ip-waiver-establish-dangerous-precedent-climate-change-biotech-sectors/id=133964/. sid
The biotech industry is making remarkable advances towards climate change solutions, and it is precisely for this reason that it can expect to be in the crosshairs of potential IP waiver discussions. President Biden is correct to refer to climate change as an existential crisis. Yet it does not take too much effort to connect the dots between President Biden’s focus on climate change and his Administration’s recent commitment to waive global IP rights for Covid vaccines (TRIPS IP Waiver). “This is a global health crisis, and the extraordinary circumstances of the COVID-19 pandemic call for extraordinary measures.” If an IP waiver is purportedly necessary to solve the COVID-19 global health crisis (and of course we dispute this notion), can we really feel confident that this or some future Administration will not apply the same logic to the climate crisis? And, without the confidence in the underlying IP for such solutions, what does this mean for U.S. innovation and economic growth? United States Trade Representative (USTR) Katherine Tai was subject to questioning along this very line during a recent Senate Finance Committee hearing. And while Ambassador Tai did not affirmatively state that an IP waiver would be in the future for climate change technology, she surely did not assuage the concerns of interested parties. The United States has historically supported robust IP protection.
This support is one reason the United States is the center of biotechnology innovation and leading the fight against COVID-19. However, a brief review of the domestic legislation arguably most relevant to this discussion shows just how far the international campaign against IP rights has eroded our normative position. The Clean Air Act, for example, contains a provision allowing for the mandatory licensing of patents covering certain devices for reducing air pollution. Importantly, however, the patent owner is accorded due process and the statute lays out a detailed process regulating the manner in which any such license can be issued, including findings of necessity and that no reasonable alternative method to accomplish the legislated goal exists. Also of critical importance is that the statute requires compensation to the patent holder. Similarly, the Atomic Energy Act contemplates mandatory licensing of patents covering inventions of primary importance in producing or utilizing atomic energy. This statute, too, requires due process, findings of importance to the statutory goals and compensation to the rights holder. A TRIPS IP waiver would operate outside of these types of frameworks. There would be no due process, no particularized findings, no compensation and no recourse. Indeed, the fact that the World Trade Organization (WTO) already has a process under the TRIPS agreement to address public health crises, including the compulsory licensing provisions, with necessary guardrails and compensation, makes quite clear that the waiver would operate as a free for all. Forced Tech Transfer Could Be on The Table When being questioned about the scope of a potential TRIPS IP waiver, Ambassador Tai invoked the proverb “Give a man a fish and you feed him for a day. 
Teach a man to fish and you feed him for a lifetime.” While this answer suggests primarily that, in times of famine, the Administration would rather give away other people’s fishing rods than share its own plentiful supply of fish (here: actual COVID-19 vaccine stocks), it is apparent that in Ambassador Tai’s view waiving patent rights alone would not help lower- and middle-income countries produce their own vaccines. Rather, they would need to be taught how to make the vaccines and given the biotech industry’s manufacturing know-how, sensitive cell lines, and proprietary cell culture media in order to do so. In other words, Ambassador Tai acknowledged that the scope of the current TRIPS IP waiver discussions includes the concept of forced tech transfer. In the context of climate change, the idea would be that companies who develop successful methods for producing new seed technologies and sustainable biomass, reducing greenhouse gases in manufacturing and transportation, capturing and sequestering carbon in soil and products, and more, would be required to turn over their proprietary know-how to global competitors. While it is unclear how this concept would work in practice and under the constitutions of certain countries, the suggestion alone could be devastating to voluntary international collaborations. Even if one could assume that the United States could not implement forced tech transfer on its own soil, what about the governments of our international development partners? It is not hard to understand that a U.S.-based company developing climate change technologies would be unenthusiastic about partnering with a company abroad knowing that the foreign country’s government is on track – with the assent of the U.S. government – to change its laws and seize proprietary materials and know-how that had been voluntarily transferred to the local company. 
Necessary Investment Could Diminish
Developing climate change solutions is not an easy endeavor and bad policy positions threaten the likelihood that they will materialize. These products have long lead times from research and development to market introduction, owing not only to a high rate of failure but also rigorous regulatory oversight. Significant investment is required to sustain and drive these challenging and long-enduring endeavors. For example, synthetic biology companies critical to this area of innovation raised over $1 billion in investment in the second quarter of 2019 alone. If investors cannot be confident that IP will be in place to protect important climate change technologies after their long road from bench to market, it is unlikely they will continue to invest at the current and required levels.
Private sector innovation is key to solve climate change – short-term politicking and priority shifts mean government can’t solve alone.
Henry 17, Simon. “Climate Change Cannot Be Solved by Governments Alone. How Can the Private Sector Help?” World Economic Forum, 21 Nov. 2017, www.weforum.org/agenda/2017/11/governments-alone-cannot-halt-climate-change-what-can-private-sector-do/. Programme Director, International Carbon Reduction and Offset Alliance (ICROA). sid
Climate leadership is also an opportunity for many organizations, and this was the most popular reason for purchasing carbon credits in Ecosystem Marketplace’s 2016 survey of buyers. Companies are looking to differentiate from their competitors, and build their brand, by taking a leadership role on climate. Offsetting plays an integral role in delivering this climate leadership status, alongside direct emissions reductions. The survey indicated that companies that included offsetting in their carbon management strategy typically spend about 10 times more on emissions reductions activities than the typical company that doesn’t offset.
Beyond these direct commercial reasons for companies to take voluntary action, there are many broader, societal motivations at play. Climate change is a global, multidecade challenge that needs solutions and input from all stakeholders. It transcends the short-term nature of politics, which will inevitably experience changes in priorities, personnel and knowledge. Because of this, climate change cannot be solved by governments alone. Instead, it needs significant and long-term investment from the private sector. Companies that take a longer-term outlook recognise this and want to contribute to the solution to help secure the viability of their businesses.
Warming causes Extinction.
Kareiva 18, Peter, and Valerie Carranza. "Existential risk due to ecosystem collapse: Nature strikes back." Futures 102 (2018): 39-50. (Ph.D. in ecology and applied mathematics from Cornell University, director of the Institute of the Environment and Sustainability at UCLA, Pritzker Distinguished Professor in Environment and Sustainability at UCLA) Re-cut by Elmer
In summary, six of the nine proposed planetary boundaries (phosphorous, nitrogen, biodiversity, land use, atmospheric aerosol loading, and chemical pollution) are unlikely to be associated with existential risks. They all correspond to a degraded environment, but in our assessment do not represent existential risks. However, the three remaining boundaries (climate change, global freshwater cycle, and ocean acidification) do pose existential risks. This is because of intrinsic positive feedback loops, substantial lag times between system change and experiencing the consequences of that change, and the fact these different boundaries interact with one another in ways that yield surprises. In addition, climate, freshwater, and ocean acidification are all directly connected to the provision of food and water, and shortages of food and water can create conflict and social unrest.
Climate change has a long history of disrupting civilizations and sometimes precipitating the collapse of cultures or mass emigrations (McMichael, 2017). For example, the 12th century drought in the North American Southwest is held responsible for the collapse of the Anasazi pueblo culture. More recently, the infamous potato famine of 1846–1849 and the large migration of Irish to the U.S. can be traced to a combination of factors, one of which was climate. Specifically, 1846 was an unusually warm and moist year in Ireland, providing the climatic conditions favorable to the fungus that caused the potato blight. As is so often the case, poor government had a role as well—as the British government forbade the import of grains from outside Britain (imports that could have helped to redress the ravaged potato yields). Climate change intersects with freshwater resources because it is expected to exacerbate drought and water scarcity, as well as flooding. Climate change can even impair water quality because it is associated with heavy rains that overwhelm sewage treatment facilities, or because it results in higher concentrations of pollutants in groundwater as a result of enhanced evaporation and reduced groundwater recharge. Ample clean water is not a luxury—it is essential for human survival. Consequently, cities, regions and nations that lack clean freshwater are vulnerable to social disruption and disease. Finally, ocean acidification is linked to climate change because it is driven by CO2 emissions just as global warming is. With close to 20 percent of the world’s protein coming from oceans (FAO, 2016), the potential for severe impacts due to acidification is obvious. Less obvious, but perhaps more insidious, is the interaction between climate change and the loss of oyster and coral reefs due to acidification. Acidification is known to interfere with oyster reef building and coral reefs. Climate change also increases storm frequency and severity.
Coral reefs and oyster reefs provide protection from storm surge because they reduce wave energy (Spalding et al., 2014). If these reefs are lost due to acidification at the same time as storms become more severe and sea level rises, coastal communities will be exposed to unprecedented storm surge—and may be ravaged by recurrent storms. A key feature of the risk associated with climate change is that mean annual temperature and mean annual rainfall are not the variables of interest. Rather it is extreme episodic events that place nations and entire regions of the world at risk. These extreme events are by definition “rare” (once every hundred years), and changes in their likelihood are challenging to detect because of their rarity, but are exactly the manifestations of climate change that we must get better at anticipating (Diffenbaugh et al., 2017). Society will have a hard time responding to shorter intervals between rare extreme events because in the lifespan of an individual human, a person might experience as few as two or three extreme events. How likely is it that you would notice a change in the interval between events that are separated by decades, especially given that the interval is not regular but varies stochastically? A concrete example of this dilemma can be found in the past and expected future changes in storm-related flooding of New York City. The highly disruptive flooding of New York City associated with Hurricane Sandy represented a flood height that occurred once every 500 years in the 18th century, and that occurs now once every 25 years, but is expected to occur once every 5 years by 2050 (Garner et al., 2017). This change in frequency of extreme floods has profound implications for the measures New York City should take to protect its infrastructure and its population, yet because of the stochastic nature of such events, this shift in flood frequency is an elevated risk that will go unnoticed by most people.
The combination of positive feedback loops and societal inertia is fertile ground for global environmental catastrophes
Humans are remarkably ingenious, and have adapted to crises throughout their history. Our doom has been repeatedly predicted, only to be averted by innovation (Ridley, 2011). However, the many stories of human ingenuity successfully addressing existential risks such as global famine or extreme air pollution represent environmental challenges that are largely linear, have immediate consequences, and operate without positive feedbacks. For example, the fact that food is in short supply does not increase the rate at which humans consume food—thereby increasing the shortage. Similarly, massive air pollution episodes such as the London fog of 1952 that killed 12,000 people did not make future air pollution events more likely. In fact it was just the opposite—the London fog sent such a clear message that Britain quickly enacted pollution control measures (Stradling, 2016). Food shortages, air pollution, water pollution, etc. send immediate signals to society of harm, which then trigger a negative feedback of society seeking to reduce the harm. In contrast, today’s great environmental crisis of climate change may cause some harm but there are generally long time delays between rising CO2 concentrations and damage to humans. The consequence of these delays is an absence of urgency; thus although 70 percent of Americans believe global warming is happening, only 40 percent think it will harm them (http://climatecommunication.yale.edu/visualizations-data/ycom-us-2016/). Secondly, unlike past environmental challenges, the Earth’s climate system is rife with positive feedback loops. In particular, as CO2 increases and the climate warms, that very warming can cause more CO2 release which further increases global warming, and then more CO2, and so on. Table 2 summarizes the best documented positive feedback loops for the Earth’s climate system.
These feedbacks can be neatly categorized into carbon cycle, biogeochemical, biogeophysical, cloud, ice-albedo, and water vapor feedbacks. As important as it is to understand these feedbacks individually, it is even more essential to study the interactive nature of these feedbacks. Modeling studies show that when interactions among feedback loops are included, uncertainty increases dramatically and there is a heightened potential for perturbations to be magnified (e.g., Cox, Betts, Jones, Spall, and Totterdell, 2000; Hajima, Tachiiri, Ito, and Kawamiya, 2014; Knutti and Rugenstein, 2015; Rosenfeld, Sherwood, Wood, and Donner, 2014). This produces a wide range of future scenarios. Positive feedbacks in the carbon cycle involves the enhancement of future carbon contributions to the atmosphere due to some initial increase in atmospheric CO2. This happens because as CO2 accumulates, it reduces the efficiency in which oceans and terrestrial ecosystems sequester carbon, which in return feeds back to exacerbate climate change (Friedlingstein et al., 2001). Warming can also increase the rate at which organic matter decays and carbon is released into the atmosphere, thereby causing more warming (Melillo et al., 2017). Increases in food shortages and lack of water is also of major concern when biogeophysical feedback mechanisms perpetuate drought conditions. The underlying mechanism here is that losses in vegetation increases the surface albedo, which suppresses rainfall, and thus enhances future vegetation loss and more suppression of rainfall—thereby initiating or prolonging a drought (Charney, Stone, and Quirk, 1975). To top it off, overgrazing depletes the soil, leading to augmented vegetation loss (Anderies, Janssen, and Walker, 2002). Climate change often also increases the risk of forest fires, as a result of higher temperatures and persistent drought conditions.
The expectation is that forest fires will become more frequent and severe with climate warming and drought (Scholze, Knorr, Arnell, and Prentice, 2006), a trend for which we have already seen evidence (Allen et al., 2010). Tragically, the increased severity and risk of Southern California wildfires recently predicted by climate scientists (Jin et al., 2015), was realized in December 2017, with the largest fire in the history of California (the “Thomas fire” that burned 282,000 acres, https://www.vox.com/2017/12/27/16822180/thomas-fire-california-largest-wildfire). This catastrophic fire embodies the sorts of positive feedbacks and interacting factors that could catch humanity off-guard and produce a true apocalyptic event. Record-breaking rains produced an extraordinary flush of new vegetation, that then dried out as record heat waves and dry conditions took hold, coupled with stronger than normal winds, and ignition. Of course the record-fire released CO2 into the atmosphere, thereby contributing to future warming. Out of all types of feedbacks, water vapor and the ice-albedo feedbacks are the most clearly understood mechanisms. Losses in reflective snow and ice cover drive up surface temperatures, leading to even more melting of snow and ice cover—this is known as the ice-albedo feedback (Curry, Schramm, and Ebert, 1995). As snow and ice continue to melt at a more rapid pace, millions of people may be displaced by flooding risks as a consequence of sea level rise near coastal communities (Biermann and Boas, 2010; Myers, 2002; Nicholls et al., 2011). The water vapor feedback operates when warmer atmospheric conditions strengthen the saturation vapor pressure, which creates a warming effect given water vapor’s strong greenhouse gas properties (Manabe and Wetherald, 1967). 
Global warming tends to increase cloud formation because warmer temperatures lead to more evaporation of water into the atmosphere, and a warmer atmosphere can also hold more water. The key question is whether this increase in clouds associated with global warming will result in a positive feedback loop (more warming) or a negative feedback loop (less warming). For decades, scientists have sought to answer this question and understand the net role clouds play in future climate projections (Schneider et al., 2017). Clouds are complex because they have both a cooling effect (reflecting incoming solar radiation) and a warming effect (absorbing and re-radiating outgoing radiation) (Lashof, DeAngelo, Saleska, and Harte, 1997). The type of cloud, altitude, and optical properties combine to determine how these countervailing effects balance out. Although still under debate, it appears that in most circumstances the cloud feedback is likely positive (Boucher et al., 2013). For example, models and observations show that increasing greenhouse gas concentrations reduces the low-level cloud fraction in the Northeast Pacific at decadal time scales. This then has a positive feedback effect and enhances climate warming, since less solar radiation is reflected by the atmosphere (Clement, Burgman, and Norris, 2009). The key lesson from the long list of potentially positive feedbacks and their interactions is that runaway climate change and runaway perturbations have to be taken as a serious possibility. Table 2 is just a snapshot of the types of feedbacks that have been identified (see Supplementary material for a more thorough explanation of positive feedback loops). However, this list is not exhaustive, and the possibility of undiscovered positive feedbacks portends even greater existential risks. The many environmental crises humankind has previously averted (famine, ozone depletion, London fog, water pollution, etc.) 
were averted because of political will based on solid scientific understanding. We cannot count on complete scientific understanding when it comes to positive feedback loops and climate change.
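The amplification logic in the feedback cards above follows the textbook linear-feedback relation: each pass through a loop with gain f adds f times the previous increment, so an initial perturbation grows by the geometric series 1 + f + f^2 + ... = 1/(1 − f), diverging as f approaches 1. A minimal sketch (illustrative only; the gain values are hypothetical and not drawn from the cited studies):

```python
# Illustrative sketch: linear positive-feedback amplification.
# Total response = initial * (1 + f + f^2 + ...) = initial / (1 - f),
# which diverges ("runaway") as the feedback gain f approaches 1.

def amplified_response(initial_warming: float, f: float) -> float:
    """Steady-state response of a linear feedback loop with gain f."""
    if not 0.0 <= f < 1.0:
        raise ValueError("series diverges (runaway) for f >= 1")
    return initial_warming / (1.0 - f)

if __name__ == "__main__":
    # A 1.0-degree initial perturbation under increasing (hypothetical) gains:
    for f in (0.0, 0.5, 0.9, 0.99):
        print(f"gain {f:4.2f} -> total warming {amplified_response(1.0, f):6.1f} degrees")
```

Gains near 1 are exactly the runaway regime the cards warn about; interacting feedbacks effectively raise the combined gain, which is why the modeling studies cited above report sharply widened uncertainty when interactions are included.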
9/24/21
SO - DA - Infrastructure
Tournament: Loyola | Round: 3 | Opponent: Bishops AC | Judge: Abhishek Rao 4 Biden’s infrastructure bill will pass through reconciliation but absolute Dem Unity is key. - Turns Structural Violence Pramuk and Franck 8-25 Jacob Pramuk and Thomas Franck 8-25-2021 "Here’s what happens next as Democrats try to pass Biden’s multitrillion-dollar economic plans" https://www.cnbc.com/2021/08/25/what-happens-next-with-biden-infrastructure-budget-bills-in-congress.html (Staff Reporter at CNBC)Elmer WASHINGTON — House Democrats just patched up a party fracture to take a critical step forward with a mammoth economic agenda. But the path ahead could get trickier as party leaders try to thread a legislative needle to pass more than $4 trillion in new spending. In the coming weeks, Democrats aim to approve a $1 trillion bipartisan infrastructure plan and up to $3.5 trillion in investments in social programs. Passing both will require a heavy lift, as leaders will need to satisfy competing demands of centrists wary of spending and progressives who want to reimagine government’s role in American households. The House is leaving Washington until Sept. 20 after taking key steps toward pushing through the sprawling economic plans. The chamber on Tuesday approved a $3.5 trillion budget resolution and advanced the infrastructure bill, as House Speaker Nancy Pelosi, D-Calif., promised centrist Democrats to take up the bipartisan plan by Sept. 27. The Senate already passed the infrastructure legislation, so a final House vote would send it to Biden’s desk for his signature. Now that both chambers have passed the budget measure, Democrats can move without Republicans to push through their spending plan via reconciliation. Party leaders want committees to write their pieces of the bill by Sept. 15 before budget committees package them into one massive measure that can move through Congress. Committees could start marking up legislation in early September. 
Party leaders face a challenge in coming up with a bill that will satisfy centrists who want to trim back the $3.5 trillion price tag and progressives who consider it the minimum Congress should spend. As one defection in the Senate — and four in the House — would sink legislation, Democrats have to satisfy a diverse range of views to pass their agenda. “We write a bill with the Senate because it’s no use doing a bill that’s not going to pass the Senate, in the interest of getting things done,” Pelosi told reporters on Wednesday. Given the magnitude of the legislation, passing it quickly could prove difficult. To appease congressional progressives who have prioritized passage of the budget bill, Democrats could move to pass both proposals at about the same time. While Pelosi gave a Sept. 27 target date to approve the infrastructure plan, the commitment is not binding. Still, she noted Wednesday that Congress needs to pass the bill before surface transportation spending authorization expires Sept. 30. “We have long had an eye to having the infrastructure bill on the President’s desk by the October 1, the effective date of the legislation,” she wrote in a separate letter to Democrats on Wednesday. Democrats say the bills combined will provide a jolt to the economy and a lifeline for households. Supporters of the Democratic spending plan, including Pelosi and Senate Budget Committee Chair Bernie Sanders, I-Vt., have cast it as the biggest expansion of the U.S. social safety net in decades. “This is a truly historic opportunity to pass the most transformative and consequential legislation for families in a century, and will stand alongside the New Deal and Great Society as pillars of economic security,” Pelosi wrote to colleagues Wednesday. The plan would expand Medicare, paid leave and child care, extend enhanced household tax credits and encourage green energy adoption, while hiking taxes on corporations and the wealthy. 
Democrats hope to sell a wave of new support for families as they campaign to keep control of Congress in next year’s midterms. Those elections, though, have helped to generate staunch opposition on the other side of the aisle. The GOP has cited the trillions in new spending and the proposed reversal of some of its 2017 tax cuts in trying to take down the Democratic budget bill. Republicans and some Democrats have in recent weeks said that another $4.5 trillion in fiscal stimulus could not only boost economic growth but have the adverse effect of fueling inflation. Pharma backlashes to the Plan – they’re aggressive lobbyists and will do anything to preserve patent rights. - Turns Case – Waters down the Plan due to lobbying - Optional Card – still thinking on if it’s necessary – note from Elmer Huetteman 19 Emmarie Huetteman 2-26-2019 “Senators Who Led Pharma-Friendly Patent Reform Also Prime Targets For Pharma Cash” https://khn.org/news/senators-who-led-pharma-friendly-patent-reform-also-prime-targets-for-pharma-cash/ (former NYT Congressional correspondent with an MA in public affairs reporting from Northwestern University’s Medill School)Elmer Early last year, as lawmakers vowed to curb rising drug prices, Sen. Thom Tillis was named chairman of the Senate Judiciary Committee’s subcommittee on intellectual property rights, a committee that had not met since 2007. As the new gatekeeper for laws and oversight of the nation’s patent system, the North Carolina Republican signaled he was determined to make it easier for American businesses to benefit from it — a welcome message to the drugmakers who already leverage patents to block competitors and keep prices high. Less than three weeks after introducing a bill that would make it harder for generic drugmakers to compete with patent-holding drugmakers, Tillis opened the subcommittee’s first meeting on Feb. 26, 2019, with his own vow. 
“From the United States Patent and Trademark Office to the State Department’s Office of Intellectual Property Enforcement, no department or bureau is too big or too small for this subcommittee to take interest,” he said. “And we will.” In the months that followed, tens of thousands of dollars flowed from pharmaceutical companies toward his campaign, as well as to the campaigns of other subcommittee members — including some who promised to stop drugmakers from playing money-making games with the patent system, like Sen. John Cornyn (R-Texas). Tillis received more than $156,000 from political action committees tied to drug manufacturers in 2019, more than any other member of Congress, a new analysis of KHN’s Pharma Cash to Congress database shows. Sen. Chris Coons (D-Del.), the top Democrat on the subcommittee who worked side by side with Tillis, received more than $124,000 in drugmaker contributions last year, making him the No. 3 recipient in Congress. No. 2 was Sen. Mitch McConnell (R-Ky.), who took in about $139,000. As the Senate majority leader, he controls what legislation gets voted on by the Senate. Neither Tillis nor Coons sits on the Senate committees that introduced legislation last year to lower drug prices through methods like capping price increases to the rate of inflation. Of the four senators who drafted those bills, none received more than $76,000 from drug manufacturers in 2019. Tillis and Coons spent much of last year working on significant legislation that would expand the range of items eligible to be patented — a change that some experts say would make it easier for companies developing medical tests and treatments to own things that aren’t traditionally inventions, like genetic code. They have not yet officially introduced a bill. 
As obscure as patents might seem in an era of public outrage over drug prices, the fact that drugmakers gave most to the lawmakers working to change the patent system belies how important securing the exclusive right to market a drug, and keep competitors at bay, is to their bottom line. “Pharma will fight to the death to preserve patent rights,” said Robin Feldman, a professor at the UC Hastings College of the Law in San Francisco who is an expert in intellectual property rights and drug pricing. “Strong patent rights are central to the games drug companies play to extend their monopolies and keep prices high.” Campaign contributions, closely tracked by the Federal Election Commission, are among the few windows into how much money flows from the political groups of drugmakers and other companies to the lawmakers and their campaigns. Private companies generally give money to members of Congress to encourage them to listen to the companies, typically through lobbyists, whose activities are difficult to track. They may also communicate through so-called dark money groups, which are not required to report who gives them money. Over the past 10 years, the pharmaceutical industry has spent about $233 million per year on lobbying, according to a new study published in JAMA Internal Medicine. That is more than any other industry, including the oil and gas industry. Why Patents Matter Developing and testing a new drug, and gaining approval from the Food and Drug Administration, can take years and cost hundreds of millions of dollars. Drugmakers are generally granted a six- or seven-year exclusivity period to recoup their investments. But drugmakers have found ways to extend that period of exclusivity, sometimes accumulating hundreds of patents on the same drug and blocking competition for decades. One method is to patent many inventions beyond a drug’s active ingredient, such as patenting the injection device that administers the drug. 
Keeping that arrangement intact, or expanding what can be patented, is where lawmakers come in. Lawmakers Dig In Tillis’ home state of North Carolina is also home to three major research universities and, not coincidentally, multiple drugmakers’ headquarters, factories and other facilities. From his swearing-in in 2015 to the end of 2018, Tillis received about $160,000 from drugmakers based there or beyond. He almost matched that four-year total in 2019 alone, in the midst of a difficult reelection campaign to be decided this fall. He has raised nearly $10 million for his campaign, with lobbyists among his biggest contributors, according to OpenSecrets. Daniel Keylin, a spokesperson for Tillis, said Tillis and Coons, the subcommittee’s top Democrat, are working to overhaul the country’s “antiquated intellectual property laws.” Keylin said the bipartisan effort “protects the development and access to affordable, lifesaving medication for patients,” adding: “No contribution has any impact on how Tillis votes or legislates.” Tillis signaled his openness to the drug industry early on. The day before being named chairman, he reintroduced a bill that would limit the options generic drugmakers have to challenge allegedly invalid patents, effectively helping brand-name drugmakers protect their monopolies. Former Sen. Orrin Hatch (R-Utah), whose warm relationship with the drug industry was well-known, had introduced the legislation, the Hatch-Waxman Integrity Act, just days before his retirement in 2018. At his subcommittee’s first hearing, Tillis said the members would rely on testimony from private businesses to guide them. 
He promised to hold hearings on patent eligibility standards and “reforms to the Patent Trial and Appeal Board.” In practice, the Hatch-Waxman Integrity Act would require generics makers challenging another drugmaker’s patent to either take their claim to the Patent Trial and Appeal Board, which acts as a sort of cheaper, faster quality check to catch bad patents, or file a lawsuit. A study released last year found that, since Congress created the Patent Trial and Appeal Board in 2011, it has narrowed or overturned about 51% of the drugmaker patents that generics makers have challenged. Feldman said the drug industry “went berserk” over the number of patents the board changed and has been eager to limit use of the board as much as possible. Patent reviewers are often stretched thin and sometimes make mistakes, said Aaron Kesselheim, a Harvard Medical School professor who is an expert in intellectual property rights and drug development. Limiting the ways to challenge patents, as Tillis’ bill would, does not strengthen the patent system, he said. “You want overlapping oversight for a system that is as important and fundamental as this system is,” he said. As promised, Tillis and Coons also spent much of the year working on so-called Section 101 reform regarding what is eligible to be patented — “a very major change” that “would overturn more than a century of Supreme Court law,” Feldman said. Sean Coit, Coons’ spokesperson, said lowering drug prices is one of the senator’s top priorities and pointed to Coon’s support for legislation the pharmaceutical industry opposes. 
“One of the reasons Senator Coons is leading efforts in Congress to fix our broken patent system is so that life-saving medicines can actually be developed and produced at affordable prices for every American,” Coit wrote in an email, adding that “his work on Section 101 reform has brought together advocates from across the spectrum, including academics and health experts.” In August, when much of Capitol Hill had emptied for summer recess, Tillis and Coons held closed-door meetings to preview their legislation to stakeholders, including the Pharmaceutical Research and Manufacturers of America, or PhRMA, the brand-name drug industry’s lobbying group. “We regularly engage with members of Congress in both parties to advance practical policy solutions that will lower medicine costs for patients,” said Holly Campbell, a PhRMA spokesperson. Neither proposal has received a public hearing. In the 30 days before Tillis and Coons were named leaders of the revived subcommittee, drug manufacturers gave them $21,000 from their political action committees. In the 30 days following that first hearing, Tillis and Coons received $60,000. Among their donors were PhRMA; the Biotechnology Innovation Organization, the biotech lobbying group; and five of the seven drugmakers whose executives — as Tillis laid out a pharma-friendly agenda for his new subcommittee — were getting chewed out by senators in a different hearing room over patent abuse. Cornyn Goes After Patent Abuse Richard Gonzalez, chief executive of AbbVie Inc., the company known for its top-selling drug, Humira, had spent the morning sitting stone-faced before the Senate Finance Committee as, one after another, senators excoriated him and six other executives of brand-name drug manufacturers over how they price their products. Cornyn brought up AbbVie’s more than 130 patents on Humira. Hadn’t the company blocked its competition? 
Cornyn asked Gonzalez, who carefully explained how AbbVie’s lawsuit against a generics competitor and subsequent licensing deal was not what he would describe as anti-competitive behavior. “I realize it may not be popular,” Gonzalez said. “But I think it is a reasonable balance.” A minute later, Cornyn turned to Sen. Chuck Grassley (R-Iowa), who, like Cornyn, was also a member of the revived intellectual property subcommittee. This is worth looking into with “our Judiciary Committee authorities as well,” Cornyn said, effectively threatening legislation on patent abuse. The next day, Mylan, one of the largest producers of generic drugs, gave Cornyn $5,000, FEC records show. The company had not donated to Cornyn in years. By midsummer, every drug company that sent an executive to that hearing had given money to Cornyn, including AbbVie. Cornyn, who faces perhaps the most difficult reelection fight of his career this fall, ranks No. 6 among members of Congress in drugmaker PAC contributions last year, KHN’s analysis shows. He received about $104,000. Cornyn has received about $708,500 from drugmakers since 2007, KHN’s database shows. According to OpenSecrets, he has raised more than $17 million for this year’s reelection campaign. Cornyn’s office declined to comment. On May 9, Cornyn and Sen. Richard Blumenthal (D-Conn.) introduced the Affordable Prescriptions for Patients Act, which proposed to define two tactics used by drug companies to make it easier for the Federal Trade Commission to prosecute them: “product-hopping,” when drugmakers withdraw older versions of their drugs from the market to push patients toward newer, more expensive ones, and “patent-thicketing,” when drugmakers amass a series of patents to drag out their exclusivity and slow rival generics makers, who must challenge those patents to enter the market once the initial exclusivity ends. PhRMA opposed the bill. The next day, it gave Cornyn $1,000. 
Cornyn and Blumenthal’s bill would have been “very tough on the techniques that pharmaceutical companies use to extend patent protections and to keep prices high,” Feldman said. “The pharmaceutical industry lobbied tooth and nail against it,” she said. “And when the bill finally came out of committee, the strongest provisions — the patent-thicketing provisions — had been stripped.” In the months after the bill cleared committee and waited to be taken up by the Senate, Cornyn blamed Senate Democrats for blocking the bill while trying to secure votes on legislation with more direct controls on drug prices. The Senate has not voted on the bill. They choose Infrastructure as backlash – the bill costs Pharma millions – lobbyists can derail the Agenda. Brennan 8-2 Zachary Brennan 8-2-2021 "How the biopharma industry is helping to pay for the bipartisan infrastructure bill" https://endpts.com/how-the-biopharma-industry-is-helping-to-pay-for-the-bipartisan-infrastructure-bill/ (Senior Editor at Endpoint News)Elmer Senators on Sunday finalized the text of a massive, bipartisan infrastructure bill that contains little that might impact the biopharma industry other than two ways the legislators are planning to pay for the $1.2 trillion deal. On the one hand, senators are seeking to further delay a Trump-era Medicare Part D rule related to drug rebates, this time until 2026. Senators claim the rule could end up saving about $49 billion (and that number increased this week to $51 billion), but the PBM industry has attacked it as it would remove rebates from a safe harbor that provides protection from federal anti-kickback laws. The pharmaceutical industry, however, is in favor of the rule and opposes this latest delay as it continues to point its finger at the PBM industry for the rising cost of out-of-pocket expenses. 
Debra DeShong, EVP of public affairs at PhRMA, said via email: Despite railing against high drug costs on the campaign trail, lawmakers are threatening to gut a rule that would provide patients meaningful relief at the pharmacy. If it is included in the infrastructure package, this proposal will provide health insurers and drug middlemen a windfall and turn Medicare into a piggybank to fund projects that have nothing to do with lowering out-of-pocket costs for medicines. This would be an unconscionable move that robs patients of the prescription drug savings they deserve to help fill potholes and fund other infrastructure projects. The other provision in the infrastructure bill, which is estimated to save about $3 billion, would save money for Medicare on discarded medications from large, single-use drug vials. Manufacturers will be required to pay refunds for such discarded drugs, and each manufacturer will be subject to periodic audits on the refunds issued. If manufacturers don’t comply, HHS can fine them the refund amount that they would have paid plus 25%. Drugs that will be excluded from these refund payments include radiopharmaceuticals or imaging agents, as well as those that require filtration during the drug preparation process. So do these two pay-fors mean that the pharma industry is getting off without any serious drug pricing reforms? Not quite, according to Alex Lawson, executive director of Social Security Works. Lawson told Endpoints News in an interview that he still fully expects major drug pricing reforms to make their way through Congress between now and the end of September as Sen. Ron Wyden (D-OR) refines his plan, part of an early fall spending package. Senate Majority Leader Chuck Schumer has promised both the infrastructure and spending package will pass before the Senate leaves for August recess. At the very least in terms of drug pricing provisions, expect to see a combination of the Wyden bill he co-wrote with Sen. 
Chuck Grassley (R-IA) last year, alongside further Medicare negotiations, Lawson said. “Talk is still optimistic,” Lawson said on the prospects of a drug pricing deal getting done, while noting that pharmaceutical company lobbyists are swarming Capitol Hill at the moment because of not just drug pricing plans, but tax provisions and the TRIPS waiver that the biopharma industry is worried about. “These are challenges to their entire existence, so they’re willing to protect them at any cost,” Lawson said, noting the target for drug pricing is about $500 billion in savings. As the House has jetted off to enjoy what might be an abbreviated summer recess, the Senate has just this week to get its work done, unless its recess is cut short too. “There’s a real possibility that the whole thing blows up and we get nothing on either side,” Lawson said. Infrastructure reform solves Existential Climate Change – it results in spill-over. USA Today 7-20 7-20-2021 "Climate change is at 'code red' status for the planet, and inaction is no longer an option" https://www.usatoday.com/story/opinion/todaysdebate/2021/07/20/climate-change-biden-infrastructure-bill-good-start/7877118002/Elmer Not long ago, climate change for many Americans was like a distant bell. News of starving polar bears or melting glaciers was tragic and disturbing, but other worldly. Not any more. Top climate scientists from around the world warned of a "code red for humanity" in a report issued Monday that says severe, human-caused global warming has become unassailable. Proof of the findings by the United Nations' Intergovernmental Panel on Climate Change is now a factor of daily life. Due to intense heat waves and drought, 107 wildfires – including the largest ever in California – are now raging across the West, consuming 2.3 million acres. 
Earlier this summer, hundreds of people died in unprecedented triple-digit heat in Oregon, Washington and western Canada, when a "heat dome" of enormous proportions settled over the region for days. Some victims brought by stretcher into crowded hospital wards had body temperatures so high, their nervous systems had shut down. People collapsed trying to make their way to cooling shelters. Heat-trapping greenhouse gases Scientists say the event was almost certainly made worse and more intransigent by human-caused climate change. They attribute it to a combination of warming Arctic temperatures and a growing accumulation of heat-trapping greenhouse gases caused by the burning of fossil fuels. The consequences of what mankind has done to the atmosphere are now inescapable. Periods of extreme heat are projected to double in the lower 48 states by 2100. Heat deaths are far outpacing every other form of weather killer in a 30-year average. A persistent megadrought in America's West continues to create tinder-dry conditions that augur another devastating wildfire season. And scientists say warming oceans are fueling ever more powerful storms, evidenced by Elsa and the early arrival of hurricane season this year. Increasingly severe weather is causing an estimated $100 billion in damage to the United States every year. "It is honestly surreal to see your projections manifesting themselves in real time, with all the suffering that accompanies them. It is heartbreaking," said climate scientist Katharine Hayhoe. Rising seas from global warming Investigators are still trying to determine what led to the collapse of a Miami-area condominium that left more than 100 dead or missing. But one concerning factor is the corrosive effect on reinforced steel structures of encroaching saltwater, made worse in Florida by a foot of rising seas from global warming since the 1900s. The clock is ticking for planet Earth. While the U.N. 
report concludes some level of severe climate change is now unavoidable, there is still a window of time when far more catastrophic events can be mitigated. But mankind must act soon to curb the release of heat-trapping gases. Global temperature has risen nearly 2 degrees Fahrenheit since the pre-industrial era of the late 19th century. Scientists warn that in a decade, it could surpass a 2.7-degree increase. That's enough warming to cause catastrophic climate changes. After a brief decline in global greenhouse gas emissions during the pandemic, pollution is on the rise. Years that could have been devoted to addressing the crisis were wasted during a feckless period of inaction by the Trump administration. Congress must act Joe Biden won the presidency promising broad new policies to cut America's greenhouse gas emissions. But Congress needs to act on those ideas this year. Democrats cannot risk losing narrow control of one or both chambers of Congress in the 2022 elections to a Republican Party too long resistant to meaningful action on the climate. So what's at issue? A trillion dollar infrastructure bill negotiated between Biden and a group of centrist senators (including 10 Republicans) is a start. In addition to repairing bridges, roads and rails, it would improve access by the nation's power infrastructure to renewable energy sources, cap millions of abandoned oil and gas wells spewing greenhouse gases, and harden structures against climate change. It also offers tax credits for the purchase of electric vehicles and funds the construction of charging stations. (The nation's largest source of climate pollution is gas-powered vehicles.) Senate approval could come very soon. Much more is needed if the nation is going to reach Biden's necessary goal of cutting U.S. climate pollution in half from 2005 levels by 2030. 
His ideas worth considering include a federal clean electricity standard for utilities, federal investments and tax credits to promote renewable energy, and tens of billions of dollars in clean energy research and development, including into ways of extracting greenhouse gases from the skies. Another idea worth considering is a fully refundable carbon tax. The vehicle for these additional proposals would be a second infrastructure bill. And if Republicans balk at the cost of such vital investment, Biden is rightly proposing to pass this package through a process known as budget reconciliation, which allows bills to clear the Senate with a simple majority vote. These are drastic legislative steps. But drastic times call for them. And when Biden attends a U.N. climate conference in November, he can use American progress on climate change as a means of persuading others to follow our lead. Further delay is not an option.
9/25/21
SO - DA - Infrastructure v2
Tournament: Valley RR | Round: 4 | Opponent: Lexington BF | Judge: Keshav Dandu, Triniti Krauss 4 Biden’s infrastructure bill will pass through reconciliation but absolute Dem Unity is key. - Turns Structural Violence Pramuk and Franck 8-25 Jacob Pramuk and Thomas Franck 8-25-2021 "Here’s what happens next as Democrats try to pass Biden’s multitrillion-dollar economic plans" https://www.cnbc.com/2021/08/25/what-happens-next-with-biden-infrastructure-budget-bills-in-congress.html (Staff Reporter at CNBC)Elmer WASHINGTON — House Democrats just patched up a party fracture to take a critical step forward with a mammoth economic agenda. But the path ahead could get trickier as party leaders try to thread a legislative needle to pass more than $4 trillion in new spending. In the coming weeks, Democrats aim to approve a $1 trillion bipartisan infrastructure plan and up to $3.5 trillion in investments in social programs. Passing both will require a heavy lift, as leaders will need to satisfy competing demands of centrists wary of spending and progressives who want to reimagine government’s role in American households. The House is leaving Washington until Sept. 20 after taking key steps toward pushing through the sprawling economic plans. The chamber on Tuesday approved a $3.5 trillion budget resolution and advanced the infrastructure bill, as House Speaker Nancy Pelosi, D-Calif., promised centrist Democrats to take up the bipartisan plan by Sept. 27. The Senate already passed the infrastructure legislation, so a final House vote would send it to Biden’s desk for his signature. Now that both chambers have passed the budget measure, Democrats can move without Republicans to push through their spending plan via reconciliation. Party leaders want committees to write their pieces of the bill by Sept. 15 before budget committees package them into one massive measure that can move through Congress. Committees could start marking up legislation in early September. 
Party leaders face a challenge in coming up with a bill that will satisfy centrists who want to trim back the $3.5 trillion price tag and progressives who consider it the minimum Congress should spend. As one defection in the Senate — and four in the House — would sink legislation, Democrats have to satisfy a diverse range of views to pass their agenda. “We write a bill with the Senate because it’s no use doing a bill that’s not going to pass the Senate, in the interest of getting things done,” Pelosi told reporters on Wednesday. Given the magnitude of the legislation, passing it quickly could prove difficult. To appease congressional progressives who have prioritized passage of the budget bill, Democrats could move to pass both proposals at about the same time. While Pelosi gave a Sept. 27 target date to approve the infrastructure plan, the commitment is not binding. Still, she noted Wednesday that Congress needs to pass the bill before surface transportation spending authorization expires Sept. 30. “We have long had an eye to having the infrastructure bill on the President’s desk by the October 1, the effective date of the legislation,” she wrote in a separate letter to Democrats on Wednesday. Democrats say the bills combined will provide a jolt to the economy and a lifeline for households. Supporters of the Democratic spending plan, including Pelosi and Senate Budget Committee Chair Bernie Sanders, I-Vt., have cast it as the biggest expansion of the U.S. social safety net in decades. “This is a truly historic opportunity to pass the most transformative and consequential legislation for families in a century, and will stand alongside the New Deal and Great Society as pillars of economic security,” Pelosi wrote to colleagues Wednesday. The plan would expand Medicare, paid leave and child care, extend enhanced household tax credits and encourage green energy adoption, while hiking taxes on corporations and the wealthy. 
Democrats hope to sell a wave of new support for families as they campaign to keep control of Congress in next year’s midterms. Those elections, though, have helped to generate staunch opposition on the other side of the aisle. The GOP has cited the trillions in new spending and the proposed reversal of some of its 2017 tax cuts in trying to take down the Democratic budget bill. Republicans and some Democrats have in recent weeks said that another $4.5 trillion in fiscal stimulus could not only boost economic growth but also have the adverse effect of fueling inflation.

Pharma backlashes to the Plan – they’re aggressive lobbyists and will do anything to preserve patent rights. - Turns Case – waters down the Plan due to lobbying. (Optional card – still thinking on if it’s necessary – note from Elmer)

Huetteman 19 — Emmarie Huetteman, 2-26-2019, “Senators Who Led Pharma-Friendly Patent Reform Also Prime Targets For Pharma Cash,” https://khn.org/news/senators-who-led-pharma-friendly-patent-reform-also-prime-targets-for-pharma-cash/ (former NYT Congressional correspondent with an MA in public affairs reporting from Northwestern University’s Medill School) Elmer

Early last year, as lawmakers vowed to curb rising drug prices, Sen. Thom Tillis was named chairman of the Senate Judiciary Committee’s subcommittee on intellectual property rights, a committee that had not met since 2007. As the new gatekeeper for laws and oversight of the nation’s patent system, the North Carolina Republican signaled he was determined to make it easier for American businesses to benefit from it — a welcome message to the drugmakers who already leverage patents to block competitors and keep prices high. Less than three weeks after introducing a bill that would make it harder for generic drugmakers to compete with patent-holding drugmakers, Tillis opened the subcommittee’s first meeting on Feb. 26, 2019, with his own vow.
“From the United States Patent and Trademark Office to the State Department’s Office of Intellectual Property Enforcement, no department or bureau is too big or too small for this subcommittee to take interest,” he said. “And we will.” In the months that followed, tens of thousands of dollars flowed from pharmaceutical companies toward his campaign, as well as to the campaigns of other subcommittee members — including some who promised to stop drugmakers from playing money-making games with the patent system, like Sen. John Cornyn (R-Texas). Tillis received more than $156,000 from political action committees tied to drug manufacturers in 2019, more than any other member of Congress, a new analysis of KHN’s Pharma Cash to Congress database shows. Sen. Chris Coons (D-Del.), the top Democrat on the subcommittee who worked side by side with Tillis, received more than $124,000 in drugmaker contributions last year, making him the No. 3 recipient in Congress. No. 2 was Sen. Mitch McConnell (R-Ky.), who took in about $139,000. As the Senate majority leader, he controls what legislation gets voted on by the Senate. Neither Tillis nor Coons sits on the Senate committees that introduced legislation last year to lower drug prices through methods like capping price increases to the rate of inflation. Of the four senators who drafted those bills, none received more than $76,000 from drug manufacturers in 2019. Tillis and Coons spent much of last year working on significant legislation that would expand the range of items eligible to be patented — a change that some experts say would make it easier for companies developing medical tests and treatments to own things that aren’t traditionally inventions, like genetic code. They have not yet officially introduced a bill. 
As obscure as patents might seem in an era of public outrage over drug prices, the fact that drugmakers gave most to the lawmakers working to change the patent system belies how important securing the exclusive right to market a drug, and keep competitors at bay, is to their bottom line. “Pharma will fight to the death to preserve patent rights,” said Robin Feldman, a professor at the UC Hastings College of the Law in San Francisco who is an expert in intellectual property rights and drug pricing. “Strong patent rights are central to the games drug companies play to extend their monopolies and keep prices high.” Campaign contributions, closely tracked by the Federal Election Commission, are among the few windows into how much money flows from the political groups of drugmakers and other companies to the lawmakers and their campaigns. Private companies generally give money to members of Congress to encourage them to listen to the companies, typically through lobbyists, whose activities are difficult to track. They may also communicate through so-called dark money groups, which are not required to report who gives them money. Over the past 10 years, the pharmaceutical industry has spent about $233 million per year on lobbying, according to a new study published in JAMA Internal Medicine. That is more than any other industry, including the oil and gas industry. Why Patents Matter Developing and testing a new drug, and gaining approval from the Food and Drug Administration, can take years and cost hundreds of millions of dollars. Drugmakers are generally granted a six- or seven-year exclusivity period to recoup their investments. But drugmakers have found ways to extend that period of exclusivity, sometimes accumulating hundreds of patents on the same drug and blocking competition for decades. One method is to patent many inventions beyond a drug’s active ingredient, such as patenting the injection device that administers the drug. 
Keeping that arrangement intact, or expanding what can be patented, is where lawmakers come in. Lawmakers Dig In Tillis’ home state of North Carolina is also home to three major research universities and, not coincidentally, multiple drugmakers’ headquarters, factories and other facilities. From his swearing-in in 2015 to the end of 2018, Tillis received about $160,000 from drugmakers based there or beyond. He almost matched that four-year total in 2019 alone, in the midst of a difficult reelection campaign to be decided this fall. He has raised nearly $10 million for his campaign, with lobbyists among his biggest contributors, according to OpenSecrets. Daniel Keylin, a spokesperson for Tillis, said Tillis and Coons, the subcommittee’s top Democrat, are working to overhaul the country’s “antiquated intellectual property laws.” Keylin said the bipartisan effort “protects the development and access to affordable, lifesaving medication for patients,” adding: “No contribution has any impact on how Tillis votes or legislates.” Tillis signaled his openness to the drug industry early on. The day before being named chairman, he reintroduced a bill that would limit the options generic drugmakers have to challenge allegedly invalid patents, effectively helping brand-name drugmakers protect their monopolies. Former Sen. Orrin Hatch (R-Utah), whose warm relationship with the drug industry was well-known, had introduced the legislation, the Hatch-Waxman Integrity Act, just days before his retirement in 2018. At his subcommittee’s first hearing, Tillis said the members would rely on testimony from private businesses to guide them.
He promised to hold hearings on patent eligibility standards and “reforms to the Patent Trial and Appeal Board.” In practice, the Hatch-Waxman Integrity Act would require generics makers challenging another drugmaker’s patent to either take their claim to the Patent Trial and Appeal Board, which acts as a sort of cheaper, faster quality check to catch bad patents, or file a lawsuit. A study released last year found that, since Congress created the Patent Trial and Appeal Board in 2011, it has narrowed or overturned about 51% of the drugmaker patents that generics makers have challenged. Feldman said the drug industry “went berserk” over the number of patents the board changed and has been eager to limit use of the board as much as possible. Patent reviewers are often stretched thin and sometimes make mistakes, said Aaron Kesselheim, a Harvard Medical School professor who is an expert in intellectual property rights and drug development. Limiting the ways to challenge patents, as Tillis’ bill would, does not strengthen the patent system, he said. “You want overlapping oversight for a system that is as important and fundamental as this system is,” he said. As promised, Tillis and Coons also spent much of the year working on so-called Section 101 reform regarding what is eligible to be patented — “a very major change” that “would overturn more than a century of Supreme Court law,” Feldman said. Sean Coit, Coons’ spokesperson, said lowering drug prices is one of the senator’s top priorities and pointed to Coons’ support for legislation the pharmaceutical industry opposes.
“One of the reasons Senator Coons is leading efforts in Congress to fix our broken patent system is so that life-saving medicines can actually be developed and produced at affordable prices for every American,” Coit wrote in an email, adding that “his work on Section 101 reform has brought together advocates from across the spectrum, including academics and health experts.” In August, when much of Capitol Hill had emptied for summer recess, Tillis and Coons held closed-door meetings to preview their legislation to stakeholders, including the Pharmaceutical Research and Manufacturers of America, or PhRMA, the brand-name drug industry’s lobbying group. “We regularly engage with members of Congress in both parties to advance practical policy solutions that will lower medicine costs for patients,” said Holly Campbell, a PhRMA spokesperson. Neither proposal has received a public hearing. In the 30 days before Tillis and Coons were named leaders of the revived subcommittee, drug manufacturers gave them $21,000 from their political action committees. In the 30 days following that first hearing, Tillis and Coons received $60,000. Among their donors were PhRMA; the Biotechnology Innovation Organization, the biotech lobbying group; and five of the seven drugmakers whose executives — as Tillis laid out a pharma-friendly agenda for his new subcommittee — were getting chewed out by senators in a different hearing room over patent abuse. Cornyn Goes After Patent Abuse Richard Gonzalez, chief executive of AbbVie Inc., the company known for its top-selling drug, Humira, had spent the morning sitting stone-faced before the Senate Finance Committee as, one after another, senators excoriated him and six other executives of brand-name drug manufacturers over how they price their products. Cornyn brought up AbbVie’s more than 130 patents on Humira. Hadn’t the company blocked its competition? 
Cornyn asked Gonzalez, who carefully explained how AbbVie’s lawsuit against a generics competitor and subsequent licensing deal was not what he would describe as anti-competitive behavior. “I realize it may not be popular,” Gonzalez said. “But I think it is a reasonable balance.” A minute later, Cornyn turned to Sen. Chuck Grassley (R-Iowa), who, like Cornyn, was also a member of the revived intellectual property subcommittee. This is worth looking into with “our Judiciary Committee authorities as well,” Cornyn said, effectively threatening legislation on patent abuse. The next day, Mylan, one of the largest producers of generic drugs, gave Cornyn $5,000, FEC records show. The company had not donated to Cornyn in years. By midsummer, every drug company that sent an executive to that hearing had given money to Cornyn, including AbbVie. Cornyn, who faces perhaps the most difficult reelection fight of his career this fall, ranks No. 6 among members of Congress in drugmaker PAC contributions last year, KHN’s analysis shows. He received about $104,000. Cornyn has received about $708,500 from drugmakers since 2007, KHN’s database shows. According to OpenSecrets, he has raised more than $17 million for this year’s reelection campaign. Cornyn’s office declined to comment. On May 9, Cornyn and Sen. Richard Blumenthal (D-Conn.) introduced the Affordable Prescriptions for Patients Act, which proposed to define two tactics used by drug companies to make it easier for the Federal Trade Commission to prosecute them: “product-hopping,” when drugmakers withdraw older versions of their drugs from the market to push patients toward newer, more expensive ones, and “patent-thicketing,” when drugmakers amass a series of patents to drag out their exclusivity and slow rival generics makers, who must challenge those patents to enter the market once the initial exclusivity ends. PhRMA opposed the bill. The next day, it gave Cornyn $1,000. 
Cornyn and Blumenthal’s bill would have been “very tough on the techniques that pharmaceutical companies use to extend patent protections and to keep prices high,” Feldman said. “The pharmaceutical industry lobbied tooth and nail against it,” she said. “And when the bill finally came out of committee, the strongest provisions — the patent-thicketing provisions — had been stripped.” In the months after the bill cleared committee and waited to be taken up by the Senate, Cornyn blamed Senate Democrats for blocking the bill while trying to secure votes on legislation with more direct controls on drug prices. The Senate has not voted on the bill.

They choose Infrastructure as backlash – the bill costs Pharma millions – lobbyists can derail the Agenda.

Brennan 8-2 — Zachary Brennan, 8-2-2021, "How the biopharma industry is helping to pay for the bipartisan infrastructure bill," https://endpts.com/how-the-biopharma-industry-is-helping-to-pay-for-the-bipartisan-infrastructure-bill/ (Senior Editor at Endpoints News) Elmer

Senators on Sunday finalized the text of a massive, bipartisan infrastructure bill that contains little that might impact the biopharma industry other than two ways the legislators are planning to pay for the $1.2 trillion deal. On the one hand, senators are seeking to further delay a Trump-era Medicare Part D rule related to drug rebates, this time until 2026. Senators claim the rule could end up saving about $49 billion (and that number increased this week to $51 billion), but the PBM industry has attacked it as it would remove rebates from a safe harbor that provides protection from federal anti-kickback laws. The pharmaceutical industry, however, is in favor of the rule and opposes this latest delay as it continues to point its finger at the PBM industry for the rising cost of out-of-pocket expenses.
Debra DeShong, EVP of public affairs at PhRMA, said via email: Despite railing against high drug costs on the campaign trail, lawmakers are threatening to gut a rule that would provide patients meaningful relief at the pharmacy. If it is included in the infrastructure package, this proposal will provide health insurers and drug middlemen a windfall and turn Medicare into a piggybank to fund projects that have nothing to do with lowering out-of-pocket costs for medicines. This would be an unconscionable move that robs patients of the prescription drug savings they deserve to help fill potholes and fund other infrastructure projects. The other provision in the infrastructure bill, which is estimated to save about $3 billion, would save money for Medicare on discarded medications from large, single-use drug vials. Manufacturers will be required to pay refunds for such discarded drugs, and each manufacturer will be subject to periodic audits on the refunds issued. If manufacturers don’t comply, HHS can fine them the refund amount that they would have paid plus 25%. Drugs that will be excluded from these refund payments include radiopharmaceuticals or imaging agents, as well as those that require filtration during the drug preparation process. So do these two pay-fors mean that the pharma industry is getting off without any serious drug pricing reforms? Not quite, according to Alex Lawson, executive director of Social Security Works. Lawson told Endpoints News in an interview that he still fully expects major drug pricing reforms to make their way through Congress between now and the end of September as Sen. Ron Wyden (D-OR) refines his plan, part of an early fall spending package. Senate Majority Leader Chuck Schumer has promised both the infrastructure and spending package will pass before the Senate leaves for August recess. At the very least in terms of drug pricing provisions, expect to see a combination of the Wyden bill he co-wrote with Sen.
Chuck Grassley (R-IA) last year, alongside further Medicare negotiations, Lawson said. “Talk is still optimistic,” Lawson said on the prospects of a drug pricing deal getting done, while noting that pharmaceutical company lobbyists are swarming Capitol Hill at the moment because of not just drug pricing plans, but tax provisions and the TRIPS waiver that the biopharma industry is worried about. “These are challenges to their entire existence, so they’re willing to protect them at any cost,” Lawson said, noting the target for drug pricing is about $500 billion in savings. As the House has jetted off to enjoy what might be an abbreviated summer recess, the Senate has just this week to get its work done, unless its recess is cut short too. “There’s a real possibility that the whole thing blows up and we get nothing on either side,” Lawson said.

Bill key to prevent infrastructure disaster from Grid Collapse

PPG, 3/4/2021 (MAR 4, 2021 9:00 PM, Pittsburgh Post-Gazette Editorial Board, “Invest in infrastructure,” https://www.post-gazette.com/opinion/editorials/2021/03/05/Invest-in-infrastructure/stories/202102270028, recut by JMP)

Now is the time for a reckoning, a realization: While it’s important to study the past to avoid repeating the same mistakes, the country must also look to its future and see the obvious — that America’s infrastructure as a whole needs some serious upkeep. Democrats and Republicans alike have flirted with the idea of a sweeping infrastructure bill in recent years, and President Joe Biden’s team is working to outline such legislation. These efforts should proceed swiftly — now is the time for Congress to invest in infrastructure, not only to help prevent crises, but also to jump-start an economy mired in the coronavirus pandemic. Despite being one of the richest countries in the world, the U.S.
seems constantly to hover on the edge of disaster, with news of natural forces smashing through power grids and levees and fire prevention strategies on a yearly or monthly basis. Texas is only the most recent state to have been pushed over the edge. The American Society of Civil Engineers just this week gave America’s infrastructure an overall grade of C-minus in its quadrennial report card. The last grade was D-plus and that report cited decades of underfunding and unheeded recommendations. C-minus is an improvement but deserves not just federal attention but actual intervention. The report notes “we are heading in the right direction, but a lot of work remains.” There is opportunity in the recent economic and environmental devastation that grabs headlines and breaks hearts. In the aftermath of the Great Depression, the government put millions to work improving parks and building roads and bridges and airports. President Dwight Eisenhower’s interstate highway system remains the life veins of interstate travel. A new and vigorous infrastructure package for America would fix what needs to be fixed and offer the promise of an economic boon. The purpose of the federal government is to address the needs of American society in a way that can’t be tackled by states in a piecemeal fashion. What has happened in recent days within The Lone Star State demonstrates keenly that this is the time — actually past the time — that our federal leaders must shore up the foundations of our federation. Congress should act swiftly to lead states in reversing the entropy chewing away at America’s foundations. Until this happens, society stands on shifting sands.

Grid collapse causes extinction.

Greene ’19 — Sherrell R. Greene; Nuclear Engineering M.S. degrees from the University of Tennessee, recognized subject matter expert in nuclear reactor safety, nuclear fuel cycle technologies, and advanced reactor concept development, worked at the Oak Ridge National Laboratory (ORNL) for over three decades, as Director of Research Reactor Development Programs and Director of Nuclear Technology Programs; “Enhancing Electric Grid, Critical Infrastructure, and Societal Resilience with Resilient Nuclear Power Plants (rNPPs),” Nuclear Technology 205(3), https://ans.tandfonline.com/doi/pdf/10.1080/00295450.2018.1505357?needAccess=true recut gord0

There are a variety of events that could deal crippling blows to a nation’s Grid, Critical Infrastructure, and social fabric. The types of catastrophes under consideration here are “very bad day” scenarios that might result from severe GMDs induced by solar CMEs, HEMP attacks, cyber attacks, etc.5 As briefly discussed in Sec. III.C, the probability of a GMD of the magnitude of the 1859 Carrington Event is now believed to be on the order of 1%/year. The Earth narrowly missed (by only several days) intercepting a CME stream in July 2012 that would have created a GMD equal to or larger than the Carrington Event.41 Lloyd’s, in its 2013 report, “Solar Storm Risk to the North American Electric Grid,” 42 stated the following: “A Carrington-level, extreme geomagnetic storm is almost inevitable in the future…The total U.S.
population at risk of extended power outage from a Carrington-level storm is between 20-40 million, with durations of 16 days to 1-2 years…The total economic cost for such a scenario is estimated at $0.6-2.6 trillion USD.” Analyses conducted subsequent to the Lloyd’s assessment indicated the geographical area impacted by the CME would be larger than that estimated in Lloyd’s analysis (extending farther northward along the New England coast of the United States and in the state of Minnesota),43 and that the actual consequences of such an event could actually be greater than estimated by Lloyd’s. Based on “Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack: Critical National Infrastructures” to Congress in 2008 (Ref. 39), a HEMP attack over the Central U.S. could impact virtually the entire North American continent. The consequences of such an event are difficult to quantify with confidence. Experts affiliated with the aforementioned Commission and others familiar with the details of the Commission’s work have stated in Congressional testimony that such an event could “kill up to 90 percent of the national population through starvation, disease, and societal collapse.” 44,45 Most of these consequences are either direct or indirect impacts of the predicted collapse of virtually the entire U.S. Critical Infrastructure system in the wake of the attack. Last, recent analyses by both the U.S. Department of Energy46 and the U.S. National Academies of Sciences, Engineering, and Medicine47 have concluded that cyber threats to the U.S. Grid from both state-level and substate-level entities are likely to grow in number and sophistication in the coming years, posing a growing threat to the U.S. Grid. These three “very bad day” scenarios are not creations of overzealous science fiction writers.
A variety of mitigating actions to reduce both the vulnerability and the consequences of these events has been identified, and some are being implemented. However, the fact remains that events such as those described here have the potential to change life as we know it in the United States and other developed nations in the 21st century, whether the events occur individually, or simultaneously, and with or without coordinated physical attacks on Critical Infrastructure assets.
9/25/21
SO - DA - Infrastructure v3
Tournament: Voices | Round: Doubles | Opponent: Archbishop Mitty AS | Judge: Quentin Clark, Gordon Krauss, Samantha McLoughlin

Reconciliation passes now – the delay gives Biden time to work magic in the wings, but PC and focus are key

Herb et al. 10-1 (Jeremy Herb, CNN Politics Reporter, Kevin Liptak, Reporter, Phil Mattingly, Senior White House Correspondent, Lauren Fox, CNN Congressional Correspondent, Melanie Zanona, Capitol Hill Reporter, “'It doesn't matter when': How Biden gave feuding House Democrats an off-ramp,” CNN Politics, 10-1-21, https://www.cnn.com/2021/10/01/politics/dems-biden-infrastructure-delay/index.html)//babcii

(CNN) President Joe Biden didn't travel to Capitol Hill on Friday to close the deal, or to rally the troops through a final legislative gantlet. There was nothing cinematic -- or dramatic -- about the trip down Pennsylvania Avenue for the 36-year Senate veteran, who has more than once informed aides of his unparalleled ability to read, speak to and corral lawmakers. Instead, in remarks that lasted less than 30 minutes, Biden served a singular purpose: a presidential pressure relief valve. In a week deemed an "inflection point" by top aides, where the President was rarely seen in public as his entire domestic agenda hung in the balance, it marked a seemingly low bar to clear for success. There would be no miraculous deal to unlock the formula to move forward on the two key components Democrats are attempting to pass. The promised vote on the $1.2 trillion infrastructure bill would not materialize. But after days of intraparty warfare and feverish late-night negotiations, a reset was desperately needed -- and the best Biden could offer.
In delivering an unscripted and at times unwieldy message that the infrastructure vote wasn't likely to happen -- and the top-line cost of the economic and climate package was going to have to come down -- the President made the bet that he can keep both sides of the intraparty feud on board in the critical days and weeks to follow. White House and Democratic leaders will now launch an all-out effort to win over the two Senate Democratic holdouts, Sens. Joe Manchin of West Virginia and Kyrsten Sinema of Arizona, as they shape what the multitrillion-dollar economic and social package looks like -- and how high its price tag will be. Congressional Democrats and White House officials say progress was made this week getting all sides closer to an agreement on the massive economic, climate and health care spending package that Democratic leaders intend to pair with the bipartisan $1.2 trillion infrastructure bill that's passed the Senate already. But in the House, moderate and progressive Democrats were engaged in a slow-motion game of chicken over the infrastructure vote, with moderates demanding a vote on the infrastructure bill this week that had been pledged by House Speaker Nancy Pelosi -- and progressives standing firm that they would vote it down without an agreement on the framework for the larger economic package. On Friday, Biden sought the off-ramp. It marked his most direct effort to date to cajole the House Democratic caucus at a moment when its members have grown increasingly frustrated about the amount of attention the President and his team have paid to their side of the Capitol. Though well received with several ovations, the appearance didn't serve to salve those wounds entirely -- with some saying afterward that his pep talk had actually exacerbated them. But it did deliver a critical message and a consequential moment, multiple members said: Compromise now -- or end up with nothing. 
It's likely too soon to say whether the debate this week is just a preamble to Democrats' enacting their historic agenda or if it's a feud that leads to legislative defeat, hobbling the President's party ahead of a tough midterm election cycle with little to show for controlling both chambers of Congress and the White House. 'Who knows what label I get' After the roughly half hour meeting with the President, Democrats described a leader who was in his element and not working to change minds as much as remind members of their shared and unified goals as a caucus. Throughout the infrastructure push, Biden has made clear to Democrats that party unity -- or, in some participants' interpretation, loyalty -- is of utmost importance with only the slimmest of majorities in the House and Senate. He tried to break down the stalemate and the tensions that have hung over the party for weeks, reminding them that he's not on one side or the other. At one point, he made a reference to his own political ideology, saying, "Who knows what label I get." To which Pelosi replied: "President," prompting loud laughter from the room. Biden also talked about how he had redone his office to have paintings hung of Lincoln and FDR -- "A deeply divided country and the biggest economic transformation," said Rep. David Cicilline of Rhode Island, "which is kind of the moment we're in." White House officials think the President accomplished what he went to do on Capitol Hill: Remind Democrats of what is at stake while relieving some of the pressure that had built up over the last several days and reiterating his commitment to passing both pieces of legislation. With that done, officials believe, negotiators have a better environment to be able to push toward a deal. "We're going to get this done," Biden told reporters as he left the meeting. "It doesn't matter when. It doesn't, whether it's in six minutes, six days or six weeks -- we're going to get it done." 
'As long as we're still alive' Even before Friday, Biden had alluded in recent days to negotiations slipping beyond the week's end. With the stakes simply too high -- on both the political and policy fronts -- there are no plans to walk away. "It may not be by the end of the week," the President had responded when asked Monday how he would define success at the end of this week. "I hope it's by the end of the week." "But as long as we're still alive ...," Biden said before shifting course in his thought.

Attacks on Pharmaceutical Profits trigger Mod Dem Backlash – they disrupt unity.

Cohen 9-6 — Joshua Cohen, 9-6-2021, "Democrats’ Plans To Introduce Prescription Drug Pricing Reform Face Formidable Obstacles," https://www.forbes.com/sites/joshuacohen/2021/09/06/democrats-plans-to-introduce-prescription-drug-pricing-reform-face-obstacles/?sh=37a269917395 (independent healthcare analyst with over 22 years of experience analyzing healthcare and pharmaceuticals) Elmer

There’s considerable uncertainty regarding passage with a simple majority of the 2021 massive budget reconciliation bill. Last week, Senator Joe Manchin called on Democrats to pause pushing forward the budget reconciliation bill. If Manchin winds up saying no to the bill, this would scuttle it as the Democrats can’t afford to lose a single Senator. And, there’s speculation that provisions to reduce prescription drug prices may be watered down and not incorporate international price referencing. Additionally, reduced prices derived through Medicare negotiation may not be able to be applied to those with employer-based coverage.
While the progressive wing of the Democratic Party supports drug pricing reform, several key centrist Democrats in both the House and Senate appear to be uncomfortable with particular aspects of the budget reconciliation bill, including a potential deal-breaker, namely the potential negative impact of drug price controls on the domestic pharmaceutical industry, as well as long-term patient access to new drugs. A paper released in 2019 by the nonpartisan Congressional Budget Office found that the proposed legislation, H.R. 3, would reduce global revenue for new drugs by 19%, leading to 8 fewer drugs approved in the U.S. between 2020 and 2029, and 30 fewer drugs over the next decade. And, a new report from the CBO reinforces the message that drug pricing legislation under consideration in Congress could lead to fewer new drugs being developed and launched. Intense lobbying efforts from biopharmaceutical industry groups are underway, warning of what they deem are harms from price controls in the form of diminished patient access to new innovations. The argument, based in part on assumptions and modeling included in the CBO reports, asserts that price controls would dampen investment critical to the biopharmaceutical industry’s pipeline of drugs and biologics. This won’t sway most Democrats, but has been a traditional talking point in the Republican Party for decades, and may convince some centrist Democrats to withdraw backing of provisions that in their eyes stymie pharmaceutical innovation. If the budget reconciliation bill would fail to garner a majority, a pared down version of H.R. 3, or perhaps a new bill altogether, with Senator Wyden spearheading the effort, could eventually land in the Senate. But, a similar set of provisos would apply, as majority support in both chambers would be far from a sure thing.
In brief, Democrats’ plans at both the executive and legislative branch levels to introduce prescription drug pricing reform encounter challenges which may prevent impactful modifications from taking place. Sinema specifically jumps Ship. Hancock and Lucas 20 Jay Hancock and Elizabeth Lucas 5-29-2020 "A Senator From Arizona Emerges As A Pharma Favorite" https://khn.org/news/a-senator-from-arizona-emerges-as-a-pharma-favorite/ (Senior Correspondent, joined KHN in 2012 from The Baltimore Sun, where he wrote a column on business and finance. Previously he covered the State Department and the economics beat for The Sun and health care for The Virginian-Pilot of Norfolk and the Daily Press of Newport News. He has a bachelor’s degree from Colgate University and a master’s in journalism from Northwestern University.)Elmer Sen. Kyrsten Sinema formed a congressional caucus to raise “awareness of the benefits of personalized medicine” in February. Soon after that, employees of pharmaceutical companies donated $35,000 to her campaign committee. Amgen gave $5,000. So did Genentech and Merck. Sanofi, Pfizer and Eli Lilly all gave $2,500. Each of those companies has invested heavily in personalized medicine, which promises individually tailored drugs that can cost a patient hundreds of thousands of dollars. Sinema is a first-term Democrat from Arizona but has nonetheless emerged as a pharma favorite in Congress as the industry steers through a new political and economic landscape formed by the coronavirus. She is a leading recipient of pharma campaign cash even though she’s not up for reelection until 2024 and lacks major committee or subcommittee leadership posts. For the 2019-20 election cycle through March, political action committees run by employees of drug companies and their trade groups gave her $98,500 in campaign funds, Kaiser Health News’ Pharma Cash to Congress database shows. 
That stands out in a Congress in which a third of the members got no pharma cash for the period and half of those who did got $10,000 or less. The contributions give companies a chance to cultivate Sinema as she restocks from a brutal 2018 election victory that cost nearly $25 million. Altogether, pharma PACs have so far given $9.2 million to congressional campaign chests in this cycle, compared with $9.4 million at this point in the 2017-18 period, a sustained surge as the industry has responded to complaints about soaring prices. Sinema’s pharma haul was twice that of Sen. Susan Collins of Maine, considered one of the most vulnerable Republicans in November, and approached that of fellow Democrat Steny Hoyer, the powerful House majority leader from Maryland. It all adds up to a bet by drug companies that the 43-year-old Sinema, first elected to the Senate in 2018, will gain influence in coming years and serve as an industry ally in a party that also includes many lawmakers harshly critical of high drug prices and the companies that set them. Pharma backlash independently turns Case. Huetteman 19 Emmarie Huetteman 2-26-2019 “Senators Who Led Pharma-Friendly Patent Reform Also Prime Targets For Pharma Cash” https://khn.org/news/senators-who-led-pharma-friendly-patent-reform-also-prime-targets-for-pharma-cash/ (former NYT Congressional correspondent with an MA in public affairs reporting from Northwestern University’s Medill School)Elmer Early last year, as lawmakers vowed to curb rising drug prices, Sen. Thom Tillis was named chairman of the Senate Judiciary Committee’s subcommittee on intellectual property rights, a committee that had not met since 2007. As the new gatekeeper for laws and oversight of the nation’s patent system, the North Carolina Republican signaled he was determined to make it easier for American businesses to benefit from it — a welcome message to the drugmakers who already leverage patents to block competitors and keep prices high. 
Less than three weeks after introducing a bill that would make it harder for generic drugmakers to compete with patent-holding drugmakers, Tillis opened the subcommittee’s first meeting on Feb. 26, 2019, with his own vow. “From the United States Patent and Trademark Office to the State Department’s Office of Intellectual Property Enforcement, no department or bureau is too big or too small for this subcommittee to take interest,” he said. “And we will.” In the months that followed, tens of thousands of dollars flowed from pharmaceutical companies toward his campaign, as well as to the campaigns of other subcommittee members — including some who promised to stop drugmakers from playing money-making games with the patent system, like Sen. John Cornyn (R-Texas). Tillis received more than $156,000 from political action committees tied to drug manufacturers in 2019, more than any other member of Congress, a new analysis of KHN’s Pharma Cash to Congress database shows. Sen. Chris Coons (D-Del.), the top Democrat on the subcommittee who worked side by side with Tillis, received more than $124,000 in drugmaker contributions last year, making him the No. 3 recipient in Congress. No. 2 was Sen. Mitch McConnell (R-Ky.), who took in about $139,000. As the Senate majority leader, he controls what legislation gets voted on by the Senate. Neither Tillis nor Coons sits on the Senate committees that introduced legislation last year to lower drug prices through methods like capping price increases to the rate of inflation. Of the four senators who drafted those bills, none received more than $76,000 from drug manufacturers in 2019. Tillis and Coons spent much of last year working on significant legislation that would expand the range of items eligible to be patented — a change that some experts say would make it easier for companies developing medical tests and treatments to own things that aren’t traditionally inventions, like genetic code. 
They have not yet officially introduced a bill. As obscure as patents might seem in an era of public outrage over drug prices, the fact that drugmakers gave most to the lawmakers working to change the patent system belies how important securing the exclusive right to market a drug, and keep competitors at bay, is to their bottom line. “Pharma will fight to the death to preserve patent rights,” said Robin Feldman, a professor at the UC Hastings College of the Law in San Francisco who is an expert in intellectual property rights and drug pricing. “Strong patent rights are central to the games drug companies play to extend their monopolies and keep prices high.” Campaign contributions, closely tracked by the Federal Election Commission, are among the few windows into how much money flows from the political groups of drugmakers and other companies to the lawmakers and their campaigns. Private companies generally give money to members of Congress to encourage them to listen to the companies, typically through lobbyists, whose activities are difficult to track. They may also communicate through so-called dark money groups, which are not required to report who gives them money. Over the past 10 years, the pharmaceutical industry has spent about $233 million per year on lobbying, according to a new study published in JAMA Internal Medicine. That is more than any other industry, including the oil and gas industry. Why Patents Matter Developing and testing a new drug, and gaining approval from the Food and Drug Administration, can take years and cost hundreds of millions of dollars. Drugmakers are generally granted a six- or seven-year exclusivity period to recoup their investments. But drugmakers have found ways to extend that period of exclusivity, sometimes accumulating hundreds of patents on the same drug and blocking competition for decades. 
One method is to patent many inventions beyond a drug’s active ingredient, such as patenting the injection device that administers the drug. Keeping that arrangement intact, or expanding what can be patented, is where lawmakers come in. Lawmakers Dig In Tillis’ home state of North Carolina is also home to three major research universities and, not coincidentally, multiple drugmakers’ headquarters, factories and other facilities. From his swearing-in in 2015 to the end of 2018, Tillis received about $160,000 from drugmakers based there or beyond. He almost matched that four-year total in 2019 alone, in the midst of a difficult reelection campaign to be decided this fall. He has raised nearly $10 million for his campaign, with lobbyists among his biggest contributors, according to OpenSecrets. Daniel Keylin, a spokesperson for Tillis, said Tillis and Coons, the subcommittee’s top Democrat, are working to overhaul the country’s “antiquated intellectual property laws.” Keylin said the bipartisan effort “protects the development and access to affordable, lifesaving medication for patients,” adding: “No contribution has any impact on how Tillis votes or legislates.” Tillis signaled his openness to the drug industry early on. The day before being named chairman, he reintroduced a bill that would limit the options generic drugmakers have to challenge allegedly invalid patents, effectively helping brand-name drugmakers protect their monopolies. Former Sen. Orrin Hatch (R-Utah), whose warm relationship with the drug industry was well-known, had introduced the legislation, the Hatch-Waxman Integrity Act, just days before his retirement in 2018. At his subcommittee’s first hearing, Tillis said the members would rely on testimony from private businesses to guide them. 
He promised to hold hearings on patent eligibility standards and “reforms to the Patent Trial and Appeal Board.” In practice, the Hatch-Waxman Integrity Act would require generics makers challenging another drugmaker’s patent to either take their claim to the Patent Trial and Appeal Board, which acts as a sort of cheaper, faster quality check to catch bad patents, or file a lawsuit. A study released last year found that, since Congress created the Patent Trial and Appeal Board in 2011, it has narrowed or overturned about 51 percent of the drugmaker patents that generics makers have challenged. Feldman said the drug industry “went berserk” over the number of patents the board changed and has been eager to limit use of the board as much as possible. Patent reviewers are often stretched thin and sometimes make mistakes, said Aaron Kesselheim, a Harvard Medical School professor who is an expert in intellectual property rights and drug development. Limiting the ways to challenge patents, as Tillis’ bill would, does not strengthen the patent system, he said. “You want overlapping oversight for a system that is as important and fundamental as this system is,” he said. As promised, Tillis and Coons also spent much of the year working on so-called Section 101 reform regarding what is eligible to be patented — “a very major change” that “would overturn more than a century of Supreme Court law,” Feldman said. Sean Coit, Coons’ spokesperson, said lowering drug prices is one of the senator’s top priorities and pointed to Coons’ support for legislation the pharmaceutical industry opposes. 
“One of the reasons Senator Coons is leading efforts in Congress to fix our broken patent system is so that life-saving medicines can actually be developed and produced at affordable prices for every American,” Coit wrote in an email, adding that “his work on Section 101 reform has brought together advocates from across the spectrum, including academics and health experts.” In August, when much of Capitol Hill had emptied for summer recess, Tillis and Coons held closed-door meetings to preview their legislation to stakeholders, including the Pharmaceutical Research and Manufacturers of America, or PhRMA, the brand-name drug industry’s lobbying group. “We regularly engage with members of Congress in both parties to advance practical policy solutions that will lower medicine costs for patients,” said Holly Campbell, a PhRMA spokesperson. Neither proposal has received a public hearing. In the 30 days before Tillis and Coons were named leaders of the revived subcommittee, drug manufacturers gave them $21,000 from their political action committees. In the 30 days following that first hearing, Tillis and Coons received $60,000. Among their donors were PhRMA; the Biotechnology Innovation Organization, the biotech lobbying group; and five of the seven drugmakers whose executives — as Tillis laid out a pharma-friendly agenda for his new subcommittee — were getting chewed out by senators in a different hearing room over patent abuse. Cornyn Goes After Patent Abuse Richard Gonzalez, chief executive of AbbVie Inc., the company known for its top-selling drug, Humira, had spent the morning sitting stone-faced before the Senate Finance Committee as, one after another, senators excoriated him and six other executives of brand-name drug manufacturers over how they price their products. Cornyn brought up AbbVie’s more than 130 patents on Humira. Hadn’t the company blocked its competition? 
Cornyn asked Gonzalez, who carefully explained how AbbVie’s lawsuit against a generics competitor and subsequent licensing deal was not what he would describe as anti-competitive behavior. “I realize it may not be popular,” Gonzalez said. “But I think it is a reasonable balance.” A minute later, Cornyn turned to Sen. Chuck Grassley (R-Iowa), who, like Cornyn, was also a member of the revived intellectual property subcommittee. This is worth looking into with “our Judiciary Committee authorities as well,” Cornyn said, effectively threatening legislation on patent abuse. The next day, Mylan, one of the largest producers of generic drugs, gave Cornyn $5,000, FEC records show. The company had not donated to Cornyn in years. By midsummer, every drug company that sent an executive to that hearing had given money to Cornyn, including AbbVie. Cornyn, who faces perhaps the most difficult reelection fight of his career this fall, ranks No. 6 among members of Congress in drugmaker PAC contributions last year, KHN’s analysis shows. He received about $104,000. Cornyn has received about $708,500 from drugmakers since 2007, KHN’s database shows. According to OpenSecrets, he has raised more than $17 million for this year’s reelection campaign. Cornyn’s office declined to comment. On May 9, Cornyn and Sen. Richard Blumenthal (D-Conn.) introduced the Affordable Prescriptions for Patients Act, which proposed to define two tactics used by drug companies to make it easier for the Federal Trade Commission to prosecute them: “product-hopping,” when drugmakers withdraw older versions of their drugs from the market to push patients toward newer, more expensive ones, and “patent-thicketing,” when drugmakers amass a series of patents to drag out their exclusivity and slow rival generics makers, who must challenge those patents to enter the market once the initial exclusivity ends. PhRMA opposed the bill. The next day, it gave Cornyn $1,000. 
Cornyn and Blumenthal’s bill would have been “very tough on the techniques that pharmaceutical companies use to extend patent protections and to keep prices high,” Feldman said. “The pharmaceutical industry lobbied tooth and nail against it,” she said. “And when the bill finally came out of committee, the strongest provisions — the patent-thicketing provisions — had been stripped.” In the months after the bill cleared committee and waited to be taken up by the Senate, Cornyn blamed Senate Democrats for blocking the bill while trying to secure votes on legislation with more direct controls on drug prices. The Senate has not voted on the bill. Infrastructure reform solves Existential Climate Change – it results in spill-over. USA Today 7-20 7-20-2021 "Climate change is at 'code red' status for the planet, and inaction is no longer an option" https://www.usatoday.com/story/opinion/todaysdebate/2021/07/20/climate-change-biden-infrastructure-bill-good-start/7877118002/Elmer Not long ago, climate change for many Americans was like a distant bell. News of starving polar bears or melting glaciers was tragic and disturbing, but otherworldly. Not any more. Top climate scientists from around the world warned of a "code red for humanity" in a report issued Monday that says severe, human-caused global warming has become unassailable. Proof of the findings by the United Nations' Intergovernmental Panel on Climate Change is now a factor of daily life. Due to intense heat waves and drought, 107 wildfires – including the largest ever in California – are now raging across the West, consuming 2.3 million acres. Earlier this summer, hundreds of people died in unprecedented triple-digit heat in Oregon, Washington and western Canada, when a "heat dome" of enormous proportions settled over the region for days. Some victims brought by stretcher into crowded hospital wards had body temperatures so high, their nervous systems had shut down. 
People collapsed trying to make their way to cooling shelters. Heat-trapping greenhouse gases Scientists say the event was almost certainly made worse and more intransigent by human-caused climate change. They attribute it to a combination of warming Arctic temperatures and a growing accumulation of heat-trapping greenhouse gases caused by the burning of fossil fuels. The consequences of what mankind has done to the atmosphere are now inescapable. Periods of extreme heat are projected to double in the lower 48 states by 2100. Heat deaths are far outpacing every other form of weather killer in a 30-year average. A persistent megadrought in America's West continues to create tinder-dry conditions that augur another devastating wildfire season. And scientists say warming oceans are fueling ever more powerful storms, evidenced by Elsa and the early arrival of hurricane season this year. Increasingly severe weather is causing an estimated $100 billion in damage to the United States every year. "It is honestly surreal to see your projections manifesting themselves in real time, with all the suffering that accompanies them. It is heartbreaking," said climate scientist Katharine Hayhoe. Rising seas from global warming Investigators are still trying to determine what led to the collapse of a Miami-area condominium that left more than 100 dead or missing. But one concerning factor is the corrosive effect on reinforced steel structures of encroaching saltwater, made worse in Florida by a foot of rising seas from global warming since the 1900s. The clock is ticking for planet Earth. While the U.N. report concludes some level of severe climate change is now unavoidable, there is still a window of time when far more catastrophic events can be mitigated. But mankind must act soon to curb the release of heat-trapping gases. Global temperature has risen nearly 2 degrees Fahrenheit since the pre-industrial era of the late 19th century. 
Scientists warn that in a decade, it could surpass a 2.7-degree increase. That's enough warming to cause catastrophic climate changes. After a brief decline in global greenhouse gas emissions during the pandemic, pollution is on the rise. Years that could have been devoted to addressing the crisis were wasted during a feckless period of inaction by the Trump administration. Congress must act Joe Biden won the presidency promising broad new policies to cut America's greenhouse gas emissions. But Congress needs to act on those ideas this year. Democrats cannot risk losing narrow control of one or both chambers of Congress in the 2022 elections to a Republican Party too long resistant to meaningful action on the climate. So what's at issue? A trillion dollar infrastructure bill negotiated between Biden and a group of centrist senators (including 10 Republicans) is a start. In addition to repairing bridges, roads and rails, it would improve access by the nation's power infrastructure to renewable energy sources, cap millions of abandoned oil and gas wells spewing greenhouse gases, and harden structures against climate change. It also offers tax credits for the purchase of electric vehicles and funds the construction of charging stations. (The nation's largest source of climate pollution is gas-powered vehicles.) Senate approval could come very soon. Much more is needed if the nation is going to reach Biden's necessary goal of cutting U.S. climate pollution in half from 2005 levels by 2030. His ideas worth considering include a federal clean electricity standard for utilities, federal investments and tax credits to promote renewable energy, and tens of billions of dollars in clean energy research and development, including into ways of extracting greenhouse gases from the skies. Another idea worth considering is a fully refundable carbon tax. The vehicle for these additional proposals would be a second infrastructure bill. 
And if Republicans balk at the cost of such vital investment, Biden is rightly proposing to pass this package through a process known as budget reconciliation, which allows bills to clear the Senate with a simple majority vote. These are drastic legislative steps. But drastic times call for them. And when Biden attends a U.N. climate conference in November, he can use American progress on climate change as a means of persuading others to follow our lead. Further delay is not an option.
10/10/21
SO - DA - Pharma Innovation
Tournament: Valley | Round: 1 | Opponent: Harker PG | Judge: TJ Maher 5 Strong current IP guarantees cause massive Pharma innovation. - Answers Evergreening/Me-Too Drugs Stevens and Ezell 20 Philip Stevens and Stephen Ezell 2-3-2020 "Delinkage Debunked: Why Replacing Patents With Prizes for Drug Development Won’t Work" https://itif.org/publications/2020/02/03/delinkage-debunked-why-replacing-patents-prizes-drug-development-wont-work (Philip founded Geneva Network in 2015. His main research interests are the intersection of intellectual property, trade, and health policy. Formerly he was an official at the World Intellectual Property Organization (WIPO) in Geneva, where he worked in its Global Challenges Division on a range of IP and health issues. Prior to his time with WIPO, Philip worked as director of policy for International Policy Network, a UK-based think tank, as well as holding research positions with the Adam Smith Institute and Reform, both in London. He has also worked as a political risk consultant and a management consultant. He is a regular columnist in a wide range of international newspapers and has published a number of academic studies. He holds degrees from the London School of Economics and Durham University (UK).)Elmer The Current System Has Produced a Tremendous Amount of Life-Sciences Innovation The frontier for biomedical innovation is seemingly limitless, and the challenges remain numerous—whether it comes to diseases that afflict millions, such as cancer or malaria, or the estimated 7,000 rare diseases that afflict fewer than 200,000 patients.24 And while certainly citizens in developed and developing nations confront differing health challenges, those challenges are increasingly converging. 
For instance, as of this year, analysts expect that noncommunicable diseases such as cardiovascular disease and diabetes will account for 70 percent of natural fatalities in developing countries.25 Citizens of low- and middle-income countries bear 80 percent of the world’s death burden from cardiovascular disease.26 Forty-six percent of Africans over 25 suffer from hypertension, more than anywhere else in the world. Similarly, 85 percent of the disease burden of cervical cancer is borne by individuals living in low- and middle-income countries.27 To develop treatments or cures for these conditions, novel biomedical innovation will be needed from everywhere. Yet tremendous progress has been made in recent decades. To tackle these challenges, the global pharmaceutical industry invested over $1.36 trillion in R&D in the decade from 2007 to 2016—and it’s expected that annual R&D investment by the global pharmaceutical industry will reach $181 billion by 2022.28 In no small part due to that investment, 943 new active substances have been introduced globally over the prior 25 years.29 The U.S. Food and Drug Administration (FDA) has approved more than 500 new medicines since 2000 alone. And these medicines are getting to more individuals: Global medicine use in 2020 will reach 4.5 trillion doses,
up 24 percent from 2015.30 Moreover, there are an estimated 7,000 new medicines under development globally (about half of them in the United States), with 74 percent being potentially first in class, meaning they use a new and unique mechanism of action for treating a medical condition.31 In the United States, over 85 percent of all drugs sold are generics (only 10 percent of U.S. prescriptions are filled by brand-name drugs).32 And while some assert that biotechnology companies focus too often on “me-too” drugs that compete with other treatments already on the market, the reality is many drugs currently under development are meant to tackle some of the world’s most intractable diseases, including cancer and Alzheimer’s.33 Moreover, such arguments miss that many of the drugs developed in recent years have in fact been first of their kind. For instance, in 2014, the FDA approved 41 new medicines (at that point, the most since 1996) many of which were first-in-class medicines.34 In that year, 28 of the 41 drugs approved were considered biologic or specialty agents, and 41 percent of medicines approved were intended to treat rare diseases.35 Yet even when a new drug isn’t first of its kind, it can still produce benefits for patients, both through enhanced clinical efficacy (for instance, taking the treatment as a pill rather than an injection, with a superior dosing regimen, or better treatment for some individuals who don’t respond well to the original drug) and by generating competition that exerts downward price pressures. For example, a patient needing a cholesterol drug has a host of statins from which to choose, which is important because some statins produce harmful side effects for some patients. Similarly, patients with osteoporosis can choose from Actonel, Boniva, or Fosamax. Or take for example Hepatitis C, which until recently was an incurable disease eventually requiring a liver transplant for many patients. 
In 2013, a revolutionary new treatment called Sovaldi was released that boosted cure rates to 90 percent. This was followed in 2014 by an improved treatment called Harvoni, which cures the Hepatitis C variant left untouched by Sovaldi. Since then, an astonishing six new treatments for the disease have received FDA approval, opening up a wide range of treatment options that take into account patients’ liver and kidney status, co-infections, potential drug interactions, previous treatment failures, and the genotype of HCV virus.36 “If you have to have Hepatitis C, now is the time to have it,” as Douglas Dieterich, a liver specialist at the Icahn School of Medicine at Mount Sinai Hospital in New York, told the Financial Times. “We have these marvellous drugs we can treat you with right now, without side effects,” he added. “And this time next year, we’ll have another round of drugs available.”37 Moreover, the financial potential of this new product category has led to multiple competing products entering the market in quick succession, in turn placing downward pressure on prices.38 As Geoffrey Dusheiko and Charles Gore write in The Lancet, “The market has done its work for HCV treatments: after competing antiviral regimens entered the market, competition and innovative price negotiations have driven costs down from the initially high list prices in developed countries.”39 As noted previously, opponents of the current market- and IP-based system contend patents enable their holders to exploit a (temporary) market monopoly by inflating prices many multiples beyond the marginal cost of production. But rather than a conventional neoclassical analysis, an analysis based on “innovation economics” finds it is exactly this “distortion” that is required for innovation to progress. 
As William Baumol has pointed out, “Prices above marginal costs and price discrimination become the norm rather than the exception because … without such deviations from behaviour in the perfectly competitive model, innovation outlays and other unavoidable and repeated sunk outlays cannot be recouped.”40 Or, as the U.S. Congressional Office of Technology Assessment found, “Pharmaceutical R&D is a risky investment; therefore, high financial returns are necessary to induce companies to invest in researching new chemical entities.”41 This is also why, in 2018, the U.S. Congressional Budget Office estimated that because of high failure rates, biopharmaceutical companies would need to earn a 61.8 percent rate of return on their successful new drug R&D projects in order to match a 4.8 percent after-tax rate of return on their investments.42 Indeed, it’s the ability to recoup fixed costs, not just marginal costs, through mechanisms such as patent protection that lies at the heart of all innovation-based industries and indeed all innovation and related economic progress. If companies could not find a way to pay for their R&D costs, and could only charge for the costs of producing the compound, there would be no new drugs developed, just as there would be no new products developed in any industry. Innovating in the life sciences remains expensive, risky, difficult, and uncertain. 
Just 1 in 5,000 drug candidates makes it all the way from discovery to market.43 A 2018 study by the Deloitte Center for Health Solutions, “Unlocking R&D productivity: Measuring the return from pharmaceutical innovation 2018,” found that “the average cost to develop an asset [an innovative life-sciences drug], including the cost of failure, has increased in six out of eight years,” and that the average cost to create a new drug has risen to $2.8 billion.44 Related research has found the development of new drugs requires years of painstaking, risky, and expensive research that, for a new pharmaceutical compound, takes an average of 11.5 to 15 years of research, development, and clinical trials, at a cost of $1.7 billion to $3.2 billion.45 IP rights—including patents, copyrights, and data exclusivity protections—give innovators, whether in the life sciences or other sectors, the confidence to undertake the risky and expensive process of innovation, secure in the knowledge they’ll be able to capture a share of the gains from their efforts. And these gains are often only a small fraction of the true value created. For instance, Yale University economist William Nordhaus estimated inventors capture just 4 percent of the total social gains from their innovations; the rest spill over to other companies and society as a whole.46 Without adequate IP protection, private investors would never find it viable to fund advanced research because lower-cost copiers would be in a position to undercut the legitimate prices (and profits) of innovators, even while still generating substantial profits on their own.47 As the report “Wealth, Health and International Trade in the 21st Century” concludes, “Conferring robust intellectual property rights is, in the pharmaceutical and other technological-development contexts, in the global public’s long-term interests. 
Without adequate mechanisms for directly and indirectly securing the private and public funding of medicines and vaccines, research and development communities across the world will lose future benefits that would far outweigh the development costs involved.”48 Put simply, the current market- and IP-based life-sciences innovation system is producing life-changing biomedical innovation. As Jack Scannell, a senior fellow at Oxford University’s Center for the Advancement of Sustainable Medical Innovation has explained, “I would guess that one can buy today, at rock bottom generic prices, a set of small-molecule drugs that has greater medical utility than the entire set available to anyone, anywhere, at any price in 1995.” He continued, “Nearly all the generic medicine chest was created by firms who invested in R&D to win future profits that they tried pretty hard to maximize; short-term financial gain building a long-term common good.”49 For example, on September 14, 2017, the FDA approved Mvasi, the first biosimilar for Roche’s Avastin, a breakthrough anticancer drug when it came out in the mid-2000s for lung, cervical, and colorectal cancer.50 In other words, a medicine to treat forms of cancer that barely existed 20 years ago is now available as a generic drug today. It’s this dynamic that enables us to imagine a situation wherein drugs to treat diseases that aren’t available anywhere at any price today (for instance, treatments for Alzheimer’s or Parkinson’s) might be available as generics in 20 years. But that will only be the case if we preserve (and improve where possible) a life-sciences innovation system that is generally working. The current system does not require wholesale replacement by a prize-based system that—notwithstanding a meaningful success here or there—has produced nowhere near a similar level of novel biomedical innovation. Trade Secrets are key to incentivize competitive Innovation – specifically key to protect start-ups. 
Gutfleisch 18, Georg. "Employment issues under the European Trade Secrets Directive: Promising opportunity or burden for European companies." European Company Law Journal 15 (2018): 175-181. (working as an Associate with Brandl & Talos Rechtsanwälte GmbH in Vienna, Austria, and recently studied in the LL.M. (International and European Business Law) program at Trinity College Dublin, Ireland.) Elmer The protection of trade secrets can be considered a prerequisite for the continuous growth and success of European companies as well as the general (technological) advancement and competitiveness of the European economy.7 Trade secrets can basically be described as secret information that is of value to its owner because of its secrecy. Trade secrets must be differentiated from other (registered) intellectual property rights, such as patents, designs or trademarks. They are not publicly registered and do not grant the trade secret owner an exclusive right against third parties. Most legal systems rank trade secret protection as part of unfair-competition law rather than intellectual property law.8 However, trade secrets are nevertheless related to intellectual property rights. In particular, they can be considered a preliminary step toward, or by-product of, the creation of intellectual property rights. Further, trade secrets can also be maintained as a permanent alternative to (registered) intellectual property rights. They do not involve costs for the application or subsequent prolongations with the competent authorities and do not impose risks of disclosure during such proceedings.9 Especially small- and medium-sized enterprises and start-ups in the research and engineering business often rely on the confidentiality of sensitive information as the basis of their existence.10 The importance of effective trade secret protection has been acknowledged by lawmakers globally. 
Back in 1994, the member states of the World Trade Organisation (WTO) entered into the international Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement),11 which mandates that WTO member states ensure the protection of undisclosed information against acquisition, use or disclosure without consent in a manner contrary to honest commercial practices. In addition, the Paris Convention on the protection of industrial property of 20 March 1883 (CUP Agreement)12 provides another international legal framework, which some scholars argue affords protection to trade secrets.13 However, the rather vague minimum requirements of the TRIPS Agreement and the CUP Agreement resulted in significant differences in the national levels of trade secret protection, especially within the member states of the European Union (EU).14 The European Commission acknowledged this situation and started to actively engage with the issue of trade secret protection in the EU. In November 2013, the European Commission introduced its proposal for the TSD (together with an impact assessment and implementation plan).15 The TSD was then enacted in June 2016 after further input from the European Economic and Social Committee16 and the European Parliament Committee on Legal Affairs.17 The TSD rests on two main rationales.18 On the one hand, it has been argued that the different levels of protection in Europe caused companies to refrain from exchanging confidential information across borders and hindered the proper development of research and innovation. On the other hand, European companies regularly faced competitive disadvantages when their trade secrets were misappropriated.
Pharma innovation solves Pandemics, ABR, and Bioterrorism – only Private Firms have the ability for preparedness and reaction. Marjanovic and Feijao 20 Sonja Marjanovic and Carolina Feijao May 2020 "Pharmaceutical Innovation for Infectious Disease Management" https://www.rand.org/content/dam/rand/pubs/perspectives/PEA400/PEA407-1/RAND_PEA407-1.pdf (directs RAND Europe's portfolio of research in the field of healthcare innovation, industry and policy) Re-cut by Elmer As key actors in the healthcare innovation landscape, pharmaceutical and life sciences companies have been called on to develop medicines, vaccines and diagnostics for pressing public health challenges. The COVID-19 crisis is one such challenge, but there are many others. For example, MERS, SARS, Ebola, Zika and avian and swine flu are also infectious diseases that represent public health threats. Infectious agents such as anthrax, smallpox and tularemia could present threats in a bioterrorism context.1 The general threat to public health that is posed by antimicrobial resistance is also well-recognised as an area in need of pharmaceutical innovation. Innovating in response to these challenges does not always align well with pharmaceutical industry commercial models, shareholder expectations and competition within the industry. However, the expertise, networks and infrastructure that industry has within its reach, as well as public expectations and the moral imperative, make pharmaceutical companies and the wider life sciences sector an indispensable partner in the search for solutions that save lives. This perspective argues for the need to establish more sustainable and scalable ways of incentivising pharmaceutical innovation in response to infectious disease threats to public health. It considers both past and current examples of efforts to mobilise pharmaceutical innovation in high commercial risk areas, including in the context of current efforts to respond to the COVID-19 pandemic. 
In global pandemic crises like COVID-19, the urgency and scale of the crisis – as well as the spotlight placed on pharmaceutical companies – mean that contributing to the search for effective medicines, vaccines or diagnostics is essential for socially responsible companies in the sector.2 It is therefore unsurprising that we are seeing industry-wide efforts unfold at unprecedented scale and pace. Whereas there is always scope for more activity, industry is currently contributing in a variety of ways. Examples include pharmaceutical companies donating existing compounds to assess their utility in the fight against COVID-19; screening existing compound libraries in-house or with partners to see if they can be repurposed; accelerating trials for potentially effective medicine or vaccine candidates; and in some cases rapidly accelerating in-house research and development to discover new treatments or vaccine agents and develop diagnostics tests.3,4 Pharmaceutical companies are collaborating with each other in some of these efforts and participating in global R&D partnerships (such as the Innovative Medicines Initiative effort to accelerate the development of potential therapies for COVID-19) and supporting national efforts to expand diagnosis and testing capacity and ensure affordable and ready access to potential solutions.3,5,6 The primary purpose of such innovation is to benefit patients and wider population health. Although there are also reputational benefits from involvement that can be realised across the industry, there are likely to be relatively few companies that are ‘commercial’ winners. Those who might gain substantial revenues will be under pressure not to be seen as profiting from the pandemic. 
In the United Kingdom for example, GSK has stated that it does not expect to profit from its COVID-19 related activities and that any gains will be invested in supporting research and long-term pandemic preparedness, as well as in developing products that would be affordable in the world’s poorest countries.7 Similarly, in the United States AbbVie has waived intellectual property rights for an existing combination product that is being tested for therapeutic potential against COVID-19, which would support affordability and allow for a supply of generics.8,9 Johnson & Johnson has stated that its potential vaccine – which is expected to begin trials – will be available on a not-for-profit basis during the pandemic.10 Pharma is mobilising substantial efforts to rise to the COVID-19 challenge at hand. However, we need to consider how pharmaceutical innovation for responding to emerging infectious diseases can best be enabled beyond the current crisis. Many public health threats (including those associated with other infectious diseases, bioterrorism agents and antimicrobial resistance) are urgently in need of pharmaceutical innovation, even if their impacts are not as visible to society as COVID-19 is in the immediate term. The pharmaceutical industry has responded to previous public health emergencies associated with infectious disease in recent times – for example those associated with Ebola and Zika outbreaks.11 However, it has done so to a lesser scale than for COVID-19 and with contributions from fewer companies. Similarly, levels of activity in response to the threat of antimicrobial resistance are still low.12 There are important policy questions as to whether – and how – industry could engage with such public health threats to an even greater extent under improved innovation conditions.
9/25/21
SO - NC - Hobbes
Tournament: Valley | Round: 6 | Opponent: Walt Whitman EY | Judge: Breigh Plat 3 Framework Conceding TJFs first – that was in the 1AC FW so don’t let them shift – prefer additionally – a Frameworks are essentially T debates about the word ought, which proves the better model of debate is what matters. b Turns substance – it doesn’t matter how true a philosophy is if it can’t be engaged or is impossible to learn from c Exclusionary rule – we can’t engage, which means all their substantive arguments should be presumed false The standard is consistency with absolute sovereignty. 1 Predictability – every individual engages within the social contract when going to school or using public infrastructure, which means it’s the one political engagement everyone is aware of. 2 Political Education – politicians have to understand the social contract in order to know what powers they have and what they have to provide citizens, and debating about Hobbes helps us learn about that. 3 Topic Ed – the Hobbesian approach is ideal for dealing with IP in the context of public health disaster. Ashcroft 05 Richard E. Ashcroft (MA, PhD; Reader in Biomedical Ethics in the Department of Primary Health Care and General Practice at Imperial College London). “Access to essential medicines: a Hobbesian social contract approach”. Dev World Bioeth. 2005 May;5(2):121-41. Accessed 7/31/2021. https://pubmed.ncbi.nlm.nih.gov/15842722/ Xu The problems I have described in these concluding remarks are serious and difficult. I do not think they are decisive. None of these problems demonstrate either the falsity or incoherence of a Hobbesian approach. Rather, they show that a Hobbesian approach needs further detailed development. 
I think that the merits of the Hobbesian approach are plain, so far as it takes serious notice of the features of the state of war, the instrumental nature of states and their legal and civil institutions, and the overarching objective of states to preserve their citizens from misery and disaster. More obviously ‘moral’ theories (such as utilitarian theory, or natural rights theories such as Lockean theory or modern human rights theories) are less illuminating, in that they fail to construct compelling perfect obligations lying with specific agents. The Hobbesian account I have constructed here has many loose ends, but I hope I have shown in this paper how a powerful argument for a perfect duty lying on the state to protect its citizens from public health disaster can be constructed, and the foundations of legitimate sovereign enforcement of powers of compulsory license over intellectual property. Public health takes priority over private economic interest. The only question is whether private economic interest is the only, or indeed an, effective means for promoting the public health in conditions of disaster. 4 Resource Disparities – philosophical frameworks ensure big squads don’t have a comparative advantage since debates become about quality of arguments rather than quantity and require a higher level of analytic thinking that small schools have. 5 Resolvability – other debates create a mess of weighing and link turns, but using Hobbes is easily resolvable because it becomes a question of what the sovereign believes. Offense 1 Sequencing – a sovereign can’t be obligated to do anything because they are the ones who choose what ethics and truth are – the rez tries to coerce the sovereign to do something, which challenges its authority. 2 IP rights are implicit in the creation of the sovereign in expressing creativity. 
Ghosh 04 Shubha Ghosh (B.A., Amherst College; Ph.D., University of Michigan; J.D., Stanford Law School; Professor of Law, University at Buffalo, SUNY, Law School; Visiting Professor, SMU Dedman School of Law). “PATENTS AND THE REGULATORY STATE: RETHINKING THE PATENT BARGAIN METAPHOR AFTER ELDRED”. BERKELEY TECHNOLOGY LAW JOURNAL. 2004. Accessed 9/3/21. https://lawcat.berkeley.edu/record/1119327/files/fulltext.pdf Xu As illustration of the limits of social contract theory,46 particularly the malleability of the notions of consent and promise, consider a social contract theory of intellectual property based on the thoughts of Thomas Hobbes rather than that of John Locke. No scholar has expressly developed a Hobbesian theory of patent or of copyright, but as a challenge to social contract theory, it may be useful to imagine what such a theory would look like.47 For Hobbes, humans created the leviathan-the sovereign state-to protect themselves from each other in the state of nature. 48 Without the leviathan, the state of nature was not an idyllic paradise but a condition of savagery and brutality. In the state of nature, to the extent that any creative activity occurred, the objects of creation would be cannibalized, thoughtlessly copied, adapted, distributed, and performed or used, sold, offered to sell, and made by others. Thus, intellectual property law under the leviathan would protect individuals from this state of nature by making them absolute, immutable, bountiful, and unlimited. Humans would consent to these terms if they were enforced equally for all creations, and each author and inventor would promise to all others to abide by this form of the intellectual property social contract.
9/26/21
SO - NC - Kant
Tournament: Loyola | Round: 6 | Opponent: Harvard Westlake EJ | Judge: Neville Tom 4 Framework The meta-ethic is procedural moral realism. This entails that moral facts stem from procedures, while substantive realism holds that moral truths exist independently of any procedure, out in the empirical world. Prefer procedural realism – 1 Collapses – the only way to verify whether something is a moral fact is by using procedures to warrant it. Practical Reason is that procedure. Evaluate the debate after the 1nc for reciprocity because we both get one speech Moral law must be universal—our judgements can’t only apply to ourselves any more than 2+2=4 can be true only for me – Reject “extinction outweighs” – aggregation is nonsensical since a it impedes on one person’s ends for another and b assumes everyone values the same thing. No aff analytics – they are unpredictable because any arg can be made Thus, the standard is consistency with the categorical imperative. Prefer – 1 TJFs and they outweigh since it precludes engagement on the framework layer – prefer for Resource disparities- Our framework ensures big squads don’t have a comparative advantage since debates become about quality of arguments rather than quantity - their model crowds out small schools because they have to prep for every unique advantage under each aff, every counterplan, and every disad with carded responses to each of them Offense Reducing IP is a form of free-riding that fails the universality test, but also uses the creators of the medicine as means to an end. Dyke 18 Dyke, Raymond. “The Categorical Imperative for Innovation and Patenting - IPWatchdog.com: Patents & Patent Law.” IPWatchdog.com | Patents & Patent Law, 1 Oct. 
2018, www.ipwatchdog.com/2018/07/17/categorical-imperative-innovation-patenting/id=99178/.dhsNJ As we shall see, applying Kantian logic entails first acknowledging some basic principles; that the people have a right to express themselves, that that expression (the fruits of their labor) has value and is theirs (unless consent is given otherwise), and that government is obligated to protect people and their property. Thus, an inventor or creator has a right in their own creation, which cannot be taken from them without their consent. So, employing this canon, a proposed Categorical Imperative (CI) is the following Statement: creators should be protected against the unlawful taking of their creation by others. Applying this Statement to everyone, i.e., does the Statement hold water if everyone does this, leads to a yes determination. Whether a child, a book or a prototype, creations of all sorts should be protected, and this CI stands. This result also dovetails with the purpose of government: to protect the people and their possessions by providing laws to that effect, whether for the protection of tangible or intangible things. However, a contrary proposal can be postulated: everyone should be able to use the creations of another without charge. Can this Statement rise to the level of a CI? This proposal, upon analysis would also lead to chaos. Hollywood, for example, unable to protect their films, television shows or any content, would either be out of business or have robust encryption and other trade secret protections, which would seriously undermine content distribution and consumer enjoyment. Likewise, inventors, unable to license or sell their innovations or make any money to cover R&D, would not bother to invent or also resort to strong trade secret. Why even create? This approach thus undermines and greatly hinders the distribution of ideas in a free society, which is contrary to the paradigm of the U.S. 
patent and copyright systems, which promotes dissemination. By allowing free-riding, innovation and creativity would be thwarted (or at least not encouraged) and trade secret protection would become the mainstay for society with the heightened distrust. Interpretation: affirmative teams must not read new offense in the 1ar related to a new fw, weigh aff arguments under our fw, recontextualize aff arguments under a different fw, or turn the 1nc fw if presumption affirms. To clarify – extinction outweighs violates 1 Phil Clash and Time Skew- anything else allows them to concede all our framework interactions and just go for 4 minutes of turns against our NC, which o/w since phil is the only thing unique to LD Debate and time is the only quantifiable metric of abuse 2 Skew- They have an inherent advantage on the contention debate since they get 2ar spin, so they can easily sway judge psychology in contention debates that don’t err towards one side. 3 Planks Solve- because if the topic doesn’t actually negate you can put defense on the contention level.
9/23/21
SO - NC - Kant v2
Tournament: Valley | Round: 1 | Opponent: Harker PG | Judge: TJ Maher 4 Framework The meta-ethic is procedural moral realism. This entails that moral facts stem from procedures, while substantive realism holds that moral truths exist independently of any procedure, out in the empirical world. Prefer procedural realism – 1 Collapses – the only way to verify whether something is a moral fact is by using procedures to warrant it. 2 Uncertainty – our experiences are inaccessible to others, which allows people to say they don’t experience the same, however a priori principles are universally applied to all agents. 3 Is/Ought Gap – we can only perceive what is, not what ought to be. It’s impossible to derive an ought statement from descriptive facts about the world, necessitating a priori premises. Practical Reason is that procedure. To ask why we should be reasoners concedes its authority, since the asking itself uses reason – anything else is nonbinding and arbitrary. That hijacks their framework since you need reason to evaluate any relevant consequences. Moral law must be universal—our judgements can’t only apply to ourselves any more than 2+2=4 can be true only for me – any non-universalizable norm justifies someone’s ability to impede on your ends. Thus, the standard is consistency with the categorical imperative. Prefer – 1 Performativity—freedom is the key to the process of justification of arguments. Willing that we should abide by their ethical theory presupposes that we own ourselves in the first place. 2 All other frameworks collapse—non-Kantian theories source obligations in extrinsically good objects, but that presupposes the goodness of the rational will. 
3 TJFs and they outweigh since it precludes engagement on the framework layer – prefer for Resource disparities- Our framework ensures big squads don’t have a comparative advantage since debates become about quality of arguments rather than quantity - their model crowds out small schools because they have to prep for every unique advantage under each aff, every counterplan, and every disad with carded responses to each of them Offense Reducing IP is a form of free-riding that fails the universality test, but also uses the creators of the medicine as means to an end. Dyke 18 Dyke, Raymond. “The Categorical Imperative for Innovation and Patenting - IPWatchdog.com: Patents & Patent Law.” IPWatchdog.com | Patents & Patent Law, 1 Oct. 2018, www.ipwatchdog.com/2018/07/17/categorical-imperative-innovation-patenting/id=99178/.dhsNJ As we shall see, applying Kantian logic entails first acknowledging some basic principles; that the people have a right to express themselves, that that expression (the fruits of their labor) has value and is theirs (unless consent is given otherwise), and that government is obligated to protect people and their property. Thus, an inventor or creator has a right in their own creation, which cannot be taken from them without their consent. So, employing this canon, a proposed Categorical Imperative (CI) is the following Statement: creators should be protected against the unlawful taking of their creation by others. Applying this Statement to everyone, i.e., does the Statement hold water if everyone does this, leads to a yes determination. Whether a child, a book or a prototype, creations of all sorts should be protected, and this CI stands. This result also dovetails with the purpose of government: to protect the people and their possessions by providing laws to that effect, whether for the protection of tangible or intangible things. 
However, a contrary proposal can be postulated: everyone should be able to use the creations of another without charge. Can this Statement rise to the level of a CI? This proposal, upon analysis would also lead to chaos. Hollywood, for example, unable to protect their films, television shows or any content, would either be out of business or have robust encryption and other trade secret protections, which would seriously undermine content distribution and consumer enjoyment. Likewise, inventors, unable to license or sell their innovations or make any money to cover R&D, would not bother to invent or also resort to strong trade secret. Why even create? This approach thus undermines and greatly hinders the distribution of ideas in a free society, which is contrary to the paradigm of the U.S. patent and copyright systems, which promotes dissemination. By allowing freeriding, innovation and creativity would be thwarted (or at least not encouraged) and trade secret protection would become the mainstay for society with the heightened distrust.
9/25/21
SO - NC - Kant v3
Tournament: Voices | Round: Doubles | Opponent: Archbishop Mitty AS | Judge: Quentin Clark, Gordon Krauss, Samantha McLoughlin 5 Framework The meta-ethic is procedural moral realism. This entails that moral facts stem from procedures, while substantive realism holds that moral truths exist independently of any procedure, out in the empirical world. Prefer procedural realism – 1 Collapses – the only way to verify whether something is a moral fact is by using procedures to warrant it. 2 Uncertainty – our experiences are inaccessible to others, which allows people to say they don’t experience the same, however a priori principles are universally applied to all agents. 3 Is/Ought Gap – we can only perceive what is, not what ought to be. It’s impossible to derive an ought statement from descriptive facts about the world, necessitating a priori premises. Regress – I can keep asking “why should I follow this”, which results in skep since obligations are predicated on ignorantly accepting rules. Only reason solves, since asking “why reason?” requires reason, which is self-justified. That means we must universally will maxims—any non-universalizable norm justifies someone’s ability to impede on your ends. Thus, the standard is consistency with the categorical imperative. Prefer – 1 Performativity—freedom is the key to the process of justification of arguments. Willing that we should abide by their ethical theory presupposes that we own ourselves in the first place. 2 All other frameworks collapse—non-Kantian theories source obligations in extrinsically good objects, but that presupposes the goodness of the rational will. 3 Necessity—my framework is inherent to the way we set ends. Ethics must be necessary and not contingent since otherwise its claims could be escapable. 
Necessary truths outweigh on probability—if a necessary truth is possible that means it’s true in a possible world, but that implies it’s true in all worlds since that’s what necessity is, so they have to prove there’s 0 risk of my framework.
Offense Reducing IP is a form of free-riding that fails the universality test, but also uses the creators of the medicine as means to an end. Dyke 18 Dyke, Raymond. “The Categorical Imperative for Innovation and Patenting - IPWatchdog.com: Patents & Patent Law.” IPWatchdog.com | Patents & Patent Law, 1 Oct. 2018, www.ipwatchdog.com/2018/07/17/categorical-imperative-innovation-patenting/id=99178/.dhsNJ As we shall see, applying Kantian logic entails first acknowledging some basic principles; that the people have a right to express themselves, that that expression (the fruits of their labor) has value and is theirs (unless consent is given otherwise), and that government is obligated to protect people and their property. Thus, an inventor or creator has a right in their own creation, which cannot be taken from them without their consent. So, employing this canon, a proposed Categorical Imperative (CI) is the following Statement: creators should be protected against the unlawful taking of their creation by others. Applying this Statement to everyone, i.e., does the Statement hold water if everyone does this, leads to a yes determination. Whether a child, a book or a prototype, creations of all sorts should be protected, and this CI stands. This result also dovetails with the purpose of government: to protect the people and their possessions by providing laws to that effect, whether for the protection of tangible or intangible things. However, a contrary proposal can be postulated: everyone should be able to use the creations of another without charge. Can this Statement rise to the level of a CI? This proposal, upon analysis would also lead to chaos. Hollywood, for example, unable to protect their films, television shows or any content, would either be out of business or have robust encryption and other trade secret protections, which would seriously undermine content distribution and consumer enjoyment. 
Likewise, inventors, unable to license or sell their innovations or make any money to cover R&D, would not bother to invent or also resort to strong trade secret. Why even create? This approach thus undermines and greatly hinders the distribution of ideas in a free society, which is contrary to the paradigm of the U.S. patent and copyright systems, which promotes dissemination. By allowing freeriding, innovation and creativity would be thwarted (or at least not encouraged) and trade secret protection would become the mainstay for society with the heightened distrust.
10/10/21
SO - NC - Nibble
Tournament: Loyola | Round: 6 | Opponent: Harvard Westlake EJ | Judge: Neville Tom 3 Permissibility and presumption negate 1 Obligations- the resolution indicates the affirmative has to prove an obligation, and permissibility would deny the existence of an obligation 2 Falsity- Statements are more often false than true because proving one part of the statement false disproves the entire statement. Presuming all statements are true creates contradictions, which would be ethically bankrupt. 3 Negation Theory “to negate” means “to deny the truth of,” which means any argument that renders the resolution false is sufficient to negate. 4 Trichotomy Triple there is a trichotomy between obligation, prohibition and permissibility; proving one disproves the other two because they are three intertwined moral terms which coexist within each other. Outweighs because it interacts with each term or moral obligation. 5 Status Quo Bias – you should default to a world where you don’t make change because making change assumes that world will be better than the current world Negate -- 1 The holographic principle is the most reasonable conclusion Stromberg 15 Joseph Stromberg – “Some physicists believe we're living in a giant hologram — and it's not that far-fetched” https://www.vox.com/2015/6/29/8847863/holographic-principle-universe-theory-physics Vox. June 29th 2015 War Room Debate AI Some physicists actually believe that the universe we live in might be a hologram. The idea isn't that the universe is some sort of fake simulation out of The Matrix, but rather that even though we appear to live in a three-dimensional universe, it might only have two dimensions. It's called the holographic principle. The thinking goes like this: Some distant two-dimensional surface contains all the data needed to fully describe our world — and much like in a hologram, this data is projected to appear in three dimensions. 
Like the characters on a TV screen, we live on a flat surface that happens to look like it has depth. It might sound absurd. But when physicists assume it's true in their calculations, all sorts of big physics problems — such as the nature of black holes and the reconciling of gravity and quantum mechanics — become much simpler to solve. In short, the laws of physics seem to make more sense when written in two dimensions than in three. "It's not considered some wild speculation among most theoretical physicists," says Leonard Susskind, the Stanford physicist who first formally defined the idea decades ago. "It's become a working, everyday tool to solve problems in physics." But there's an important distinction to be made here. There's no direct evidence that our universe actually is a two-dimensional hologram. These calculations aren't the same as a mathematical proof. Rather, they're intriguing suggestions that our universe could be a hologram. And as of yet, not all physicists believe we have a good way of testing the idea experimentally. 2 member is “a part or organ of the body, especially a limb” but an organ can’t have obligations 3 of is “expressing an age” but the rez doesn’t delineate a length of time 4 the is “denoting a disease or affliction” but the WTO isn’t a disease 5 to is “expressing motion in the direction of (a particular location)” but the rez doesn’t have a location 6 reduce is to “(of a person) lose weight, typically by dieting” but IP doesn’t have a body to lose weight. 7 for is “in place of” but medicines aren’t replacing IP. 8 medicine is “(especially among some North American Indian peoples) a spell, charm, or fetish believed to have healing, protective, or other power” but you can’t have IP for a spell.
9/23/21
SO - NC - Nibble v2
Tournament: Valley | Round: 4 | Opponent: Mountain House ES | Judge: Rohit Lakshman 5 The role of the ballot is to determine whether the resolution is a true or false statement – anything else moots 7 minutes of the nc – their framing collapses since you must say it is true that a world is better than another before you adopt it. They justify substantive skews since there will always be a more correct side of the issue, but we compensate for flaws in the lit. Scalar methods like comparison increase intervention – the persuasion of certain DAs or advantages sways decisions – the T/F binary is descriptive and technical. Negate because either the aff is true, meaning it's bad for us to clash w/ it because it turns us into Fake News people, OR it’s not, meaning it’s a lie that you can’t vote on for ethics. A priori's: 1st – even worlds framing requires ethics that begin from a priori principles like reason or pleasure, so we control the internal link to functional debates. The ballot says vote aff or neg based on a topic – five dictionaries define “to negate” as “to deny the truth of” and “affirm” as “to prove true,” so it's constitutive and jurisdictional. I denied the truth of the resolution by disagreeing with the aff, which means I've met my burden. 1 Merriam Webster defines ‘member’ as: PENIS 2 Merriam Webster defines ‘trade’ as: having a larger softcover format than that of a mass-market paperback and usually sold only in bookstores 3 Merriam Webster defines ‘World’ as: a distinctive class of persons or their sphere of interest or activity 4 Merriam Webster defines ‘reduce’ as: to decrease the volume and concentrate the flavor of by boiling 5 Dictionary.com defines ‘intellectual’ as: a person of superior intellect. 
6 Dictionary.com defines ‘property’ as: an essential or distinctive attribute or quality of a thing 7 Merriam Webster defines ‘protections’ as: anchoring equipment placed in cracks for safety while rock climbing 8 Dictionary.com defines ‘medicine’ as: any object or practice regarded as having magical powers. 9 Context doesn’t matter in the context of the res – we can’t know what topic writers wanted us to debate about.
9/26/21
SO - T - Effects
Tournament: Voices | Round: 4 | Opponent: Immaculate Heart RR | Judge: Quentin Clark 1 Interpretation – “medicines” are substances that prevent, diagnose, or treat harms MRS 20 (MAINE REVENUE SERVICE SALES, FUEL and SPECIAL TAX DIVISION) “A REFERENCE GUIDE TO THE SALES AND USE TAX LAW” https://www.maine.gov/revenue/sites/maine.gov.revenue/files/inline-files/Reference%20Guide%202020.pdf December 2020 SS Medicines means antibiotics, analgesics, antipyretics, stimulants, sedatives, antitoxins, anesthetics, antipruritics, hormones, antihistamines, certain “dermal fillers” (such as BoTox®), injectable contrast agents, vitamins, oxygen, vaccines and other substances that are used in the prevention, diagnosis or treatment of disease or injury and that either (1) require a prescription in order to be purchased or administered to the retail consumer or patient; or (2) are sold in packaging. Violation – CRISPR is a gene-editing tool, NOT a medicine – it’s also used in a variety of non-medical fields. NewScientist 20 "What is CRISPR" https://www.newscientist.com/definition/what-is-crispr/ Elmer CRISPR is a technology that can be used to edit genes and, as such, will likely change the world. The essence of CRISPR is simple: it’s a way of finding a specific bit of DNA inside a cell. After that, the next step in CRISPR gene editing is usually to alter that piece of DNA. However, CRISPR has also been adapted to do other things too, such as turning genes on or off without altering their sequence. There were ways to edit the genomes of some plants and animals before the CRISPR method was unveiled in 2012 but it took years and cost hundreds of thousands of dollars. CRISPR has made it cheap and easy. CRISPR is already widely used for scientific research, and in the not too distant future many of the plants and animals in our farms, gardens or homes may have been altered with CRISPR. In fact, some people already are eating CRISPRed food. 
It's used in drug discovery but isn’t a drug – makes the Aff effects-Topical. Enzmann and Wronski 19 Brittany Enzmann and Ania Wronski 1-11-2019 "How CRISPR Is Accelerating Drug Discovery" https://www.genengnews.com/insights/how-crispr-is-accelerating-drug-discovery/ (scientific communications manager at Synthego) Elmer Subsequent cellular repair facilitates knockouts, knockins, or the exchange of nucleotides. Because these types of modifications are made endogenously, scientists can study the subsequent changes to mRNA and protein at native, physiologically relevant levels. Variations of CRISPR can be used for other modifications, including the activation and inhibition of gene expression. Due to its increased ease and versatility, CRISPR shows promise in overcoming many of the technical challenges of drug discovery. Here, we summarize some of the ways in which CRISPR is advancing the stages of preclinical drug development. Drug discovery workflow The drug discovery process often starts with basic scientific research and involves many steps before new therapeutics are approved for clinical use. While each pharmaceutical company approaches the discovery and development of new drugs differently, the major steps common to most preclinical processes are target identification and validation, high-throughput compound screening, hit validation, and lead drug candidate optimization (Figure 2). All of these steps, and the ways CRISPR is accelerating progress through them, are discussed below. Here's the burden for the Violation – Medicines must be substances that are used to treat diseases. CRISPR is a technology to find or create those substances BUT isn’t used to treat diseases itself, which means it’s not Topical. Answering their Pre-empts: AT Vidyasagar – This says it’s DNA – a that’s not a substance and b it cuts DNA, it’s not a medicine itself – which is our Effects-T offense. 
AT Sfera – 1 This proves our Extra-T offense – simply being able to be used as a drug doesn’t mean it’s a medicine – this identifies a singular CRISPR tool, CTX001, as a drug, but CRISPR itself isn’t one and 2 Creating medicines is distinct from being a medicine. 3 The Standard is Limits – They explode the topic to include therapies, research areas, treatments, drug discovery techniques, etc. that eviscerate a stable locus of predictability. Limits is a sequencing question to Clash and in-depth Education since we’re only able to prepare if there are stable core controversies. Independently, massive caselists make debate inaccessible to small school debaters, whose lack of resources makes writing individualized disads impossible. 4 TVA Solves – reduce IP protections on gene-based medicines. 5 Paradigm Issues – A Topicality is Drop the Debater – it’s a fundamental baseline for debate-ability. B Use Competing Interps – 1 Topicality is a yes/no question, you can’t be reasonably topical and 2 Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation. C No RVI’s - 1 Forces the 1NC to go all-in on Theory which kills substance education, 2 Encourages Baiting since the 1AC will purposely be abusive, and 3 Illogical – you shouldn’t win for not being abusive.
10/9/21
SO - T - Effects v2
Tournament: Voices | Round: Doubles | Opponent: Archbishop Mitty AS | Judge: Quentin Clark, Gordon Krauss, Samantha McLoughlin 1 Interpretation – Affs must defend a reduction in intellectual property protections that protect the medicines. Medicines are physical substances. American Heritage Dictionary of Medicine 18 The American Heritage Dictionary of Medicine 2018 by Houghton Mifflin Harcourt Publishing Company https://www.yourdictionary.com/medicine Elmer "A substance, especially a drug, used to treat the signs and symptoms of a disease, condition, or injury." For means “intended to” – the object of the IP Protection must be Medicines. Cambridge Dictionary No Date "For" https://dictionary.cambridge.org/us/dictionary/english/for Elmer intended to be given to: 2 Violation - Data exclusivity protects clinical trial data, NOT MEDICINE. The plan doesn’t affect the actual production of Medical Substances, just the structural factors that influence it. Thrasher 5-25 Rachel Thrasher 5-25-2021 "Chart of the Week: How Data Exclusivity Laws Impact Drug Prices" https://www.bu.edu/gdp/2021/05/25/chart-of-the-week-how-data-exclusivity-laws-impact-drug-prices/ sid Data exclusivity is a form of intellectual property protection that applies specifically to data from pharmaceutical clinical trials. While innovator firms run their own clinical trials to gain marketing approval, generic manufacturers typically rely on the innovator’s clinical trials for the same approval. Data exclusivity rules keep generic firms from relying on that data for 5 to 12 years, depending on the specific law. Data exclusivity operates independently of patent protection and can block generic manufacturers from gaining marketing approval even if the patent has expired or the original pharmaceutical product does not qualify for patent protection. 
Although data exclusivity laws are matters of domestic legislation, the United States, the EU and others increasingly demand in their free trade agreement (FTA) negotiations that their trading partners protect clinical trial data in this way. Data exclusivity is just one of a host of “TRIPS-plus” treaty provisions designed to raise the overall level of intellectual property protection for innovator firms. Although the WTO’s Agreement on Trade-Related Intellectual Property Rights (TRIPS) does require Member states to protect clinical trial and other data from “unfair commercial use,” it does not require exclusivity rules that block the registration of generic products. The Aff is both Effects and Extra-T because they affect things unrelated to Medical IP like Data – both of which are voters for Limits and Ground. 3 The Standard is Limits – allowing Affs that relate to the factors and structures surrounding Medicines allows treatments, drug discovery techniques, computer programs, and production techniques that all have IP protections to be topical, which eviscerates a stable locus of predictability. 
Fairness and education are voters – it's how judges evaluate rounds and why schools fund debate. Neg theory is DTD - 1ARs control the direction of the debate because it determines what the 2NR has to go for – DTD allows us some leeway in the round by having some control in the direction. Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm. No RVIs – A – Going all in on theory kills substance education which outweighs on timeframe B - Discourages checking real abuse which outweighs on norm-setting C – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly D – it's illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments E - Kills norm setting since debaters can never admit they’re wrong – outweighs since norm setting is the constitutive purpose of theory F – They are the logic of criminalization that over-punishes people of color for trying to create productive discourse
10/10/21
SO - T - FW
Tournament: Jack Howe | Round: 5 | Opponent: Modernbrain AY | Judge: Amanda Ciocca 1 Interp: The affirmative may only garner offense from the hypothetical implementation of The member nations of the World Trade Organization ought to reduce intellectual property protections for medicines. Resolved means a legislative policy Words and Phrases 64 Words and Phrases Permanent Edition. “Resolved”. 1964. ED Definition of the word “resolve,” given by Webster is “to express an opinion or determination by resolution or vote; as ‘it was resolved by the legislature;” It is of similar force to the word “enact,” which is defined by Bouvier as meaning “to establish by law”. We’ve inserted a list of the 164 members of the WTO WTO ND. Members and Observers. https://www.wto.org/english/thewto_e/whatis_e/tif_e/org6_e.htm Afghanistan — 29 July 2016 Albania — 8 September 2000 Angola — 23 November 1996 Antigua and Barbuda — 1 January 1995 Argentina — 1 January 1995 Armenia — 5 February 2003 Australia — 1 January 1995 Austria — 1 January 1995 B Bahrain, Kingdom of — 1 January 1995 Bangladesh — 1 January 1995 Barbados — 1 January 1995 Belgium — 1 January 1995 Belize — 1 January 1995 Benin — 22 February 1996 Bolivia, Plurinational State of — 12 September 1995 Botswana — 31 May 1995 Brazil — 1 January 1995 Brunei Darussalam — 1 January 1995 Bulgaria — 1 December 1996 Burkina Faso — 3 June 1995 Burundi — 23 July 1995 C Cabo Verde — 23 July 2008 Cambodia — 13 October 2004 Cameroon — 13 December 1995 Canada — 1 January 1995 Central African Republic — 31 May 1995 Chad — 19 October 1996 Chile — 1 January 1995 China — 11 December 2001 Colombia — 30 April 1995 Congo — 27 March 1997 Costa Rica — 1 January 1995 Côte d’Ivoire — 1 January 1995 Croatia — 30 November 2000 Cuba — 20 April 1995 Cyprus — 30 July 1995 Czech Republic — 1 January 1995 D Democratic Republic of the Congo — 1 January 1997 Denmark — 1 January 1995 Djibouti — 31 May 1995 Dominica — 1 January 1995 Dominican Republic — 9 March 1995 
E Ecuador — 21 January 1996 Egypt — 30 June 1995 El Salvador — 7 May 1995 Estonia — 13 November 1999 Eswatini — 1 January 1995 European Union (formerly EC) — 1 January 1995 F Fiji — 14 January 1996 Finland — 1 January 1995 France — 1 January 1995 G Gabon — 1 January 1995 Gambia — 23 October 1996 Georgia — 14 June 2000 Germany — 1 January 1995 Ghana — 1 January 1995 Greece — 1 January 1995 Grenada — 22 February 1996 Guatemala — 21 July 1995 Guinea — 25 October 1995 Guinea-Bissau — 31 May 1995 Guyana — 1 January 1995 H Haiti — 30 January 1996 Honduras — 1 January 1995 Hong Kong, China — 1 January 1995 Hungary — 1 January 1995 I Iceland — 1 January 1995 India — 1 January 1995 Indonesia — 1 January 1995 Ireland — 1 January 1995 Israel — 21 April 1995 Italy — 1 January 1995 J Jamaica — 9 March 1995 Japan — 1 January 1995 Jordan — 11 April 2000 K Kazakhstan — 30 November 2015 Kenya — 1 January 1995 Korea, Republic of — 1 January 1995 Kuwait, the State of — 1 January 1995 Kyrgyz Republic — 20 December 1998 L Lao People’s Democratic Republic — 2 February 2013 Latvia — 10 February 1999 Lesotho — 31 May 1995 Liberia — 14 July 2016 Liechtenstein — 1 September 1995 Lithuania — 31 May 2001 Luxembourg — 1 January 1995 M Macao, China — 1 January 1995 Madagascar — 17 November 1995 Malawi — 31 May 1995 Malaysia — 1 January 1995 Maldives — 31 May 1995 Mali — 31 May 1995 Malta — 1 January 1995 Mauritania — 31 May 1995 Mauritius — 1 January 1995 Mexico — 1 January 1995 Moldova, Republic of — 26 July 2001 Mongolia — 29 January 1997 Montenegro — 29 April 2012 Morocco — 1 January 1995 Mozambique — 26 August 1995 Myanmar — 1 January 1995 N Namibia — 1 January 1995 Nepal — 23 April 2004 Netherlands — 1 January 1995 New Zealand — 1 January 1995 Nicaragua — 3 September 1995 Niger — 13 December 1996 Nigeria — 1 January 1995 North Macedonia — 4 April 2003 Norway — 1 January 1995 O Oman — 9 November 2000 P Pakistan — 1 January 1995 Panama — 6 September 1997 Papua New Guinea — 9 June 1996 
Paraguay — 1 January 1995 Peru — 1 January 1995 Philippines — 1 January 1995 Poland — 1 July 1995 Portugal — 1 January 1995 Q Qatar — 13 January 1996 R Romania — 1 January 1995 Russian Federation — 22 August 2012 Rwanda — 22 May 1996 S Saint Kitts and Nevis — 21 February 1996 Saint Lucia — 1 January 1995 Saint Vincent and the Grenadines — 1 January 1995 Samoa — 10 May 2012 Saudi Arabia, Kingdom of — 11 December 2005 Senegal — 1 January 1995 Seychelles — 26 April 2015 Sierra Leone — 23 July 1995 Singapore — 1 January 1995 Slovak Republic — 1 January 1995 Slovenia — 30 July 1995 Solomon Islands — 26 July 1996 South Africa — 1 January 1995 Spain — 1 January 1995 Sri Lanka — 1 January 1995 Suriname — 1 January 1995 Sweden — 1 January 1995 Switzerland — 1 July 1995 T Chinese Taipei — 1 January 2002 Tajikistan — 2 March 2013 Tanzania — 1 January 1995 Thailand — 1 January 1995 Togo — 31 May 1995 Tonga — 27 July 2007 Trinidad and Tobago — 1 March 1995 Tunisia — 29 March 1995 Turkey — 26 March 1995 U Uganda — 1 January 1995 Ukraine — 16 May 2008 United Arab Emirates — 10 April 1996 United Kingdom — 1 January 1995 United States — 1 January 1995 Uruguay — 1 January 1995 V Vanuatu — 24 August 2012 Venezuela, Bolivarian Republic of — 1 January 1995 Viet Nam — 11 January 2007 Y Yemen — 26 June 2014 Z Zambia — 1 January 1995 Zimbabwe — 5 March 1995 Intellectual property protections Yinan Wang.2012 HANDLING THE U.S.-CHINA INTELLECTUAL PROPERTY RIGHTS DISPUTE – THE ROLE OF WTO’S DISPUTE SETTLEMENT SYSTEM. 
https://etd.ohiolink.edu/apexprod/rws_etd/send_file/send?accession=miami1336224534&disposition=inline In short, intellectual property is “information with commercial value.”84 Primo Braga defines intellectual property rights as “a composite of ideas, inventions, and creative expressions and the public willingness to bestow the status of property on them.”85 The WTO has divided intellectual property rights into two broader areas—copyright and rights related to copyright; and industrial property. Copyright protects “the rights of authors of literary and artistic works (such as books and other writings, musical compositions, paintings, sculpture, computer programs and films)… for a minimum period of 50 years after the death of the author.”86 Copyright also covers the rights of performers, such as singers, actors, and musicians, phonograms producers, and broadcasting organizations. Industrial property consists of trademarks (as well as service marks) and patents. Maskus defines trademark as “a symbol or other identifier that conveys information to the consumer about the product.”87 Trademark is the protection of distinctive signs which identify a product, company or service. If consumers believe that the mark is a reliable indicator of desirable characteristics of a good or service, they would be willing to pay a premium for the good or service. Related to trademarks is geographic indications, “which identify a good as originating in a place where a given characteristic of the good is essentially attributable to its geographical origin”.88 Other types of industrial property include primarily patents, but also industrial designs and trade secrets. 
According to Mertha, “patents provide inventors with the right of exclusion from the use, production, sales, or import of the product or technology in question for a specified period of time”.89 Protection of these types of industrial properties is to “stimulate innovation, design and the creation of technology.”90 Medicine Google No Date Google. “medicine”. No Date. Accessed 8/6/21. https://www.google.com/search?q=medicines+definition&rlz=1C1CHBF_enUS877US877&oq=medicines+&aqs=chrome.1.69i59l3j69i60.2379j0j7&sourceid=chrome&ie=UTF-8 Xu the science or practice of the diagnosis, treatment, and prevention of disease (in technical use often taken to exclude surgery). Violation: x 1 Limits: their model has no resolutional bound and creates the possibility for literally an infinite number of 1ACs. It allows someone to specialize in one area for 4 years, giving a huge edge over people who switch research focus every 2 months, which means their arguments are presumptively false because they haven’t been subject to well-researched clash. 2 Clash---forfeiting government action sanctions retreat from controversy and forces the negative to concede solvency before winning a link -- clash is the necessary condition for distinguishing debate from discussion, but negation exists on a sliding scale -- that jumpstarts the process of critical thinking, reflexivity, and argument refinement. 
It’s also key to movement building—no critical testing means 3 Fairness is an impact – 1 it’s an intrinsic good – some level of competitive equity is necessary to sustain the activity – if it didn’t exist, then there wouldn’t be value to the game since judges could literally vote whatever way they wanted regardless of the competing arguments made 2 probability – your ballot can’t solve their impacts but it can solve mine – debate can’t alter subjectivity, but can rectify skews 3 internal link turns every impact – a limited topic promotes in-depth research and engagement which is necessary to access all of their education 4 comes before substance – deciding any other argument in this debate cannot be disentangled from our inability to prepare for it – any argument you think they’re winning is a link, not a reason to vote for them, since it’s just as likely that they’re winning it because we weren’t able to effectively prepare to defeat it. This means they don’t get to weigh the aff. 4 TVA – Read the aff on the neg as a counter methodology – solves their offense because they can engage in psychoanalysis. Defend an aff that claims that reducing IP rights destroys the WTO which is good under feminist psychoanalysis. 
Defend an aff that says that the member nations themselves implement the plan and not the WTO – solves their offense because their only link ev is about the WTO specifically, not its member nations. DTD – it’s key to norm set and deter future abuse. Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm. No RVIs – A – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly B – it's illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments. Evaluate the debate after the 1NC – key to let both of us sleep before the next round – outweighs because health is a prerequisite to debating
9/23/21
SO - T - Leslie
Tournament: Valley RR | Round: 4 | Opponent: Lexington BF | Judge: Keshav Dandu, Triniti Krauss Interpretation: The aff must defend that member nations reduce intellectual property protections for all medicines The upward entailment test and adverb test determine the genericity of a bare plural Leslie and Lerner 16 Sarah-Jane Leslie, Ph.D., Princeton, 2007. Dean of the Graduate School and Class of 1943 Professor of Philosophy. Served as the vice dean for faculty development in the Office of the Dean of the Faculty, director of the Program in Linguistics, and founding director of the Program in Cognitive Science at Princeton University. Adam Lerner, PhD Philosophy, Postgraduate Research Associate, Princeton 2018. From 2018, Assistant Professor/Faculty Fellow in the Center for Bioethics at New York University. Member of the Princeton Social Neuroscience Lab. “Generic Generalizations.” Stanford Encyclopedia of Philosophy. April 24, 2016. https://plato.stanford.edu/entries/generics/ TG Generics and Logical Form In English, generics can be expressed using a variety of syntactic forms: bare plurals (e.g., “tigers are striped”), indefinite singulars (e.g., “a tiger is striped”), and definite singulars (“the tiger is striped”). However, none of these syntactic forms is dedicated to expressing generic claims; each can also be used to express existential and/or specific claims. Further, some generics express what appear to be generalizations over individuals (e.g., “tigers are striped”), while others appear to predicate properties directly of the kind (e.g., “dodos are extinct”). These facts and others give rise to a number of questions concerning the logical forms of generic statements. 1.1 Isolating the Generic Interpretation Consider the following pairs of sentences: (1)a.Tigers are striped. b.Tigers are on the front lawn. (2)a.A tiger is striped. b.A tiger is on the front lawn. (3)a.The tiger is striped. b.The tiger is on the front lawn. 
The sentence pairs above are prima facie syntactically parallel—both are subject-predicate sentences whose subjects consist of the same common noun coupled with the same, or no, article. However, the interpretation of first sentence of each pair is intuitively quite different from the interpretation of the second sentence in the pair. In the second sentences, we are talking about some particular tigers: a group of tigers in (1b), some individual tiger in (2b), and some unique salient or familiar tiger in (3b)—a beloved pet, perhaps. In the first sentences, however, we are saying something general. There is/are no particular tiger or tigers that we are talking about. The second sentences of the pairs receive what is called an existential interpretation. The hallmark of the existential interpretation of a sentence containing a bare plural or an indefinite singular is that it may be paraphrased with “some” with little or no change in meaning; hence the terminology “existential reading”. The application of the term “existential interpretation” is perhaps less appropriate when applied to the definite singular, but it is intended there to cover interpretation of the definite singular as referring to a unique contextually salient/familiar particular individual, not to a kind. There are some tests that are helpful in distinguishing these two readings. For example, the existential interpretation is upward entailing, meaning that the statement will always remain true if we replace the subject term with a more inclusive term. Consider our examples above. In (1b), we can replace “tiger” with “animal” salva veritate, but in (1a) we cannot. If “tigers are on the lawn” is true, then “animals are on the lawn” must be true. However, “tigers are striped” is true, yet “animals are striped” is false. (1a) does not entail that animals are striped, but (1b) entails that animals are on the front lawn (Lawler 1973; Laca 1990; Krifka et al. 1995). 
Another test concerns whether we can insert an adverb of quantification with minimal change of meaning (Krifka et al. 1995). For example, inserting “usually” in the sentences in (1a) (e.g., “tigers are usually striped”) produces only a small change in meaning, while inserting “usually” in (1b) dramatically alters the meaning of the sentence (e.g., “tigers are usually on the front lawn”). (For generics such as “mosquitoes carry malaria”, the adverb “sometimes” is perhaps better used than “usually” to mark off the generic reading.) It applies to “Medicines” – adding “generally” to the res doesn’t substantially change its meaning and the rez doesn’t entail reducing IP protections for all biotechnology Violation: they defend covid Net benefits - 1 Limits – 580 recognized medicines plus combinations makes negating impossible especially with no unifying disads against medicines with different policies, implementation and IP procedures 2 Precision outweighs – it determines which interps your ballot can endorse by providing the only salient focal point for debates—if their interp is not premised on the text of the resolution, its benefits are irrelevant to the question of topicality since it fails to interpret the topic 3 Ground - The aff can claim any advantage to a virtual infinite combination of affs and the lack of predictability for negatives means virtually no DAs are applicable because Affirmatives can de-link out of them.
9/25/21
SO - T - Reduce
Tournament: Loyola | Round: 3 | Opponent: Bishops AC | Judge: Abhishek Rao 1 1 Interpretation - Reduce means permanent reduction – it’s distinct from “waive” or “suspend.” Reynolds 59 (Judge (In the Matter of Doris A. Montesani, Petitioner, v. Arthur Levitt, as Comptroller of the State of New York, et al., Respondents NO NUMBER IN ORIGINAL Supreme Court of New York, Appellate Division, Third Department 9 A.D.2d 51; 189 N.Y.S.2d 695; 1959 N.Y. App. Div. LEXIS 7391 August 13, 1959, lexis) Section 83's counterpart with regard to nondisability pensioners, section 84, prescribes a reduction only if the pensioner should again take a public job. The disability pensioner is penalized if he takes any type of employment. The reason for the difference, of course, is that in one case the only reason pension benefits are available is because the pensioner is considered incapable of gainful employment, while in the other he has fully completed his "tour" and is considered as having earned his reward with almost no strings attached. It would be manifestly unfair to the ordinary retiree to accord the disability retiree the benefits of the System to which they both belong when the latter is otherwise capable of earning a living and had not fulfilled his service obligation. If it were to be held that withholdings under section 83 were payable whenever the pensioner died or stopped his other employment the whole purpose of the provision would be defeated, i.e., the System might just as well have continued payments during the other employment since it must later pay it anyway. *13 The section says "reduced", does not say that monthly payments shall be temporarily suspended; it says that the pension itself shall be reduced. The plain dictionary meaning of the word is to diminish, lower or degrade. The word "reduce" seems adequately to indicate permanency. 2 Violation – the plan waives intellectual property protections temporarily, which is an indefinite suspension. 
That’s 1AC Lindsay 2 – 3 Vote neg for limits and neg ground – re-instatement under any infinite number of conditions doubles aff ground – every plan becomes either temporary or permanent – you cherry-pick the best criteria and I must prep every aff while they avoid core topic discussions like reduction-based DAs which decks generics like Pharma Innovation and Bio-Tech. 4 Paradigm Issues – a Topicality is Drop the Debater – it’s a fundamental baseline for debate-ability. b Use Competing Interps – 1 Topicality is a yes/no question, you can’t be reasonably topical and 2 Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation. c No RVI’s - 1 Forces the 1NC to go all-in on Theory which kills substance education, 2 Encourages Baiting since the 1AC will purposely be abusive, and 3 Illogical – you shouldn’t win for not being abusive.
9/25/21
SO - T - Reduce v2
Tournament: Loyola | Round: 6 | Opponent: Harvard Westlake EJ | Judge: Neville Tom 1 1 Interpretation - Reduce means permanent reduction – it’s distinct from “waive” or “suspend.” Reynolds 59 (Judge (In the Matter of Doris A. Montesani, Petitioner, v. Arthur Levitt, as Comptroller of the State of New York, et al., Respondents NO NUMBER IN ORIGINAL Supreme Court of New York, Appellate Division, Third Department 9 A.D.2d 51; 189 N.Y.S.2d 695; 1959 N.Y. App. Div. LEXIS 7391 August 13, 1959, lexis) Section 83's counterpart with regard to nondisability pensioners, section 84, prescribes a reduction only if the pensioner should again take a public job. The disability pensioner is penalized if he takes any type of employment. The reason for the difference, of course, is that in one case the only reason pension benefits are available is because the pensioner is considered incapable of gainful employment, while in the other he has fully completed his "tour" and is considered as having earned his reward with almost no strings attached. It would be manifestly unfair to the ordinary retiree to accord the disability retiree the benefits of the System to which they both belong when the latter is otherwise capable of earning a living and had not fulfilled his service obligation. If it were to be held that withholdings under section 83 were payable whenever the pensioner died or stopped his other employment the whole purpose of the provision would be defeated, i.e., the System might just as well have continued payments during the other employment since it must later pay it anyway. *13 The section says "reduced", does not say that monthly payments shall be temporarily suspended; it says that the pension itself shall be reduced. The plain dictionary meaning of the word is to diminish, lower or degrade. The word "reduce" seems adequately to indicate permanency. 2 Violation – the plan waives intellectual property protections temporarily, which is an indefinite suspension. 
That’s 1AC. 3 Vote neg for limits and neg ground – reinstatement under an infinite number of conditions doubles aff ground – every plan becomes either temporary or permanent – you cherry-pick the best criteria and I must prep every aff while they avoid core topic discussions like reduction-based DAs, which decks generics like Pharma Innovation and Bio-Tech. 4 Fairness is a voter and outweighs – education can be gained from research. 5 TVA solves – permanently reduce COVID patents. 6 Paradigm Issues – a Topicality is Drop the Debater – it’s a fundamental baseline for debate-ability. b Use Competing Interps – 1 Topicality is a yes/no question – you can’t be reasonably topical – and 2 Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation. c No RVIs – 1 Forces the 1NC to go all-in on Theory, which kills substance education, 2 Encourages Baiting since the 1AC will purposely be abusive, and 3 Illogical – you shouldn’t win for not being abusive.
9/23/21
SO - T - Reduce v3
Tournament: Valley | Round: 1 | Opponent: Harker PG | Judge: TJ Maher 3 Interpretation – “Reduce” means to annul. Black’s Law 90 Black’s Law Dictionary 2ND ED. “Reduce” https://dictionary.thelaw.com/reduce/ Elmer In Scotch law. To rescind or annul. That means the Aff has to cancel IP protections in their entirety – they can’t just modify them. Black’s Law 90 Black’s Law Dictionary 2ND ED. “Annul” https://thelawdictionary.org/annul/ Elmer To cancel; make void; destroy. To annul a judgment or judicial proceeding is to deprive it of all force and operation, either ab initio or prospectively as to future transactions. Wait v. Wait, 4 Barb. (N. Y.) 205; Woodson v. Skinner, 22 Mo. 24; In re Morrow’s Estate, 204 Pa. 484, 54 Atl. 342. Violation – They don’t remove the IP – the Trade Secret still has the same protection under law; it cannot be disclosed unless disclosure is in the public interest – the Aff only shifts who has to prove that, NOT the actual protection. Standards – A Limits – Allowing Affs to deal with the enforcement of IP rather than the actual protection explodes the Topic – Affs can modify court proceedings, specify which courts hear the cases, how long those proceedings last, which agencies pursue legal action, etc. – it eviscerates a predictable stasis by shifting it away from IPP good/bad. B Neg Ground – Shifting the topic to enforcement means DAs like Innovation, Biotech, Heg, and Politics no longer apply, since the Aff doesn’t have to reduce anything related to the IPP itself – proven by the fact we can’t read Trade Secrets Good vs this Aff, since the 1AR will shift to “the IP itself doesn’t change, and if it were good, the Aff wouldn’t be enforced,” proving modifications are infinitely abusive. 4 TVA – eliminate Trade Secret protection of Pharma to eliminate deterrent litigation against whistle-blowers, since there’s no longer a legal basis for enforcement.
9/25/21
SO - Theory - Espec
Tournament: Jack Howe | Round: 2 | Opponent: Brentwood BB | Judge: Vanessa Nguyen 1 Interpretation – the Affirmative must present a delineated enforcement mechanism for the Plan. There is no normal means since terms are negotiated contextually among member states. WTO No Date "Whose WTO is it anyway?" https://www.wto.org/english/thewto_e/whatis_e/tif_e/org1_e.htm Elmer When WTO rules impose disciplines on countries’ policies, that is the outcome of negotiations among WTO members. The rules are enforced by the members themselves under agreed procedures that they negotiated, including the possibility of trade sanctions. But those sanctions are imposed by member countries, and authorized by the membership as a whole. This is quite different from other agencies whose bureaucracies can, for example, influence a country’s policy by threatening to withhold credit. Violation: they don’t. Standards 1 Shiftiness – They can redefine the 1AC’s enforcement mechanism in the 1AR, which lets them recontextualize their enforcement mechanism to wriggle out of DAs, since all DA links are predicated on the type of enforcement – i.e. Sanctions Bad DAs, domestic Politics DAs off of backlash, an information research sharing DA if they impose monetary punishments, or Trade DAs. 2 Real World – Policy makers will always specify how the mandates of the plan should be enforced. It also means zero solvency – absent spec, states can circumvent the Aff’s policy since there is no delineated way to enforce the affirmative, which means there’s no way to actualize any of their solvency arguments. ESpec isn’t regressive or arbitrary – enforcement is an active part of the WTO and is central to any advocacy about international IP law, since the only uniqueness of a reduction of IP protections is how effective its enforcement is. 
Fairness and education are voters – it’s how judges evaluate rounds and why schools fund debate. DTD – it’s key to norm-set and deter future abuse. Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm. No RVIs – A – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly. B – It’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments.
9/18/21
SO - Theory - Solvency Advocate
Tournament: Jack Howe | Round: 2 | Opponent: Brentwood BB | Judge: Vanessa Nguyen 2 Interp – If the 1AC specifies further than the Resolution, they must have a Solvency Advocate specifically advocating for the actors isolated in the plan implementing the plan. Violation – there are no cards. Vote Negative: a Ground – If there is no Solvency Advocate defining the Plan, then there is no relevant Topic Lit to define Negative offense solely within the confines of the Plan. That wrecks our ability to Clash, since if we read generics, the 1AR will always say “not unique to us.” b Limits – Having a Solvency Advocate that specifically advocates the 1AC is the only way to limit an unlimited topic, because under their model any 1AC that just says IP bad is topical, which makes super small Affs like malaria topical that don’t have robust Neg Ground in the lit base. Solvency Advocate theory isn’t frivolous or infinitely regressive – it’s a floor to ensure the Negative has an equal ability to access relevant topic literature as the Aff – it’s a central question for the burden of rejoinder: your specific proposal must be grounded in literature to ensure a stasis. Our distinction about specificity matters – it’s key to precision – questions of where the plan applies and who is affected are key issues surrounding every discussion – Topic Ed outweighs since we only have 2 months to debate the topic.
9/18/21
SO - Theory - Spec Reductions
Tournament: Valley RR | Round: 1 | Opponent: Strake Jesuit KS | Judge: Spencer Orlowski, Brixz Gonzaba 1 1 Interpretation: The affirmative must specify a) which intellectual property rights they reduce and b) to what degree they reduce them. Intellectual Property is a vague, meaningless term – there’s no normal means. Chopra 18, Samir. “The Idea of Intellectual Property Is Nonsensical and Pernicious: Aeon Essays.” Aeon, Aeon Magazine, 12 Nov. 2018, aeon.co/essays/the-idea-of-intellectual-property-is-nonsensical-and-pernicious. Samir Chopra is professor of philosophy at Brooklyn College of the City University of New York. He is the author of several books, including A Legal Theory for Autonomous Artificial Agents (2011), co-authored with Laurence White. sid In the United States, media and technology have been shaped by these laws, and indeed many artists and creators owe their livelihoods to such protections. But recently, in response to the new ways in which the digital era facilitates the creation and distribution of scientific and artistic products, the foundations of these protections have been questioned. Those calling for reform, such as the law professors Lawrence Lessig and James Boyle, free software advocates such as Richard Stallman, and law and economics scholars such as William Landes and Judge Richard Posner, ask: is ‘intellectual property’ the same kind of property as ‘tangible property’, and are legal protections for the latter appropriate for the former? And to that query, we can add: is ‘intellectual property’ an appropriate general term for the widely disparate areas of law it encompasses? The answer to all these questions is no. And answering the latter question will help to answer the former. Stallman is a computer hacker extraordinaire and the fieriest exponent of the free-software movement, which holds that computer users and programmers should be free to copy, share and distribute software source code. 
He has argued that the term ‘intellectual property’ be discarded in favour of the precise and directed use of ‘copyright’, ‘patents’, ‘trademarks’ or ‘trade secrets’ instead – and he’s right. This is not merely semantic quibbling. The language in which a political and cultural debate is conducted very often determines its outcome. Stallman notes that copyright, patent, trademark and trade secret law were motivated by widely differing considerations. Their intended purposes, the objects covered and the permissible constraints all vary. In fact, knowledge of one body of law rarely carries over to another. (A common confusion is to imagine that an object protected by one area of law is actually protected by another: ‘McDonald’s’ is protected by trademark law, not copyright law, as many consumers seem to think.) Such diversity renders most ‘general statements … using “intellectual property”… false,’ Stallman writes. Consider the common claim that intellectual property promotes innovation: this is actually true only of patent law. Novels are copyrighted even if they are formulaic, and copyright only incentivises the production of new works as public goods while allowing creators to make a living. These limited rights do not address innovations, which is also true of trademark and trade secret law. Crucially, ‘intellectual property’ is only partially concerned with rewarding creativity (that motivation is found in copyright law alone). Much more than creativity is ‘needed to make a patentable invention’, Stallman explains, while trademark and trade secret law are orthogonal to creativity or its encouragement. A general term is useful only if it subsumes related concepts in such a way that semantic value is added. If our comprehension is not increased by our chosen generalised term, then we shouldn’t use it. 
A common claim such as ‘they stole my intellectual property’ is singularly uninformative, since the general term ‘intellectual property’ obscures more than it illuminates. If copyright infringement is alleged, we try to identify the copyrightable concrete expression, the nature of the infringement and so on. If patent infringement is alleged, we check another set of conditions (does the ‘new’ invention replicate the design of the older one?), and so on for trademarks (does the offending symbol substantially and misleadingly resemble the protected trademark?) and trade secrets (did the enterprise attempt to keep supposedly protected information secret?) The use of the general term ‘intellectual property’ tells us precisely nothing. Furthermore, the extreme generality encouraged by ‘intellectual property’ obscures the specific areas of contention created by the varying legal regimes. Those debating copyright law wonder whether the copying of academic papers should be allowed; patent law is irrelevant here. Those debating patent law wonder whether pharmaceutical companies should have to issue compulsory licences for life-saving drugs to poor countries; copyright law is irrelevant here. ‘Fair use’ is contested in copyright litigation; there is no such notion in patent law. ‘Non-obviousness’ is contested in patent law; there is no such notion in copyright law. Clubbing these diversities under the term ‘intellectual property’ has induced a terrible intellectual error: facile and misleading overgeneralisation. Indiscriminate use of ‘intellectual property’ has unsurprisingly bred absurdity. Anything associated with a ‘creator’ – be it artistic or scientific – is often grouped under ‘intellectual property’, which doesn’t make much sense. And the widespread embrace of ‘intellectual property’ has led to historical amnesia. 
According to Stallman, many Americans have held that ‘the framers of the US Constitution had a principled, procompetitive attitude to intellectual property’. But Article 1, Section 8, Clause 8 of the US Constitution authorises only copyright and patent law. It does not mention trademark law or trade secret law. Why then does ‘intellectual property’ remain in use? Because it has polemical and rhetorical value. Its deployment, especially by a putative owner, is a powerful inducement to change one’s position in a policy argument. It is one thing to accuse someone of copyright infringement, and another to accuse them of the theft of property. The former sounds like a legally resolvable technicality; the latter sounds like an unambiguously sinful act. 2 Violation: they don’t. 3 Standards a Shiftiness – vague plan wording wrecks Neg Ground since it’s impossible to know which DAs link or which CPs are competitive, since different IPs have different implications – absent 1AC specification, the 1AR can squirrel out of links by saying they don’t affect a certain protection or they don’t reduce IP enough to trigger the link. b Topic Education – nuanced debates about IP require specification since each form of IPR has specific issues related to it, so generalization disincentivizes in-depth research. Topic Education is a voter since we only debate the topic for two months. 
Neg theory is DTD – 1ARs control the direction of the debate because they determine what the 2NR has to go for – DTD gives us some leeway in the round by returning some control over its direction. Competing interps – Reasonability invites arbitrary judge intervention and a race to the bottom of questionable argumentation – it also collapses since brightlines operate on an offense-defense paradigm. No RVIs – A – Going all in on theory kills substance education, which outweighs on timeframe. B – Discourages checking real abuse, which outweighs on norm-setting. C – Encourages theory baiting – outweighs because if the shell is frivolous, they can beat it quickly. D – It’s illogical for you to win for proving you were fair – outweighs since logic is a litmus test for other arguments. E – Kills norm setting since debaters can never admit they’re wrong – outweighs since norm setting is the constitutive purpose of theory. F – RVIs are the logic of criminalization that over-punishes people of color for trying to create productive discourse.