A cross-modal hub for narrative processing

Stories are focused on the protagonist

Excerpts from: Journal of Cognitive Neuroscience, Volume 30, No. 9

Storytelling Is Intrinsically Mentalistic: A Functional Magnetic Resonance Imaging Study of Narrative Production across Modalities


my notes in: [ ]


People utilize multiple expressive modalities for communicating narrative ideas about past events. The three major ones are speech, pantomime, and drawing. The current study used functional magnetic resonance imaging to identify common brain areas that mediate narrative communication across these three sensorimotor mechanisms. In the scanner, participants were presented with short narrative prompts akin to newspaper headlines (e.g., “Surgeon finds scissors inside of patient”). The task was to generate a representation of the event, either by describing it verbally through speech, by pantomiming it gesturally, or by drawing it on a tablet. In a control condition designed to remove sensorimotor activations, participants described the spatial properties of individual objects (e.g., “binoculars”). Each of the three modality-specific subtractions produced similar results, with activations in key components of the mentalizing network, including the TPJ,

[temporoparietal junction]

posterior STS [posterior superior temporal sulcus], and posterior cingulate cortex. Conjunction analysis revealed that these areas constitute a cross-modal “narrative hub” that transcends the three modalities of communication. The involvement of these areas in narrative production suggests that people adopt an intrinsically mentalistic and character-oriented perspective when engaging in storytelling, whether using speech, pantomime, or drawing.


Theories of language origin can be divided into “vocal” and “gestural” models (McGinn, 2015; Arbib, 2012; Armstrong & Wilcox, 2007; MacNeilage & Davis, 2005; Corballis, 2002). Gestural models posit that manually produced symbols evolved earlier than those produced vocally and that speech was a replacement for a preestablished symbolic system that was mediated by gesture alone. Importantly, the kind of gesturing that gestural models allude to is “pantomime” or iconic gesturing. Iconic gesturing through pantomime is thought to have predated symbolic gesturing, passing through an intermediate stage that Arbib (2012) refers to as “proto-symbol.”

From a neuroscientific perspective, these theories of language origin establish a fundamental contrast between two different sensorimotor routes for the conveyance of language, namely, the audiovocal route for speech and the visuo-manual route for pantomime. Language is an inherently multimodal phenomenon, not least through the gesturing that accompanies speaking (Beattie, 2016; Kendon, 2015; McNeill, 2005). Humans have yet a third means of conveying semantic ideas, and that is through the generation of images, as occurs through drawing and writing (Elkins, 2001). We have argued elsewhere that the capacity for drawing is an evolutionary offshoot of the system for producing iconic gestures such as pantomimes (Yuan & Brown, 2014). Drawing is essentially a tool-use gesture that “leaves a trail behind” in the form of a resulting image. Overall, speech, pantomime, and image generation comprise a “narrative triad,” representing the three major modalities by which humans have evolved to referentially communicate their ideas to one another.

Perhaps the most important function of language is the communication of narrative, conveying the actions of agents, or “who did what to whom.”


Agency is one of the primary elements that is encoded in syntactic structure (Tallerman, 2015). Although word order varies across languages, 96% of languages place the subject (the agent) before the thing that the subject acts upon (Tomlin, 1986). Hence, an “agent first” organization of sentences seems to be an ancestral feature of language grammar (Jackendoff, 1999), and gestural models of language origin highlight this type of sentence organization as well (Armstrong & Wilcox, 2007). Although language is well designed to communicate agency through syntax, it typically does so in a multimodal manner, combining speech and gesture. A basic question for the evolutionary neuroscience of human communication is whether the conveyance of narrative is linked to specific sensorimotor modalities (vocal vs. manual) or whether there are cross-modal narrative areas in the brain that transcend these modalities. This question led us to design an experiment in which we would explore for the first time whether cross-modal brain areas mediate the communication of narrative ideas using speech, pantomime, and drawing as the triad of production modalities.

Most previous neuroimaging studies of cross-modal communication are perceptual, and we are not aware of production studies that have compared any pair of functions among speech,pantomime, and drawing in healthy adults.

Evolutionary Implications

Both vocal and gestural models of language attempt to account for the origins of syntax. As mentioned in the Introduction, language grammar seems to have an intrinsically narrative structure to it, being efficient at describing who did what to whom—in other words, agency. Standard subject–verb–object models of syntactic structure (Tallerman, 2015) essentially encapsulate the kinds of transitive actions that we examined in our headlines. A large majority of languages operate on an agent-first basis, putting the actor before either the action or the target of the action. To the extent that agency is one of the most fundamental things that is conveyed in grammars (and which is lacking in so-called proto-languages; Bickerton, 1995), then our results have application to evolutionary models of language. In particular, the imaging results that were obtained in the most purely linguistic condition (speech) were replicated almost identically in the nonlinguistic conditions of pantomime and drawing. This cross-modal similarity suggests that the capacity of syntax to represent agency can be achieved through nonlinguistic means employing essentially the same brain network.

A number of biological theories of language propose that syntax emerged from basic processes of motor sequencing (Arbib, 2012; Fitch, 2011; Jackendoff, 2011). Although this might account for grammar’s connection with object-directed actions—in other words, the gestural level of representation—it may not do justice to the sense of agency that is well contained in syntactic structure. Hence, we suggest that another important evolutionary ingredient in the emergence of syntax—beyond the “plot” elements contained in motor sequencing—would be the incorporation of circuits that mediate the sense of agency, not least “other” agency. To be clear, we are not arguing that the TPJ and pSTS are syntax areas. We are simply suggesting that, whereas circuits in the IFG [inferior frontal gyrus] more typically associated with syntax (Zaccarella & Friederici, 2017) might mediate the gestural level of language, the TPJ might have a stronger connection with agents in the overall scheme of language, discourse, and narrative. Agency can be conveyed linguistically through speech and sign, but it can also be conveyed nonlinguistically through pantomime (iconic gesturing) and drawing.


In this first three-modality fMRI study of narrative production, we observed results that suggest that people generate stories in an intrinsically mentalistic fashion focused on the protagonist, rather than in a purely gestural manner related to the observable action sequence. The same set of mentalizing and social cognition areas came up with each of the three modalities of production that make up the narrative triad, pointing to a common set of cognitive operations across modalities. These operations are most likely rooted in character processing, as related to a character’s intentions, motivations, beliefs, emotions, and actions. Hence, narratives—whether spoken, pantomimed, or drawn—seem to be rooted in the communication of “other-agency.”


LHC 07/2018

After running the magnets through the restart cycle, Schaumann and her colleagues tried again, this time with only six bunches. They kept the beam circulating for two hours before intentionally dumping it.

Physicists are doing these tests to see if the LHC could one day operate as a gamma-ray factory. In this scenario, scientists would shoot the circulating “atoms” with a laser, causing the electron to jump into a higher energy level. As the electron falls back down, it spits out a particle of light. In normal circumstances, this particle of light would not be very energetic, but because the “atom” is already moving at close to the speed of light, the energy of the emitted photon is boosted and its wavelength is squeezed (due to the Doppler effect).
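The size of this Doppler boost can be put in rough numbers. A minimal sketch, assuming a head-on laser-ion geometry (where the boost relative to the laser photon is roughly 4γ²) and illustrative values for the ion's Lorentz factor and the laser photon energy; neither figure comes from the article:

```python
# Hypothetical numbers for illustration only.
gamma = 2900.0      # assumed Lorentz factor of the circulating ion
e_laser_ev = 1.2    # assumed lab-frame laser photon energy, in eV

# Head-on collision: in the ion's rest frame the laser photon
# appears blueshifted by a factor of roughly 2*gamma.
e_ion_frame = 2 * gamma * e_laser_ev

# The de-exciting electron re-emits at the same ion-frame energy;
# forward emission boosts it by another ~2*gamma back in the lab frame,
# for a total boost of ~4*gamma**2 over the original laser photon.
e_lab_max = 2 * gamma * e_ion_frame

print(f"max boosted photon energy ~ {e_lab_max / 1e6:.0f} MeV")
```

With these assumed numbers the emitted light lands in the tens of MeV, i.e. well into the gamma-ray regime, which is the point of the "gamma-ray factory" idea.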

These gamma rays would have sufficient energy to produce normal “matter” particles, such as quarks, electrons and even muons. Because matter and energy are two sides of the same coin, these high-energy gamma rays would transform into massive particles and could even morph into new kinds of matter, such as dark matter. They could also be the source for new types of particle beams, such as a muon beam.

LHC collides ions at new record energy

The accelerator is colliding lead ions at an energy about twice as high as that of any previous collider experiment

25 November, 2015

After the successful restart of the Large Hadron Collider (LHC) and its first months of data taking with proton collisions at a new energy frontier, the LHC is moving to a new phase, with the first lead-ion collisions of Run 2 at an energy about twice as high as that of any previous collider experiment. Following a period of intense activity to re-configure the LHC and its chain of accelerators for heavy-ion beams, CERN’s accelerator specialists put the beams into collision for the first time in the early morning of 17 November 2015 and ‘stable beams’ were declared at 10.59am today, marking the start of a one-month run with positively charged lead ions: lead atoms stripped of electrons. The four large LHC experiments will all take data over this campaign, including LHCb, which will record this kind of collision for the first time. Colliding lead ions allows the LHC experiments to study a state of matter that existed shortly after the big bang, reaching a temperature of several trillion degrees.

Increasing the energy of collisions will increase the volume and the temperature of the quark and gluon plasma, allowing for significant advances in understanding the strongly-interacting medium formed in lead-ion collisions at the LHC. As an example, in Run 1 the LHC experiments confirmed the perfect liquid nature of the quark-gluon plasma and the existence of “jet quenching” in ion collisions, a phenomenon in which generated particles lose energy through the quark-gluon plasma. The high abundance of such phenomena will provide the experiments with tools to characterize the behaviour of this quark-gluon plasma. Measurements to higher jet energies will thus allow new and more detailed characterization of this very interesting state of matter.

 “The heavy-ion run will provide a great complement to the proton-proton data we’ve taken this year,” said ATLAS collaboration spokesperson Dave Charlton. “We are looking forward to extending ATLAS’ studies of how energetic objects such as jets and W and Z bosons behave in the quark gluon plasma.”



From: Encyclopedia Britannica

Metalogic: Semiotic

Originally, the word “semiotic” meant the medical theory of symptoms; however, an empiricist, John Locke, used the term in the 17th century for a science of signs and significations. The current usage was recommended especially by Rudolf Carnap—see his Introduction to Semantics (1942) and his reference there to Charles William Morris, who suggested a threefold distinction. According to this usage, semiotic is the general science of signs and languages, consisting of three parts: (1) pragmatics (in which reference is made to the user of the language), (2) semantics (in which one abstracts from the user and analyzes only the expressions and their meanings), and (3) syntax (in which one abstracts also from the meanings and studies only the relations between expressions).

Considerable effort since the 1970s has gone into the attempt to formalize some of the pragmatics of natural languages. The use of indexical expressions to incorporate reference to the speaker, his or her location, or the time of either the utterance or the events mentioned was of little importance to earlier logicians, who were primarily interested in universal truths or mathematics. With the increased interest in linguistics there has come an increased effort to formalize pragmatics.

At first Carnap exclusively emphasized syntax. But gradually he came to realize the importance of semantics, and the door was thus reopened to many difficult philosophical problems.

Certain aspects of metalogic have been instrumental in the development of the approach to philosophy commonly associated with the label of logical positivism. In his Tractatus Logico-Philosophicus (1922; originally published under another title, 1921), Ludwig Wittgenstein, a seminal thinker in the philosophy of language, presented an exposition of logical truths as sentences that are true in all possible worlds. One may say, for example, “It is raining or it is not raining,” and in every possible world one of the disjuncts is true. On the basis of this observation and certain broader developments in logic, Carnap tried to develop formal treatments of science and philosophy.

It has been thought that the success that metalogic had achieved in the mathematical disciplines could be carried over into physics and even into biology or psychology. In so doing, the logician gives a branch of science a formal language in which there are logically true sentences having universal logical ranges and factually true sentences having more restricted ranges. (Roughly speaking, the logical range of a sentence is the set of all possible worlds in which it is true.)

A formal solution of the problem of meaning has also been proposed for these disciplines. Given the formal language of a science, it is possible to define a notion of truth. Such a truth definition determines the truth condition for every sentence—i.e., the necessary and sufficient conditions for its truth. The meaning of a sentence is then identified with its truth condition because, as Carnap wrote:

To understand a sentence, to know what is asserted by it, is the same as to know under what conditions it would be true. . . . To know the truth condition of a sentence is (in most cases) much less than to know its truth-value, but it is the necessary starting point for finding out its truth-value.

Influences in other directions

Metalogic has led to a great deal of work of a mathematical nature in axiomatic set theory, model theory, and recursion theory (in which functions that are computable in a finite number of steps are studied).

In a different direction, the devising of Turing computing machines, involving abstract designs for the explication of mechanical logical procedures, has led to the investigation of idealized computers, with ramifications in the theory of finite automata and mathematical linguistics.

Among philosophers of language, there is a widespread tendency to stress the philosophy of logic. The contrast, for example, between intensional concepts and extensional concepts; the role of meaning in natural languages as providing truth conditions; the relation between formal and natural logic (i.e., the logic of natural languages); and the relation of ontology, the study of the kinds of entities that exist, to the use of quantifiers—all these areas are receiving extensive consideration and discussion. There are also efforts to produce formal systems for empirical sciences such as physics, biology, and even psychology. Many scholars have doubted, however, whether these latter efforts have been fruitful.

Axon guidance

eLife digest

Neurons communicate with each other by forming intricate webs that link cells together according to a precise pattern. A neuron can connect to another by growing a branch-like structure known as the axon. To contact the correct neuron, the axon must develop and thread its way to exactly the right place in the brain. Scientists know that the tip of the axon is extraordinarily sensitive to gradients of certain molecules in its surroundings, which guide the budding structure towards its final destination.

In particular, two molecules seem to play an important part in this process: netrin-1, which is a protein found outside cells that attracts a growing axon, and shootin1a, which is present inside neurons. Previous studies have shown that netrin-1 can trigger a cascade of reactions that activates shootin1a. In turn, activated shootin1a molecules join the internal skeleton of the cell with L1-CAM, a molecule that attaches the neuron to its surroundings. If the internal skeleton is the engine of the axon, L1-CAMs are the wheels, and shootin1a the clutch. However, it is not clear whether shootin1a is involved in guiding growing axons, and how it could help neurons ‘understand’ and react to gradients of netrin-1.

Here, Baba et al. discover that when shootin1a is absent in mice, the axons do not develop properly. Further experiments in rat neurons show that if there is a little more netrin-1 on one side of the tip of an axon, this switches on the shootin1a molecules on that edge. Activated shootin1a promote interactions between the internal skeleton and L1-CAM, helping the axon curve towards the area that has more netrin-1. In fact, if the activated shootin1a is present everywhere on the axon, and not just on one side, the structure can develop, but not turn. Taken together, the results suggest that shootin1a can read the gradients of netrin-1 and then coordinate the turning of a growing axon in response.

Wound healing, immune responses or formation of organs are just a few examples of processes that rely on cells moving in an orderly manner through the body. Dissecting how axons are guided through their development may shed light on the migration of cells in general. Ultimately, this could help scientists to understand disorders such as birth abnormalities or neurological disabilities, which arise when this process goes awry.

Neutron stars

Opinion Space 22 January 2018

Colliding neutron stars prove equality before the law of gravity

The neutron star explosion confirmed the equivalence principle: gravitational waves and light travelled 130 million years and arrived at virtually the same time, writes Katie Mack

The scene: Pisa, Italy, late 16th century. Galileo Galilei enters the famous Leaning Tower. He climbs the steps, trailed by his students, carrying two metal balls of different weights. He steps out onto the top balcony, 50 metres above the ground, and holds the balls out over the tilted rail. He lets go. According to Aristotle’s theory of gravity, the heavier ball should fall faster. Galileo has set out to prove this wrong. The collected crowd watch as the two balls fall through the air – and hit the ground, simultaneously.

Galileo’s legendary experiment is considered one of the first demonstrations of the ‘equivalence principle’ – the idea that gravitational fields don’t discriminate. On Earth this means all falling objects will fall the same way. In the cosmos – combined with Einstein’s general relativity – it explains the near-simultaneous arrival of two signals from an explosion that happened a long time ago in a galaxy far far away.

The scene: a distant galaxy, 130 million years ago. Two neutron stars – mind-bendingly dense remnants of stars long dead – are locked in an orbit so tight that gravity warps them into teardrop shapes. Whirling around their common centre of mass, they stretch toward each other. Space itself is caught up in the motion, sending powerful ripples of distortion outward. The stars spiral in. At the instant of contact they create a spacetime tsunami, which spreads like a spherical shock front from a detonation. The stars merge, and within seconds the newly combined star collapses on itself, driving a jet of hard radiation with such incredible ferocity it punches through the stellar carcass and begins tearing across the galaxy.

The gravitational distortion from this event was detected by the LIGO and Virgo observatories, and the gamma-ray flash by the Fermi space telescope. The signals came within two seconds of each other.

The near-simultaneous detection of the signals is another confirmation of a principle as old as Galileo, yet it has huge implications for our theories of gravity, and possibly for dark matter and dark energy.

Gravitational waves, the kind of spacetime distortions created by the neutron star collision, were first predicted by Einstein in 1915 and first detected at LIGO 100 years later. Central to Einstein’s picture of gravity is the idea that everything with mass warps the ‘fabric’ of space, so every planet, star or galaxy creates a kind of dent. When massive objects orbit each other they create ripples in this fabric: gravitational waves. Einstein predicted these waves would travel at the speed of light. We already had evidence of this but the neutron star explosion was a direct confirmation, since the gravitational signal and the light travelled 130 million years and arrived at virtually the same time.

This simultaneous arrival wasn’t guaranteed, even if the speeds were the same. The space between us and that distant galaxy is warped with gravitational divots due to all the masses along the way, including the originating galaxy and our own. The equivalence principle states that gravitational waves and light should both follow the curve of space, diving in and out of these dents, being delayed a little by each diversion. In this case the delay might have been months or even years; but, whatever it was, it was exactly the same for both.
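The strength of this test is easy to see with back-of-envelope arithmetic: a roughly 2-second arrival gap accumulated over a 130-million-year journey bounds the fractional difference between the speed of gravitational waves and the speed of light. A sketch using the two figures quoted in the piece:

```python
# Bound on |v_gw - c| / c implied by the near-simultaneous arrival.
seconds_per_year = 365.25 * 24 * 3600
travel_time_s = 130e6 * seconds_per_year   # ~130 million years, in seconds
arrival_gap_s = 2.0                        # observed gap between signals

fractional_bound = arrival_gap_s / travel_time_s
print(f"|v_gw - c| / c <~ {fractional_bound:.1e}")
```

The result is a few parts in 10^16, which is why a single event could wipe out whole families of alternative gravity theories at a stroke.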

The implication? Lots of theories just died a spectacular death.

New theories of gravity that break the equivalence principle have been proposed to solve problems like dark matter and dark energy. Instead of invisible matter making galaxies rotate too quickly, or mysterious stuff making the universe expand faster, some alternatives conjecture that gravity acts differently than we thought. These theories often have light and gravity following different paths through space, to explain differences between what we see and what general relativity predicts without dark matter and dark energy.

Now we know that doesn’t work. It may be possible to find a new theory of gravity but, at least in regard to the equivalence principle, it has to act exactly the way Einstein proposed.

There are still things we don’t know. Exactly what delayed the gamma rays those two seconds is still up for debate. And whether Galileo really climbed the famous tower himself is lost to history. But both experiments were spectacular demonstrations of the radical universality of gravity, and each expands the edges of our understanding of the universe.

Standard Model (Wikipedia)

At present, matter and energy are best understood in terms of the kinematics and interactions of elementary particles. To date, physics has reduced the laws governing the behavior and interaction of all known forms of matter and energy to a small set of fundamental laws and theories. A major goal of physics is to find the “common ground” that would unite all of these theories into one integrated theory of everything, of which all the other known laws would be special cases, and from which the behavior of all matter and energy could be derived (at least in principle).
Particle content
The Standard Model includes members of several classes of elementary particles (fermions, gauge bosons, and the Higgs boson), which in turn can be distinguished by other characteristics, such as color charge.
All particles can be summarized as follows:
Elementary particles

Quarks (three generations)
Generation    Up-type        Down-type
1.            Up (u)         Down (d)
2.            Charm (c)      Strange (s)
3.            Top (t)        Bottom (b)

Leptons (three generations)
Generation    Charged          Neutral
1.            Electron (e−)    Electron neutrino (νe)
2.            Muon (μ−)        Muon neutrino (νμ)
3.            Tau (τ−)         Tau neutrino (ντ)

Gauge bosons (four fundamental interactions)
1. Photon (γ; electromagnetic interaction)
2. W and Z bosons (W+, W−, Z; weak interaction)
3. Eight types of gluons (g; strong interaction)
4. Graviton (G; gravity; hypothetical)

Higgs boson

Notes:
1. The antielectron (e+) is traditionally called the positron.
2. The known force-carrier bosons all have spin = 1 and are therefore vector bosons. The hypothetical graviton has spin = 2 and is a tensor boson; whether it is also a gauge boson is unknown.
[Figure: Summary of interactions between particles described by the Standard Model.]
The Standard Model includes 12 elementary particles of spin 1/2, known as fermions. According to the spin-statistics theorem, fermions respect the Pauli exclusion principle. Each fermion has a corresponding antiparticle.
The fermions of the Standard Model are classified according to how they interact (or equivalently, by what charges they carry). There are six quarks (up, down, charm, strange, top, bottom), and six leptons (electron, electron neutrino, muon, muon neutrino, tau, tau neutrino). Pairs from each classification are grouped together to form a generation, with corresponding particles exhibiting similar physical behavior (see table).
The defining property of the quarks is that they carry color charge, and hence, interact via the strong interaction. A phenomenon called color confinement results in quarks being very strongly bound to one another, forming color-neutral composite particles (hadrons) containing either a quark and an antiquark (mesons) or three quarks (baryons). The familiar proton and neutron are the two baryons having the smallest mass. Quarks also carry electric charge and weak isospin. Hence, they interact with other fermions both electromagnetically and via the weak interaction.
The remaining six fermions do not carry colour charge and are called leptons. The three neutrinos do not carry electric charge either, so their motion is directly influenced only by the weak nuclear force, which makes them notoriously difficult to detect.
However, by virtue of carrying an electric charge, the electron, muon, and tau all interact electromagnetically.
Each member of a generation has greater mass than the corresponding particles of lower generations. The first generation charged particles do not decay; hence all ordinary (baryonic) matter is made of such particles. Specifically, all atoms consist of electrons orbiting around atomic nuclei, ultimately constituted of up and down quarks. Second and third generation charged particles, on the other hand, decay with very short half lives, and are observed only in very high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter.

Generations of matter

Type        First               Second           Third
up-type     up                  charm            top
down-type   down                strange          bottom
charged     electron            muon             tau
neutral     electron neutrino   muon neutrino    tau neutrino
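The generations table lends itself to a small lookup structure. A sketch (illustrative only; the particle names and rows follow the table above):

```python
# The generations-of-matter table as a lookup: each row maps a fermion
# type to its first-, second-, and third-generation members.
generations = {
    "up-type":   ["up", "charm", "top"],
    "down-type": ["down", "strange", "bottom"],
    "charged":   ["electron", "muon", "tau"],
    "neutral":   ["electron neutrino", "muon neutrino", "tau neutrino"],
}

def generation_of(particle: str) -> int:
    """Return the generation (1-3) that a Standard Model fermion belongs to."""
    for members in generations.values():
        if particle in members:
            return members.index(particle) + 1
    raise KeyError(particle)

print(generation_of("charm"))  # -> 2
print(generation_of("tau"))    # -> 3
```

This mirrors the text's point that particles in the same column exhibit similar physical behavior while differing in mass across generations.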

Fermions and bosons

Those particles with half-integer spins, such as 1/2, 3/2, 5/2, are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle; that is, there cannot be two identical fermions simultaneously having the same quantum numbers (meaning, roughly, having the same position, velocity and spin direction). In contrast, bosons obey the rules of Bose–Einstein statistics and have no such restriction, so they may “bunch together” even if in identical states. Also, composite particles can have spins different from their component particles. For example, a helium atom in the ground state has spin 0 and behaves like a boson, even though the quarks and electrons which make it up are all fermions.

This has profound consequences:

  • Quarks and leptons (including electrons and neutrinos), which make up what is classically known as matter, are all fermions with spin 1/2. The common idea that “matter takes up space” actually comes from the Pauli exclusion principle acting on these particles to prevent the fermions that make up matter from being in the same quantum state. Further compaction would require electrons to occupy the same energy states, and therefore a kind of pressure (sometimes known as degeneracy pressure of electrons) acts to resist the fermions being overly close.
  • Elementary fermions with other spins (3/2, 5/2, etc.) are not known to exist.
  • Elementary bosons with other spins (0, 2, 3, etc.) were not historically known to exist, although they have received considerable theoretical treatment and are well established within their respective mainstream theories. In particular, theoreticians have proposed the graviton (predicted to exist by some quantum gravity theories) with spin 2, and the Higgs boson (explaining electroweak symmetry breaking) with spin 0. Since 2013, the Higgs boson with spin 0 has been considered proven to exist; it is the first scalar elementary particle (spin 0) known to exist in nature.
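The classification rule described above (half-integer spin means fermion, integer spin means boson) can be expressed as a small check. A sketch using exact fractions to avoid floating-point spin values:

```python
from fractions import Fraction

def statistics(spin: Fraction) -> str:
    """Classify a particle by spin: half-integer -> fermion, integer -> boson."""
    if (spin * 2).denominator != 1:
        raise ValueError("spin must be an integer multiple of 1/2")
    # An integer spin has denominator 1; a half-integer spin has denominator 2.
    return "boson" if spin.denominator == 1 else "fermion"

print(statistics(Fraction(1, 2)))  # electron -> fermion
print(statistics(Fraction(0)))     # Higgs boson -> boson
print(statistics(Fraction(2)))     # graviton (hypothetical) -> boson
```

Note that the rule applies to composite particles too: the ground-state helium atom from the text has total spin 0, so it classifies as a boson even though its constituents are all fermions.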


From Wikipedia, the free encyclopedia
Conventional and exotic hadrons

In particle physics, a hadron is a composite subatomic particle whose constituents are quarks and gluons.

“Conventional” hadrons are those that fit Gell-Mann’s quark model, i.e. hadrons consisting of three quarks or of a quark–antiquark pair.
Among these:

1/ Baryons are particles of half-integer spin consisting of three quarks (antibaryons of three antiquarks), i.e. fermions.
Their main examples are the nucleons: the proton and the neutron.

2/ Mesons consist of one quark and one antiquark, like the pions, kaons and a host of other particles. They are particles of integer spin, i.e. bosons.

A schematic depiction of them was also available; see below:





The secret life of Higgs bosons


By Sarah Charley

Are these mass-giving particles hanging out with dark matter?

The Higgs boson has existed since the earliest moments of our universe. Its directionless field permeates all of space and entices transient particles to slow down and burgeon with mass. Without the Higgs field, there could be no stable structures; the universe would be cold, dark and lifeless.

Many scientists are hoping that the Higgs boson will help them understand phenomena not predicted by the Standard Model, physicists’ field guide to the subatomic world. While the Standard Model is an ace at predicting the properties of all known subatomic particles, it falls short on things like gravity, the accelerating expansion of the universe, the supernatural speeds of spinning galaxies, the absurd excess of matter over antimatter, and beyond.

“We can use the Higgs boson as a tool to look for new physics that might not readily interact with our standard set of particles,” says Darin Acosta, a physicist at the University of Florida.

In particular, there’s hope that the Higgs boson might interact with dark matter, thought to be a widespread but never directly detected kind of matter that outnumbers regular matter five to one. This theoretical massive particle makes itself known through its gravitational attraction. Physicists see its fingerprint all over the cosmos in the rotational speed of galaxies, the movements of galaxy clusters and the bending of distant light. Even though dark matter appears to be everywhere, scientists have yet to find a tool that can bridge the light and dark sectors.

If the Higgs field is the only vendor of mass in the cosmos, then dark matter must be a client. This means that the Higgs boson, the spokesparticle of the Higgs field, must have some relationship with dark matter particles.

“It could be that dark matter aids in the production of Higgs bosons, or that Higgs bosons can transform into dark matter particles as they decay,” Acosta says. “It’s simple on paper, but the challenge is finding evidence of it happening, especially when so many parts of the equation are completely invisible.”

The particle that wasn’t there

To find evidence of the Higgs boson flirting with dark matter, scientists must learn how to see the invisible. Scientists never see the Higgs boson directly; in fact, they discovered the Higgs boson by tracing the particles it produces as it decays. Now, they want to precisely measure how frequently the Higgs boson transforms into different types of particles. It’s not easy.

“All we can see with our detector is the last step of the decay, which we call the final state,” says Will Buttinger, a CERN research fellow. “In many cases, the Higgs is not the parent of the particles we see in the final state, but the grandparent.”

The Standard Model not only predicts all the different possible decays of Higgs bosons, but how favorable each decay is. For instance, it predicts that about 60 percent of Higgs bosons will transform into a pair of bottom quarks, whereas only 0.2 percent will transform into a pair of photons. If the experimental results show Higgs bosons decaying into certain particles more or less often than predicted, it could mean that a few Higgs bosons are sneaking off and transforming into dark matter.

Of course, these kinds of precision measurements cannot tell scientists if the Higgs is evolving into dark matter as part of its decay path—only that it is behaving strangely. To catch the Higgs in the act, scientists need irrefutable evidence of the Higgs schmoozing with dark matter.

“How do we see invisible things?” asks Buttinger. “By the influence it has on what we can see.”

For example, humans cannot see the wind, but we can look outside our windows and immediately know if it’s windy based on whether or not trees are swaying. Scientists can look for dark matter particles in a similar way.

“For every action, there is an equal and opposite reaction,” Buttinger says. “If we see particles shooting off in one direction, we know that there must be something shooting off in the other direction.”

If a Higgs boson transforms into a visible particle paired with a dark matter particle, the solitary tracks of the visible particles will have an odd and inexplicable trajectory—an indication that, perhaps, a dark matter particle is escaping.

The Higgs boson is the newest tool scientists have to explore the uncharted terrain within and beyond the Standard Model. The continued research at the LHC and its future upgrades will enable scientists to characterize this reticent particle and learn its close-held secrets.



UPI Mon, Feb 26 3:51 PM GMT+1


The four fundamental interactions of nature (Wikipedia)


                          Gravitation               Weak                 Electromagnetic        Strong,            Strong,
                                                    (electroweak)        (electroweak)          fundamental        residual

Acts on:                  Mass – energy             Flavor               Electric charge        Color charge       Atomic nuclei
Particles experiencing:   All                       Quarks, leptons      Electrically charged   Quarks, gluons     Hadrons
Particles mediating:      Not yet observed          W+, W− and Z0        γ (photon)             Gluons             π, ρ and ω mesons
                          (graviton hypothesised)
Strength at the scale
  of quarks:              10^−41                    10^−4                1                      60                 Not applicable to quarks
Strength at the scale
  of protons/neutrons:    10^−36                    10^−7                1                      Not applicable     20
                                                                                               to hadrons


By MissMJ – Own work by uploader, PBS NOVA [1], Fermilab, Office of Science, United States Department of Energy, Particle Data Group, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4286964

Being and Nothing in Hegel

(Pertaining to the Heidegger texts cited before…)

From The Science of Logic

§ 111

Further, in the beginning, being and nothing are present as distinguished from each other; for the beginning points to something else — it is a non-being which carries a reference to being as to an other; that which begins, as yet is not, it is only on the way to being.

The being contained in the beginning is, therefore, a being which removes itself from non-being or sublates it as something opposed to it.

But again, that which begins already is, but equally, too, is not as yet. The opposites, being and non-being are therefore directly united in it, or, otherwise expressed, it is their undifferentiated unity.

§ 112

The analysis of the beginning would thus yield the notion of the unity of being and nothing — or, in a more reflected form, the unity of differentiatedness and non-differentiatedness, or the identity of identity and non-identity. This concept could be regarded as the first, purest, that is, most abstract definition of the absolute — as it would in fact be if we were at all concerned with the form of definitions and with the name of the absolute. In this sense, that abstract concept would be the first definition of this absolute and all further determinations and developments only more specific and richer definitions of it. But let those who are dissatisfied with being as a beginning because it passes over into nothing and so gives rise to the unity of being and nothing, let them see whether they find this beginning which begins with the general idea of a beginning and with its analysis (which, though of course correct, likewise leads to the unity of being and nothing), more satisfactory than the beginning with being.




Chapter 1 Being

A Being

§ 132

Being, pure being, without any further determination. In its indeterminate immediacy it is equal only to itself. It is also not unequal relatively to an other; it has no diversity within itself nor any with a reference outwards. It would not be held fast in its purity if it contained any determination or content which could be distinguished in it or by which it could be distinguished from an other. It is pure indeterminateness and emptiness. There is nothing to be intuited in it, if one can speak here of intuiting; or, it is only this pure intuiting itself. Just as little is anything to be thought in it, or it is equally only this empty thinking. Being, the indeterminate immediate, is in fact nothing, and neither more nor less than nothing.

B Nothing

§ 133

Nothing, pure nothing: it is simply equality with itself, complete emptiness, absence of all determination and content — undifferentiatedness in itself. In so far as intuiting or thinking can be mentioned here, it counts as a distinction whether something or nothing is intuited or thought. To intuit or think nothing has, therefore, a meaning; both are distinguished and thus nothing is (exists) in our intuiting or thinking; or rather it is empty intuition and thought itself, and the same empty intuition or thought as pure being. Nothing is, therefore, the same determination, or rather absence of determination, and thus altogether the same as, pure being.

C Becoming

1. Unity of Being and Nothing

§ 134

Pure Being and pure nothing are, therefore, the same. What is the truth is neither being nor nothing, but that being — does not pass over but has passed over — into nothing, and nothing into being. But it is equally true that they are not undistinguished from each other, that, on the contrary, they are not the same, that they are absolutely distinct, and yet that they are unseparated and inseparable and that each immediately vanishes in its opposite. Their truth is therefore, this movement of the immediate vanishing of the one into the other: becoming, a movement in which both are distinguished, but by a difference which has equally immediately resolved itself.

Remark 1: The Opposition of Being and Nothing in Ordinary Thinking

§ 135

Nothing is usually opposed to something; but the being of something is already determinate and is distinguished from another something; and so therefore the nothing which is opposed to the something is also the nothing of a particular something, a determinate nothing. Here, however, nothing is to be taken in its indeterminate simplicity. Should it be held more correct to oppose to being, non-being instead of nothing, there would be no objection to this so far as the result is concerned, for in non-being the relation to being is contained: both being and its negation are enunciated in a single term, nothing, as it is in becoming. But we are concerned first of all not with the form of opposition (with the form, that is, also of relation) but with the abstract, immediate negation: nothing, purely on its own account, negation devoid of any relations — what could also be expressed if one so wished merely by ‘not’.

§ 136

It was the Eleatics, above all Parmenides, who first enunciated the simple thought of pure being as the absolute and sole truth: only being is, and nothing absolutely is not, and in the surviving fragments of Parmenides this is enunciated with the pure enthusiasm of thought which has for the first time apprehended itself in its absolute abstraction. As we know, in the oriental systems, principally in Buddhism, nothing, the void, is the absolute principle. Against that simple and one-sided abstraction the deep-thinking Heraclitus brought forward the higher, total concept of becoming and said: being as little is, as nothing is, or, all flows, which means, all is a becoming. The popular, especially oriental proverbs, that all that exists has the germ of death in its very birth, that death, on the other hand, is the entrance into new life, express at bottom the same union of being and nothing. But these expressions have a substratum in which the transition takes place; being and nothing are held apart in time, are conceived as alternating in it, but are not thought in their abstraction and consequently, too, not so that they are in themselves absolutely the same.

§ 137

Ex nihilo nihil fit — is one of those propositions to which great importance was ascribed in metaphysics. In it is to be seen either only the empty tautology: nothing is nothing; or, if becoming is supposed to possess an actual meaning in it, then, since from nothing only nothing becomes, the proposition does not in fact contain becoming, for in it nothing remains nothing. Becoming implies that nothing does not remain nothing but passes into its other, into being. Later, especially Christian, metaphysics whilst rejecting the proposition that out of nothing comes nothing, asserted a transition from nothing into being; although it understood this proposition synthetically or merely imaginatively, yet even in the most imperfect union there is contained a point in which being and nothing coincide and their distinguishedness vanishes. The proposition: out of nothing comes nothing, nothing is just nothing, owes its peculiar importance to its opposition to becoming generally, and consequently also to its opposition to the creation of the world from nothing. Those who maintain the proposition: nothing is just nothing, and even grow heated in its defence, are unaware that in so doing they are subscribing to the abstract pantheism of the Eleatics, and also in principle to that of Spinoza. The philosophical view for which ‘being is only being, nothing is only nothing’, is a valid principle, merits the name of ‘system of identity’; this abstract identity is the essence of pantheism.

From the Organon to the computer


Chris Dixon



The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.

The evolution of computer science from mathematical logic culminated in the 1930s, with two landmark papers: Claude Shannon’s “A Symbolic Analysis of Switching and Relay Circuits,” and Alan Turing’s “On Computable Numbers, With an Application to the Entscheidungsproblem.” In the history of computer science, Shannon and Turing are towering figures, but the importance of the philosophers and logicians who preceded them is frequently overlooked.

A well-known history of computer science describes Shannon’s paper as “possibly the most important, and also the most noted, master’s thesis of the century.” Shannon wrote it as an electrical engineering student at MIT. His adviser, Vannevar Bush, built a prototype computer known as the Differential Analyzer that could rapidly calculate differential equations. The device was mostly mechanical, with subsystems controlled by electrical relays, which were organized in an ad hoc manner as there was not yet a systematic theory underlying circuit design. Shannon’s thesis topic came about when Bush recommended he try to discover such a theory.


Shannon’s paper is in many ways a typical electrical-engineering paper, filled with equations and diagrams of electrical circuits. What is unusual is that the primary reference was a 90-year-old work of mathematical philosophy, George Boole’s The Laws of Thought.

Today, Boole’s name is well known to computer scientists (many programming languages have a basic data type called a Boolean), but in 1938 he was rarely read outside of philosophy departments. Shannon himself encountered Boole’s work in an undergraduate philosophy class. “It just happened that no one else was familiar with both fields at the same time,” he commented later.

Boole is often described as a mathematician, but he saw himself as a philosopher, following in the footsteps of Aristotle. The Laws of Thought begins with a description of his goals, to investigate the fundamental laws of the operation of the human mind:

The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic … and, finally, to collect … some probable intimations concerning the nature and constitution of the human mind.

He then pays tribute to Aristotle, the inventor of logic, and the primary influence on his own work:

In its ancient and scholastic form, indeed, the subject of Logic stands almost exclusively associated with the great name of Aristotle. As it was presented to ancient Greece in the partly technical, partly metaphysical disquisitions of The Organon, such, with scarcely any essential change, it has continued to the present day.

Trying to improve on the logical work of Aristotle was an intellectually daring move. Aristotle’s logic, presented in his six-part book The Organon, occupied a central place in the scholarly canon for more than 2,000 years. It was widely believed that Aristotle had written almost all there was to say on the topic. The great philosopher Immanuel Kant commented that, since Aristotle, logic had been “unable to take a single step forward, and therefore seems to all appearance to be finished and complete.”

Aristotle’s central observation was that arguments were valid or not based on their logical structure, independent of the non-logical words involved. The most famous argument schema he discussed is known as the syllogism:

  • All men are mortal.
  • Socrates is a man.
  • Therefore, Socrates is mortal.

You can replace “Socrates” with any other object, and “mortal” with any other predicate, and the argument remains valid. The validity of the argument is determined solely by the logical structure. The logical words — “all,” “is,” “are,” and “therefore” — are doing all the work.

Aristotle also defined a set of basic axioms from which he derived the rest of his logical system:

  • An object is what it is (Law of Identity)
  • No statement can be both true and false (Law of Non-contradiction)
  • Every statement is either true or false (Law of the Excluded Middle)
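In two-valued (Boolean) logic, these three laws can be verified mechanically by exhausting both truth values — a minimal sketch in Python (the rendering is ours, not Aristotle's):

```python
# Aristotle's three axioms, checked exhaustively over the two truth
# values of classical logic.
for p in (True, False):
    assert p == p              # Law of Identity
    assert not (p and not p)   # Law of Non-contradiction
    assert p or not p          # Law of the Excluded Middle
```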

These axioms weren’t meant to describe how people actually think (that would be the realm of psychology), but how an idealized, perfectly rational person ought to think.

Aristotle’s axiomatic method influenced an even more famous book, Euclid’s Elements, which is estimated to be second only to the Bible in the number of editions printed.

Although ostensibly about geometry, the Elements became a standard textbook for teaching rigorous deductive reasoning. (Abraham Lincoln once said that he learned sound legal argumentation from studying Euclid.) In Euclid’s system, geometric ideas were represented as spatial diagrams. Geometry continued to be practiced this way until René Descartes, in the 1630s, showed that geometry could instead be represented as formulas. His Discourse on Method was the first mathematics text in the West to popularize what is now standard algebraic notation — x, y, z for variables, a, b, c for known quantities, and so on.

Descartes’s algebra allowed mathematicians to move beyond spatial intuitions to manipulate symbols using precisely defined formal rules. This shifted the dominant mode of mathematics from diagrams to formulas, leading to, among other things, the development of calculus, invented roughly 30 years after Descartes by, independently, Isaac Newton and Gottfried Leibniz.

Boole’s goal was to do for Aristotelean logic what Descartes had done for Euclidean geometry: free it from the limits of human intuition by giving it a precise algebraic notation. To give a simple example, when Aristotle wrote:

All men are mortal.

Boole replaced the words “men” and “mortal” with variables, and the logical words “all” and “are” with arithmetical operators:

x = x * y

Which could be interpreted as “Everything in the set x is also in the set y.”
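Boole's equation can be sketched over the two-element algebra {0, 1}, where a variable is 1 when an object belongs to the class and 0 when it does not, and multiplication plays the role of class intersection. The function name below is ours, for illustration:

```python
# A sketch of Boole's algebra over {0, 1}: multiplication acts as "and"
# (class intersection), so x == x * y says every member of x is in y.
def all_are(x, y):
    """Boole's rendering of 'All x are y': x = x * y."""
    return x == x * y

# 'All men are mortal': a man (x = 1) who is mortal (y = 1) satisfies
# the equation; a man who is not mortal (y = 0) would violate it.
assert all_are(1, 1)
assert not all_are(1, 0)
assert all_are(0, 0) and all_are(0, 1)  # non-men impose no constraint
```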

The Laws of Thought created a new scholarly field—mathematical logic—which in the following years became one of the most active areas of research for mathematicians and philosophers. Bertrand Russell called the Laws of Thought “the work in which pure mathematics was discovered.”

Shannon’s insight was that Boole’s system could be mapped directly onto electrical circuits. At the time, electrical circuits had no systematic theory governing their design. Shannon realized that the right theory would be “exactly analogous to the calculus of propositions used in the symbolic study of logic.”

He showed the correspondence between electrical circuits and Boolean operations in a simple chart:


Shannon’s mapping from electrical circuits to symbolic logic (University of Virginia)

This correspondence allowed computer scientists to import decades of work in logic and mathematics by Boole and subsequent logicians. In the second half of his paper, Shannon showed how Boolean logic could be used to create a circuit for adding two binary digits.

Shannon’s adder circuit (University of Virginia)

By stringing these adder circuits together, arbitrarily complex arithmetical operations could be constructed. These circuits would become the basic building blocks of what are now known as arithmetical logic units, a key component in modern computers.
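Such an adder can be sketched with nothing but Boolean operations. The decomposition below is the standard half-adder/full-adder construction, not Shannon's own diagram:

```python
# A binary adder built only from Boolean operations, in the spirit of
# Shannon's circuit (the gate decomposition here is the textbook one).
def half_adder(a, b):
    return a ^ b, a & b            # (sum, carry)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2             # (sum, carry_out)

def ripple_add(x_bits, y_bits):
    """Add two little-endian bit lists by chaining full adders."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 6 + 3 = 9, with bits written least-significant first.
assert ripple_add([0, 1, 1], [1, 1, 0]) == [1, 0, 0, 1]
```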

Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)

Since Shannon’s paper, a vast amount of progress has been made on the physical layer of computers, including the invention of the transistor in 1947 by William Shockley and his colleagues at Bell Labs. Transistors are dramatically improved versions of Shannon’s electrical relays — the best known way to physically encode Boolean operations. Over the next 70 years, the semiconductor industry packed more and more transistors into smaller spaces. A 2016 iPhone has about 3.3 billion transistors, each one a “relay switch” like those pictured in Shannon’s diagrams.

While Shannon showed how to map logic onto the physical world, Turing showed how to design computers in the language of mathematical logic. When Turing wrote his paper, in 1936, he was trying to solve “the decision problem,” first identified by the mathematician David Hilbert, who asked whether there was an algorithm that could determine whether an arbitrary mathematical statement is true or false. In contrast to Shannon’s paper, Turing’s paper is highly technical. Its primary historical significance lies not in its answer to the decision problem, but in the template for computer design it provided along the way.

Turing was working in a tradition stretching back to Gottfried Leibniz, the philosophical giant who developed calculus independently of Newton. Among Leibniz’s many contributions to modern thought, one of the most intriguing was the idea of a new language he called the “universal characteristic” that, he imagined, could represent all possible mathematical and scientific knowledge. Inspired in part by the 13th-century religious philosopher Ramon Llull, Leibniz postulated that the language would be ideographic like Egyptian hieroglyphics, except characters would correspond to “atomic” concepts of math and science. He argued this language would give humankind an “instrument” that could enhance human reason “to a far greater extent than optical instruments” like the microscope and telescope.

He also imagined a machine that could process the language, which he called the calculus ratiocinator.

If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, and say to each other: Calculemus—Let us calculate.

Leibniz didn’t get the opportunity to develop his universal language or the corresponding machine (although he did invent a relatively simple calculating machine, the stepped reckoner). The first credible attempt to realize Leibniz’s dream came in 1879, when the German philosopher Gottlob Frege published his landmark logic treatise Begriffsschrift. Inspired by Boole’s attempt to improve Aristotle’s logic, Frege developed a much more advanced logical system. The logic taught in philosophy and computer-science classes today—first-order or predicate logic—is only a slight modification of Frege’s system.

Frege is generally considered one of the most important philosophers of the 19th century. Among other things, he is credited with catalyzing what noted philosopher Richard Rorty called the “linguistic turn” in philosophy. As Enlightenment philosophy was obsessed with questions of knowledge, philosophy after Frege became obsessed with questions of language. His disciples included two of the most important philosophers of the 20th century—Bertrand Russell and Ludwig Wittgenstein.

The major innovation of Frege’s logic is that it much more accurately represented the logical structure of ordinary language. Among other things, Frege was the first to use quantifiers (“for every,” “there exists”) and to separate objects from predicates. He was also the first to develop what today are fundamental concepts in computer science like recursive functions and variables with scope and binding.

Frege’s formal language — what he called his “concept-script” — is made up of meaningless symbols that are manipulated by well-defined rules. The language is only given meaning by an interpretation, which is specified separately (this distinction would later come to be called syntax versus semantics). This turned logic into what the eminent computer scientists Allen Newell and Herbert Simon called “the symbol game,” “played with meaningless tokens according to certain purely syntactic rules.”

All meaning had been purged. One had a mechanical system about which various things could be proved. Thus progress was first made by walking away from all that seemed relevant to meaning and human symbols.

As Bertrand Russell famously quipped: “Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.”

An unexpected consequence of Frege’s work was the discovery of weaknesses in the foundations of mathematics. For example, Euclid’s Elements — considered the gold standard of logical rigor for thousands of years — turned out to be full of logical mistakes. Because Euclid used ordinary words like “line” and “point,” he — and centuries of readers — deceived themselves into making assumptions about sentences that contained those words. To give one relatively simple example, in ordinary usage, the word “line” implies that if you are given three distinct points on a line, one point must be between the other two. But when you define “line” using formal logic, it turns out “between-ness” also needs to be defined—something Euclid overlooked. Formal logic makes gaps like this easy to spot.

This realization created a crisis in the foundation of mathematics. If the Elements — the bible of mathematics — contained logical mistakes, what other fields of mathematics did too? What about sciences like physics that were built on top of mathematics?

The good news is that the same logical methods used to uncover these errors could also be used to correct them. Mathematicians started rebuilding the foundations of mathematics from the bottom up. In 1889, Giuseppe Peano developed axioms for arithmetic, and in 1899, David Hilbert did the same for geometry. Hilbert also outlined a program to formalize the remainder of mathematics, with specific requirements that any such attempt should satisfy, including:

  • Completeness: There should be a proof that all true mathematical statements can be proved in the formal system.
  • Decidability: There should be an algorithm for deciding the truth or falsity of any mathematical statement. (This is the “Entscheidungsproblem” or “decision problem” referenced in Turing’s paper.)

Rebuilding mathematics in a way that satisfied these requirements became known as Hilbert’s program. Up through the 1930s, this was the focus of a core group of logicians including Hilbert, Russell, Kurt Gödel, John Von Neumann, Alonzo Church, and, of course, Alan Turing.


Hilbert’s program proceeded on at least two fronts. On the first front, logicians created logical systems that tried to prove Hilbert’s requirements either satisfiable or not.

On the second front, mathematicians used logical concepts to rebuild classical mathematics. For example, Peano’s system for arithmetic starts with a simple function called the successor function which increases any number by one. He uses the successor function to recursively define addition, uses addition to recursively define multiplication, and so on, until all the operations of number theory are defined. He then uses those definitions, along with formal logic, to prove theorems about arithmetic.
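Peano's recursive construction translates almost line for line into code — a minimal sketch, in which the function names are ours and Python integers stand in for Peano numerals:

```python
# A sketch of Peano arithmetic: everything is built from zero and the
# successor function.
def succ(n):
    return n + 1   # Python ints stand in for Peano numerals

def add(m, n):
    # m + 0 = m ;  m + succ(n) = succ(m + n)
    return m if n == 0 else succ(add(m, n - 1))

def mul(m, n):
    # m * 0 = 0 ;  m * succ(n) = (m * n) + m
    return 0 if n == 0 else add(mul(m, n - 1), m)

assert add(2, 3) == 5
assert mul(4, 3) == 12
```

Each operation is defined only in terms of the one before it, exactly as in Peano's system: addition from succession, multiplication from addition.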

The historian Thomas Kuhn once observed that “in science, novelty emerges only with difficulty.” Logic in the era of Hilbert’s program was a tumultuous process of creation and destruction. One logician would build up an elaborate system and another would tear it down.

The favored tool of destruction was the construction of self-referential, paradoxical statements that showed the axioms from which they were derived to be inconsistent. A simple form of this “liar’s paradox” is the sentence:

This sentence is false.

If it is true then it is false, and if it is false then it is true, leading to an endless loop of self-contradiction.

Russell made the first notable use of the liar’s paradox in mathematical logic. He showed that Frege’s system allowed self-contradicting sets to be derived:

Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves.
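The paradox can be imitated with predicates in place of sets — an illustrative sketch (the encoding is ours): a "set" is a function reporting whether it contains its argument, and R contains exactly those sets that do not contain themselves:

```python
# An illustrative rendering of Russell's paradox using predicates in
# place of sets. R contains a set s exactly when s does not contain
# itself.
R = lambda s: not s(s)

# Asking whether R contains itself forces R(R) == not R(R); Python can
# only chase the self-reference until the interpreter gives up.
try:
    R(R)
except RecursionError:
    print("R(R) cannot settle on True or False")
```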

This became known as Russell’s paradox and was seen as a serious flaw in Frege’s achievement. (Frege himself was shocked by this discovery. He replied to Russell: “Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build my arithmetic.”)

Russell and his colleague Alfred North Whitehead put forth the most ambitious attempt to complete Hilbert’s program with the Principia Mathematica, published in three volumes between 1910 and 1913. The Principia’s method was so detailed that it took over 300 pages to get to the proof that 1+1=2.

Russell and Whitehead tried to resolve the paradox by introducing what they called type theory. The idea was to partition formal languages into multiple levels or types. Each level could make reference to levels below, but not to their own or higher levels. This resolved self-referential paradoxes by, in effect, banning self-reference. (This solution was not popular with logicians, but it did influence computer science — most modern computer languages have features inspired by type theory.)

Self-referential paradoxes ultimately showed that Hilbert’s program could never be successful. The first blow came in 1931, when Gödel published his now famous incompleteness theorem, which proved that any consistent logical system powerful enough to encompass arithmetic must also contain statements that are true but cannot be proven to be true. (Gödel’s incompleteness theorem is one of the few logical results that has been broadly popularized, thanks to books like Gödel, Escher, Bach and The Emperor’s New Mind).

The final blow came when Turing and Alonzo Church independently proved that no algorithm could exist that determined whether an arbitrary mathematical statement was true or false. (Church did this by inventing an entirely different system called the lambda calculus, which would later inspire computer languages like Lisp.) The answer to the decision problem was negative.

Turing’s key insight came in the first section of his famous 1936 paper, “On Computable Numbers, With an Application to the Entscheidungsproblem.” In order to rigorously formulate the decision problem (the “Entscheidungsproblem”), Turing first created a mathematical model of what it means to be a computer (today, machines that fit this model are known as “universal Turing machines”). As the logician Martin Davis describes it:

Turing knew that an algorithm is typically specified by a list of rules that a person can follow in a precise mechanical manner, like a recipe in a cookbook. He was able to show that such a person could be limited to a few extremely simple basic actions without changing the final outcome of the computation.

Then, by proving that no machine performing only those basic actions could determine whether or not a given proposed conclusion follows from given premises using Frege’s rules, he was able to conclude that no algorithm for the Entscheidungsproblem exists.

As a byproduct, he found a mathematical model of an all-purpose computing machine.
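The "few extremely simple basic actions" Davis mentions — read a symbol, write a symbol, move one cell, change state — are all a Turing machine can do. A minimal simulator (an illustrative sketch; the instruction encoding is ours, not Turing's):

```python
# A machine is just a transition table:
# (state, symbol) -> (symbol to write, head move, next state).

def run(table, tape, state="start", blank="_", steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: add 1 to a binary number, head starting at the left.
increment = {
    ("start", "0"): ("0", "R", "start"),  # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),  # fell off the right edge
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),   # overflow: prepend a 1
}

print(run(increment, "1011"))  # 1100  (11 + 1 = 12 in binary)
```

Despite the poverty of the primitives, tables like this can compute anything any computer can — that is the content of Turing's universality result.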

Next, Turing showed how a program could be stored inside a computer alongside the data upon which it operates. In today’s vocabulary, we’d say that he invented the “stored-program” architecture that underlies most modern computers:

Before Turing, the general supposition was that in dealing with such machines the three categories — machine, program, and data — were entirely separate entities. The machine was a physical object; today we would call it hardware. The program was the plan for doing a computation, perhaps embodied in punched cards or connections of cables in a plugboard. Finally, the data was the numerical input. Turing’s universal machine showed that the distinctness of these three categories is an illusion.

This was the first rigorous demonstration that any computing logic that could be encoded in hardware could also be encoded in software. The architecture Turing described was later dubbed the “Von Neumann architecture” — but modern historians generally agree it came from Turing, as, apparently, did Von Neumann himself.
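The collapse of the machine/program/data distinction can be seen in miniature in any toy machine whose instructions live in the same memory as its data — to the point where a program can rewrite itself. A sketch with an invented three-cell instruction format (illustrative only):

```python
# Instructions and data share one flat memory, as in a stored-program machine.

def execute(memory):
    pc = 0  # program counter: where in memory the next instruction sits
    while True:
        op, a, b = memory[pc], memory[pc + 1], memory[pc + 2]
        if op == "HALT":
            return memory
        elif op == "ADD":    # memory[b] += memory[a]
            memory[b] += memory[a]
        elif op == "STORE":  # memory[b] = a  -- can overwrite code itself!
            memory[b] = a
        pc += 3

# Cells 0-8 hold the program; cells 9-10 hold the data. Same memory.
memory = [
    "ADD", 9, 10,        # cells 0-2: memory[10] += memory[9]
    "STORE", "HALT", 6,  # cells 3-5: rewrite cell 6, patching the code below
    "ADD", 9, 10,        # cells 6-8: never runs as ADD; becomes HALT above
    20, 1,               # cells 9-10: the data
]
execute(memory)
print(memory[10])  # 21
```

Because the program is just data, one program can also read, transform, or generate another — which is all a compiler, loader, or Turing's universal machine really does.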

Although, on a technical level, Hilbert’s program was a failure, the efforts along the way demonstrated that large swaths of mathematics could be constructed from logic. And after Shannon and Turing’s insights—showing the connections between electronics, logic and computing—it was now possible to export this new conceptual machinery over to computer design.

During World War II, this theoretical work was put into practice, when government labs conscripted a number of elite logicians. Von Neumann joined the atomic bomb project at Los Alamos, where he worked on computer design to support physics research. In 1945, he wrote the specification of the EDVAC—the first stored-program, logic-based computer—which is generally considered the definitive source guide for modern computer design.

Turing joined a secret unit at Bletchley Park, northwest of London, where he helped design computers that were instrumental in breaking German codes. His most enduring contribution to practical computer design was his specification of the ACE, or Automatic Computing Engine.

As the first computers to be based on Boolean logic and stored-program architectures, the ACE and the EDVAC were similar in many ways. But they also had interesting differences, some of which foreshadowed modern debates in computer design. Von Neumann’s favored designs were similar to modern CISC (“complex”) processors, baking rich functionality into hardware. Turing’s design was more like modern RISC (“reduced”) processors, minimizing hardware complexity and pushing more work to software.

Von Neumann thought computer programming would be a tedious, clerical job. Turing, by contrast, said computer programming “should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.”

Since the 1940s, computer programming has become significantly more sophisticated. One thing that hasn’t changed is that it still primarily consists of programmers specifying rules for computers to follow. In philosophical terms, we’d say that computer programming has followed in the tradition of deductive logic, the branch of logic discussed above, which deals with the manipulation of symbols according to formal rules.

In the past decade or so, programming has started to change with the growing popularity of machine learning, which involves creating frameworks for machines to learn via statistical inference. This has brought programming closer to the other main branch of logic, inductive logic, which deals with inferring rules from specific instances.
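The contrast between the two styles can be made concrete with a toy example (ours, not the essay's): in the deductive style the programmer writes the rule down; in the inductive style the machine estimates the rule from instances.

```python
# Deductive style: the rule y = 2x + 1 is specified by the programmer.
def rule(x):
    return 2 * x + 1

# Inductive style: given example (x, y) pairs, infer the rule's parameters
# by ordinary least-squares fitting.
def fit_line(points):
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    return slope, mean_y - slope * mean_x

examples = [(x, rule(x)) for x in range(10)]  # instances produced by the rule
slope, intercept = fit_line(examples)
print(slope, intercept)  # recovers 2.0 and 1.0 from the data alone
```

Machine learning scales this same move up: instead of two parameters of a line, millions of parameters of a network are inferred from data.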

Today’s most promising machine learning techniques use neural networks, which were first invented in the 1940s by Warren McCulloch and Walter Pitts, whose idea was to develop a calculus for neurons that could, like Boolean logic, be used to construct computer circuits. Neural networks remained esoteric until decades later, when they were combined with statistical techniques that allowed them to improve as they were fed more data. Recently, as computers have become increasingly adept at handling large data sets, these techniques have produced remarkable results. Programming in the future will likely mean exposing neural networks to the world and letting them learn.
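The McCulloch-Pitts neuron makes their "calculus for neurons" concrete: a unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and with fixed weights it reproduces the Boolean gates of the previous sections. A minimal sketch:

```python
# A McCulloch-Pitts neuron: fires when the weighted input sum
# reaches the threshold. Weights are fixed by hand here; learning
# (adjusting weights from data) came later, with the perceptron.

def neuron(weights, threshold):
    def fire(*inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return fire

AND = neuron([1, 1], threshold=2)
OR  = neuron([1, 1], threshold=1)
NOT = neuron([-1],   threshold=0)

# Neurons compose into circuits: XOR = (a OR b) AND NOT (a AND b).
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

This is exactly the bridge the essay describes: the same threshold units serve both as logic gates and, once their weights are learned rather than fixed, as the building blocks of modern neural networks.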

This would be a fitting second act to the story of computers. Logic began as a way to understand the laws of thought. It then helped create machines that could reason according to the rules of deductive logic. Today, deductive and inductive logic are being combined to create machines that both reason and learn. What began, in Boole’s words, with an investigation “concerning the nature and constitution of the human mind,” could result in the creation of new minds—artificial minds—that might someday match or even exceed our own.




Keywords: existence*, anxiety, self-understanding, philosophy of science

Our Dasein, in the community of researchers, teachers, and students, is determined by science. In every science, when we pursue its ownmost aim, we relate to beings themselves. This privileged, world-directed relation to beings rests on a freely chosen attitude of human existence and is guided by that attitude. Of course, pre-scientific and extra-scientific human action and conduct also relate to beings. Science is distinguished in that, in its own way, it expressly and solely lets the thing itself have the say.

Only beings are to be investigated and, besides that, nothing; beings alone and, further, nothing; solely beings and, beyond them, nothing. How do matters stand with this Nothing? Science expressly rejects and abandons the Nothing as null and insignificant. And yet: do we not acknowledge the Nothing precisely when we abandon it in this way? Science wants to know nothing of the Nothing. But this much is just as certain: wherever science attempts to state its own essence, it calls on the Nothing for help. It has recourse to what it rejects. What divided essence unveils itself here? When we reflect on our present existence, which science determines, we land in the midst of a conflict. This dispute has already set a questioning in motion. All that remains is to formulate the question: how do matters stand with the Nothing?

Already the first approach to the question shows something unusual. In this questioning we posit the Nothing from the outset as something that "is" thus and so, as a being. Yet it is precisely from beings that the Nothing is absolutely different. Accordingly, no answer to the question is possible from the start. For the answer necessarily takes this form: the Nothing is this and that ("is")*. With regard to the Nothing, question and answer alike are senseless. The commonly cited ground rule of thinking, the law of non-contradiction, general "logic", rejects the question. For thinking, which is always thinking about something, would, as thinking about the Nothing, have to act against its own essence. But may we challenge the rule of "logic"? Is not the intellect the master in the question concerning the Nothing?

After all, only with its help can we define the Nothing at all and approach it as a problem, even if this problem consumes itself. For the Nothing is the negation of the totality of beings, that which is simply non-existent. But negation, according to the reigning and unassailable doctrine of "logic", is a specific act of the intellect. Is there a Nothing only because there is the Not, that is, negation? Or is it the other way around: is there negation and the Not only because there is the Nothing? This has not been decided; it has not even been raised as an explicit question. We asserted: the Nothing is more original than the Not and negation. If this assertion is correct, then the possibility of negation as an act of the intellect, and with it the intellect itself, somehow depends on the Nothing. How, then, can the intellect presume to decide about the Nothing?

Perhaps it will turn out in the end that the apparent senselessness of question and answer concerning the Nothing rests only on the blind obstinacy of a restless intellect. The Nothing is the complete negation of the totality of beings. Does this characterization of the Nothing not, in the end, point in the one direction from which, and from which alone, it can come our way? For the totality of beings to become wholly deniable as such, in which denial the Nothing itself could then show itself, the totality of beings must be given beforehand.

As certain as it is that we can never grasp the whole of beings absolutely in itself, it is just as certain that we nevertheless find ourselves placed amid beings that are somehow unveiled as a whole. Attunement, in which one "is" in this way or that, makes it possible for us, attuned through it, to dwell amid beings as a whole. This disposition* of mood not only unveils, each time in its own way, beings as a whole; this unveiling is at the same time the fundamental occurrence of our Dasein*. What we call "feelings" in this sense is not simply a fleeting accompaniment of our thinking and willing conduct.

Yet precisely when moods lead us in this way before beings as a whole, they conceal from us the Nothing we are seeking. We will now be even less inclined to think that the negation of beings as a whole, as made manifest in mood, places us before the Nothing. That can happen, in a correspondingly original way, only in a mood which, according to its ownmost sense of unveiling, makes the Nothing manifest.

Does there occur in human Dasein an attunement in which one is brought face to face with the Nothing itself? This occurrence is possible only for moments and, rarely enough though it happens, it is actual in the fundamental mood of anxiety. In anxiety, as we say, "we feel not at home". We cannot say what it is that makes one feel this way; one feels so as a whole. All things, and we ourselves, sink into indifference. This receding of beings as a whole, which besets us in anxiety, oppresses us. No hold remains. What remains, and what comes over us as beings slip away, is this "none". Anxiety makes manifest the Nothing. In anxiety, beings as a whole become groundless. In what sense does this happen? We surely do not want to claim that anxiety annihilates beings and thereby leaves the Nothing over for us. How could it, when anxiety finds itself precisely in utter powerlessness before beings as a whole?

The Nothing shows itself properly with beings, and in beings as a whole that is slipping away; it makes these beings manifest as the absolutely other, over against the Nothing. Only in the clear night of the Nothing of anxiety does the original openness of beings as such arise: that they are beings, and not Nothing. But this "and not Nothing", which we add in our speech, is no supplementary explanation; it is what makes the manifestness of beings possible in the first place. Only on the ground of the original manifestation of the Nothing can human Dasein approach beings and enter into them.

Dasein means: being held out into the Nothing. Because Dasein holds itself out into the Nothing, it is always already beyond beings as a whole. This being-beyond-beings we call transcendence*. If Dasein, in the ground of its essence, did not transcend, that is, if it did not hold itself out in advance into the Nothing, it could never relate to beings, and hence not to itself either. Without the original manifestation of the Nothing there is no selfhood and no freedom. The Nothing is immediately and for the most part concealed from us in its originality. By what is it concealed? By the fact that, in a definite way, we are wholly lost in beings. The more we turn toward beings in our dealings, the less we let them slip away as such, and the more we turn away from the Nothing. And all the more surely do we press onto the public surface of Dasein.

What could attest more penetratingly to the constant and far-reaching, though concealed, manifestation of the Nothing in our Dasein than negation? The Nothing is the origin of negation, not the other way around. And if the power of the intellect is thus broken in the field of questioning into the Nothing and Being, then the fate of "logic" within philosophy is thereby decided as well. The idea of "logic" dissolves in the whirl of a more original questioning.

Dasein's being held out into the Nothing, on the ground of hidden anxiety, is the surpassing of beings as a whole: transcendence. Our questioning into the Nothing places metaphysics itself before us. Metaphysics questions beyond beings, and it does so in order to recover beings as such and as a whole for comprehension. The Nothing is no longer the indeterminate counterpart of beings; it unveils itself as belonging to the Being of beings. For Being itself is finite in essence, and it manifests itself only in the transcendence of a Dasein held out into the Nothing.

The simplicity and power of scientific Dasein lie in its relating, in a distinctive way, to beings themselves and to them alone. Science would like, with a superior gesture, to abandon the Nothing. Now, however, in the questioning into the Nothing it becomes clear that scientific Dasein is possible only if it holds itself out in advance into the Nothing. It understands itself in what it is only if it does not abandon the Nothing. The alleged sobriety and superiority of science become ridiculous if it does not take the Nothing seriously. Only because the Nothing manifests itself can science make beings themselves the object of investigation.

Beings in their full strangeness come over us only because the Nothing manifests itself in the ground of beings. Only when the strangeness of beings torments us does it awaken and draw our wonder. Only on the ground of wonder, that is, of the manifestation of the Nothing, does the "Why?" arise. Only because the Why as such is possible can we ask about grounds in a definite way and give grounds. And only because we can question and give grounds is the fate of the researcher placed in the hands of our existence. The question of the Nothing puts us, the questioners, ourselves in question. It is a metaphysical question.

Human Dasein can relate to beings only if it holds itself out into the Nothing. Going beyond beings occurs in the existence of Dasein. But this going-beyond is metaphysics itself. In this lies the following: metaphysics belongs to "the nature of man". It is neither a branch of academic philosophy nor a field for arbitrary notions. Metaphysics is the fundamental occurrence in Dasein. Insofar as man exists, philosophizing in a certain way happens. Philosophy is the setting-in-motion of metaphysics, that in which metaphysics comes to itself and to its explicit tasks. Philosophy gets under way only when our own existence, in its own proper way, leaps into the fundamental possibilities of Dasein as a whole. For this leap the following is decisive: first, making room for beings as a whole; then, releasing ourselves into the Nothing, that is, freeing ourselves from the idols everyone has and to which everyone is wont to steal away; and finally, letting this free floating hold sway, so that we constantly swing back to the fundamental question of metaphysics, which the Nothing itself compels: Why are there beings at all, and not rather Nothing?


existence – the human being as a self-understanding, finite being

"is" – Hungarian does not use the copular verb in the third person present tense, hence the bracketed "is" in the text

disposition – Heidegger's term: attunement, in which the human being's dependence on the world is expressed

Dasein – Heidegger's term: human existence

transcendence – beyond experience


Copy. Excerpts. Source: the work below. In the text as actually scanned, I corrected only a single character, to ő, in 120 places. Title in its original spelling:









ISBN 978 963 7181 43 6