Being and Nothing in Hegel

(Pertaining to the Heidegger texts cited before…)

From The Science of Logic

§ 111

Further, in the beginning, being and nothing are present as distinguished from each other; for the beginning points to something else — it is a non-being which carries a reference to being as to an other; that which begins, as yet is not, it is only on the way to being.

The being contained in the beginning is, therefore, a being which removes itself from non-being or sublates it as something opposed to it.

But again, that which begins already is, but equally, too, is not as yet. The opposites, being and non-being, are therefore directly united in it, or, otherwise expressed, it is their undifferentiated unity.

§ 112

The analysis of the beginning would thus yield the notion of the unity of being and nothing — or, in a more reflected form, the unity of differentiatedness and non-differentiatedness, or the identity of identity and non-identity. This concept could be regarded as the first, purest, that is, most abstract definition of the absolute — as it would in fact be if we were at all concerned with the form of definitions and with the name of the absolute. In this sense, that abstract concept would be the first definition of this absolute and all further determinations and developments only more specific and richer definitions of it. But let those who are dissatisfied with being as a beginning because it passes over into nothing and so gives rise to the unity of being and nothing, let them see whether they find this beginning which begins with the general idea of a beginning and with its analysis (which, though of course correct, likewise leads to the unity of being and nothing), more satisfactory than the beginning with being.


Chapter 1 Being

A Being

§ 132

Being, pure being, without any further determination. In its indeterminate immediacy it is equal only to itself. It is also not unequal relatively to an other; it has no diversity within itself nor any with a reference outwards. It would not be held fast in its purity if it contained any determination or content which could be distinguished in it or by which it could be distinguished from an other. It is pure indeterminateness and emptiness. There is nothing to be intuited in it, if one can speak here of intuiting; or, it is only this pure intuiting itself. Just as little is anything to be thought in it, or it is equally only this empty thinking. Being, the indeterminate immediate, is in fact nothing, and neither more nor less than nothing.

B Nothing

§ 133

Nothing, pure nothing: it is simply equality with itself, complete emptiness, absence of all determination and content — undifferentiatedness in itself. In so far as intuiting or thinking can be mentioned here, it counts as a distinction whether something or nothing is intuited or thought. To intuit or think nothing has, therefore, a meaning; both are distinguished and thus nothing is (exists) in our intuiting or thinking; or rather it is empty intuition and thought itself, and the same empty intuition or thought as pure being. Nothing is, therefore, the same determination, or rather absence of determination, and thus altogether the same as, pure being.

C Becoming

1. Unity of Being and Nothing

§ 134

Pure Being and pure nothing are, therefore, the same. What is the truth is neither being nor nothing, but that being — does not pass over but has passed over — into nothing, and nothing into being. But it is equally true that they are not undistinguished from each other, that, on the contrary, they are not the same, that they are absolutely distinct, and yet that they are unseparated and inseparable and that each immediately vanishes in its opposite. Their truth is, therefore, this movement of the immediate vanishing of the one into the other: becoming, a movement in which both are distinguished, but by a difference which has equally immediately resolved itself.

Remark 1: The Opposition of Being and Nothing in Ordinary Thinking

§ 135

Nothing is usually opposed to something; but the being of something is already determinate and is distinguished from another something; and so therefore the nothing which is opposed to the something is also the nothing of a particular something, a determinate nothing. Here, however, nothing is to be taken in its indeterminate simplicity. Should it be held more correct to oppose to being, non-being instead of nothing, there would be no objection to this so far as the result is concerned, for in non-being the relation to being is contained: both being and its negation are enunciated in a single term, nothing, as it is in becoming. But we are concerned first of all not with the form of opposition (with the form, that is, also of relation) but with the abstract, immediate negation: nothing, purely on its own account, negation devoid of any relations — what could also be expressed if one so wished merely by ‘not’.

§ 136

It was the Eleatics, above all Parmenides, who first enunciated the simple thought of pure being as the absolute and sole truth: only being is, and nothing absolutely is not, and in the surviving fragments of Parmenides this is enunciated with the pure enthusiasm of thought which has for the first time apprehended itself in its absolute abstraction. As we know, in the oriental systems, principally in Buddhism, nothing, the void, is the absolute principle. Against that simple and one-sided abstraction the deep-thinking Heraclitus brought forward the higher, total concept of becoming and said: being as little is, as nothing is, or, all flows, which means, all is a becoming. The popular, especially oriental proverbs, that all that exists has the germ of death in its very birth, that death, on the other hand, is the entrance into new life, express at bottom the same union of being and nothing. But these expressions have a substratum in which the transition takes place; being and nothing are held apart in time, are conceived as alternating in it, but are not thought in their abstraction and consequently, too, not so that they are in themselves absolutely the same.

§ 137

Ex nihilo nihil fit — is one of those propositions to which great importance was ascribed in metaphysics. In it is to be seen either only the empty tautology: nothing is nothing; or, if becoming is supposed to possess an actual meaning in it, then, since from nothing only nothing becomes, the proposition does not in fact contain becoming, for in it nothing remains nothing. Becoming implies that nothing does not remain nothing but passes into its other, into being. Later, especially Christian, metaphysics whilst rejecting the proposition that out of nothing comes nothing, asserted a transition from nothing into being; although it understood this proposition synthetically or merely imaginatively, yet even in the most imperfect union there is contained a point in which being and nothing coincide and their distinguishedness vanishes. The proposition: out of nothing comes nothing, nothing is just nothing, owes its peculiar importance to its opposition to becoming generally, and consequently also to its opposition to the creation of the world from nothing. Those who maintain the proposition: nothing is just nothing, and even grow heated in its defence, are unaware that in so doing they are subscribing to the abstract pantheism of the Eleatics, and also in principle to that of Spinoza. The philosophical view for which ‘being is only being, nothing is only nothing’, is a valid principle, merits the name of ‘system of identity’; this abstract identity is the essence of pantheism.

From the Organon to the computer

Excerpts

Chris Dixon

https://www.theatlantic.com/technology/archive/2017/03/aristotle-computer/518697/?utm_source=msn

 

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.

The evolution of computer science from mathematical logic culminated in the 1930s, with two landmark papers: Claude Shannon’s “A Symbolic Analysis of Relay and Switching Circuits,” and Alan Turing’s “On Computable Numbers, With an Application to the Entscheidungsproblem.” In the history of computer science, Shannon and Turing are towering figures, but the importance of the philosophers and logicians who preceded them is frequently overlooked.

A well-known history of computer science describes Shannon’s paper as “possibly the most important, and also the most noted, master’s thesis of the century.” Shannon wrote it as an electrical engineering student at MIT. His adviser, Vannevar Bush, built a prototype computer known as the Differential Analyzer that could rapidly calculate differential equations. The device was mostly mechanical, with subsystems controlled by electrical relays, which were organized in an ad hoc manner as there was not yet a systematic theory underlying circuit design. Shannon’s thesis topic came about when Bush recommended he try to discover such a theory.


Shannon’s paper is in many ways a typical electrical-engineering paper, filled with equations and diagrams of electrical circuits. What is unusual is that the primary reference was a 90-year-old work of mathematical philosophy, George Boole’s The Laws of Thought.

Today, Boole’s name is well known to computer scientists (many programming languages have a basic data type called a Boolean), but in 1938 he was rarely read outside of philosophy departments. Shannon himself encountered Boole’s work in an undergraduate philosophy class. “It just happened that no one else was familiar with both fields at the same time,” he commented later.

Boole is often described as a mathematician, but he saw himself as a philosopher, following in the footsteps of Aristotle. The Laws of Thought begins with a description of his goals, to investigate the fundamental laws of the operation of the human mind:

The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic … and, finally, to collect … some probable intimations concerning the nature and constitution of the human mind.

He then pays tribute to Aristotle, the inventor of logic, and the primary influence on his own work:

In its ancient and scholastic form, indeed, the subject of Logic stands almost exclusively associated with the great name of Aristotle. As it was presented to ancient Greece in the partly technical, partly metaphysical disquisitions of The Organon, such, with scarcely any essential change, it has continued to the present day.

Trying to improve on the logical work of Aristotle was an intellectually daring move. Aristotle’s logic, presented in his six-part book The Organon, occupied a central place in the scholarly canon for more than 2,000 years. It was widely believed that Aristotle had written almost all there was to say on the topic. The great philosopher Immanuel Kant commented that, since Aristotle, logic had been “unable to take a single step forward, and therefore seems to all appearance to be finished and complete.”

Aristotle’s central observation was that arguments were valid or not based on their logical structure, independent of the non-logical words involved. The most famous argument schema he discussed is known as the syllogism:

  • All men are mortal.
  • Socrates is a man.
  • Therefore, Socrates is mortal.

You can replace “Socrates” with any other object, and “mortal” with any other predicate, and the argument remains valid. The validity of the argument is determined solely by the logical structure. The logical words — “all,” “is,” “are,” and “therefore” — are doing all the work.

Aristotle also defined a set of basic axioms from which he derived the rest of his logical system:

  • An object is what it is (Law of Identity)
  • No statement can be both true and false (Law of Non-contradiction)
  • Every statement is either true or false (Law of the Excluded Middle)

These axioms weren’t meant to describe how people actually think (that would be the realm of psychology), but how an idealized, perfectly rational person ought to think.
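The last two of these laws lend themselves to a mechanical check. As a small illustration (mine, not the article's), here is how they look once statements are reduced to bare truth values:

```python
# Minimal sketch: check the laws of non-contradiction and excluded middle
# over both possible truth values, using Python booleans as stand-ins
# for statements.
for p in (True, False):
    assert not (p and not p)  # Law of Non-contradiction
    assert p or not p         # Law of the Excluded Middle
print("Both laws hold for every truth value.")
```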

Aristotle’s axiomatic method influenced an even more famous book, Euclid’s Elements, which is estimated to be second only to the Bible in the number of editions printed.

Although ostensibly about geometry, the Elements became a standard textbook for teaching rigorous deductive reasoning. (Abraham Lincoln once said that he learned sound legal argumentation from studying Euclid.) In Euclid’s system, geometric ideas were represented as spatial diagrams. Geometry continued to be practiced this way until René Descartes, in the 1630s, showed that geometry could instead be represented as formulas. His Discourse on Method was the first mathematics text in the West to popularize what is now standard algebraic notation — x, y, z for variables, a, b, c for known quantities, and so on.

Descartes’s algebra allowed mathematicians to move beyond spatial intuitions to manipulate symbols using precisely defined formal rules. This shifted the dominant mode of mathematics from diagrams to formulas, leading to, among other things, the development of calculus, invented roughly 30 years after Descartes by, independently, Isaac Newton and Gottfried Leibniz.

Boole’s goal was to do for Aristotelean logic what Descartes had done for Euclidean geometry: free it from the limits of human intuition by giving it a precise algebraic notation. To give a simple example, when Aristotle wrote:

All men are mortal.

Boole replaced the words “men” and “mortal” with variables, and the logical words “all” and “are” with arithmetical operators:

x = x * y

Which could be interpreted as “Everything in the set x is also in the set y.”
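To make the set reading concrete, here is a minimal sketch (my illustration, using Python sets rather than Boole's own notation; the example members are made up):

```python
# "All men are mortal" as Boole's x = xy, read as a statement about sets:
# intersecting x ("men") with y ("mortal things") gives back x unchanged.
men = {"Socrates", "Plato"}                   # hypothetical extension of x
mortals = {"Socrates", "Plato", "a sparrow"}  # hypothetical extension of y

all_men_are_mortal = (men == men & mortals)   # & is set intersection
print(all_men_are_mortal)  # True: every member of x is also a member of y
```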

The Laws of Thought created a new scholarly field—mathematical logic—which in the following years became one of the most active areas of research for mathematicians and philosophers. Bertrand Russell called the Laws of Thought “the work in which pure mathematics was discovered.”

Shannon’s insight was that Boole’s system could be mapped directly onto electrical circuits. At the time, electrical circuits had no systematic theory governing their design. Shannon realized that the right theory would be “exactly analogous to the calculus of propositions used in the symbolic study of logic.”

He showed the correspondence between electrical circuits and Boolean operations in a simple chart:

Shannon’s mapping from electrical circuits to symbolic logic (University of Virginia)

This correspondence allowed computer scientists to import decades of work in logic and mathematics by Boole and subsequent logicians. In the second half of his paper, Shannon showed how Boolean logic could be used to create a circuit for adding two binary digits.

Shannon’s adder circuit (University of Virginia)

By stringing these adder circuits together, arbitrarily complex arithmetical operations could be constructed. These circuits would become the basic building blocks of what are now known as arithmetical logic units, a key component in modern computers.
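A rough sketch of the idea (my illustration, not Shannon's own circuit notation): a one-bit adder built from Boolean operations, then strung together into a multi-bit adder.

```python
def full_adder(a, b, carry_in):
    """Add three bits using only AND (&), OR (|), XOR (^); return (sum, carry)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(x_bits, y_bits):
    """Add two equal-length, little-endian bit lists by chaining full adders."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 (bits 1,1,0) + 5 (bits 1,0,1) = 8 (bits 0,0,0,1)
print(ripple_add([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1]
```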

Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)

Since Shannon’s paper, a vast amount of progress has been made on the physical layer of computers, including the invention of the transistor in 1947 by William Shockley and his colleagues at Bell Labs. Transistors are dramatically improved versions of Shannon’s electrical relays — the best known way to physically encode Boolean operations. Over the next 70 years, the semiconductor industry packed more and more transistors into smaller spaces. A 2016 iPhone has about 3.3 billion transistors, each one a “relay switch” like those pictured in Shannon’s diagrams.

While Shannon showed how to map logic onto the physical world, Turing showed how to design computers in the language of mathematical logic. When Turing wrote his paper, in 1936, he was trying to solve “the decision problem,” first identified by the mathematician David Hilbert, who asked whether there was an algorithm that could determine whether an arbitrary mathematical statement is true or false. In contrast to Shannon’s paper, Turing’s paper is highly technical. Its primary historical significance lies not in its answer to the decision problem,  but in the template for computer design it provided along the way.

Turing was working in a tradition stretching back to Gottfried Leibniz, the philosophical giant who developed calculus independently of Newton. Among Leibniz’s many contributions to modern thought, one of the most intriguing was the idea of a new language he called the “universal characteristic” that, he imagined, could represent all possible mathematical and scientific knowledge. Inspired in part by the 13th-century religious philosopher Ramon Llull, Leibniz postulated that the language would be ideographic like Egyptian hieroglyphics, except characters would correspond to “atomic” concepts of math and science. He argued this language would give humankind an “instrument” that could enhance human reason “to a far greater extent than optical instruments” like the microscope and telescope.

He also imagined a machine that could process the language, which he called the calculus ratiocinator.

If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, and say to each other: Calculemus—Let us calculate.

Leibniz didn’t get the opportunity to develop his universal language or the corresponding machine (although he did invent a relatively simple calculating machine, the stepped reckoner). The first credible attempt to realize Leibniz’s dream came in 1879, when the German philosopher Gottlob Frege published his landmark logic treatise Begriffsschrift. Inspired by Boole’s attempt to improve Aristotle’s logic, Frege developed a much more advanced logical system. The logic taught in philosophy and computer-science classes today—first-order or predicate logic—is only a slight modification of Frege’s system.

Frege is generally considered one of the most important philosophers of the 19th century. Among other things, he is credited with catalyzing what noted philosopher Richard Rorty called the “linguistic turn” in philosophy. As Enlightenment philosophy was obsessed with questions of knowledge, philosophy after Frege became obsessed with questions of language. His disciples included two of the most important philosophers of the 20th century—Bertrand Russell and Ludwig Wittgenstein.

The major innovation of Frege’s logic is that it much more accurately represented the logical structure of ordinary language. Among other things, Frege was the first to use quantifiers (“for every,” “there exists”) and to separate objects from predicates. He was also the first to develop what today are fundamental concepts in computer science like recursive functions and variables with scope and binding.
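For a flavor of what quantifiers and predicates buy you, here is a toy sketch (mine, with made-up names; it is not Frege's notation) of "for every x, if x is a man then x is mortal," evaluated over a small domain:

```python
# A tiny domain of objects, with predicates kept separate from the objects.
domain = ["Socrates", "Plato", "a stone"]
is_man = lambda x: x in {"Socrates", "Plato"}
is_mortal = lambda x: x != "a stone"

# "For every x, Man(x) -> Mortal(x)"  and  "There exists an x with Man(x)"
universal = all((not is_man(x)) or is_mortal(x) for x in domain)
existential = any(is_man(x) for x in domain)
print(universal, existential)  # True True
```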

Frege’s formal language — what he called his “concept-script” — is made up of meaningless symbols that are manipulated by well-defined rules. The language is only given meaning by an interpretation, which is specified separately (this distinction would later come to be called syntax versus semantics). This turned logic into what the eminent computer scientists Allen Newell and Herbert Simon called “the symbol game,” “played with meaningless tokens according to certain purely syntactic rules.”

All meaning had been purged. One had a mechanical system about which various things could be proved. Thus progress was first made by walking away from all that seemed relevant to meaning and human symbols.

As Bertrand Russell famously quipped: “Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.”

An unexpected consequence of Frege’s work was the discovery of weaknesses in the foundations of mathematics. For example, Euclid’s Elements — considered the gold standard of logical rigor for thousands of years — turned out to be full of logical mistakes. Because Euclid used ordinary words like “line” and “point,” he — and centuries of readers — deceived themselves into making assumptions about sentences that contained those words. To give one relatively simple example, in ordinary usage, the word “line” implies that if you are given three distinct points on a line, one point must be between the other two. But when you define “line” using formal logic, it turns out “between-ness” also needs to be defined—something Euclid overlooked. Formal logic makes gaps like this easy to spot.

This realization created a crisis in the foundation of mathematics. If the Elements — the bible of mathematics — contained logical mistakes, what other fields of mathematics did too? What about sciences like physics that were built on top of mathematics?

The good news is that the same logical methods used to uncover these errors could also be used to correct them. Mathematicians started rebuilding the foundations of mathematics from the bottom up. In 1889, Giuseppe Peano developed axioms for arithmetic, and in 1899, David Hilbert did the same for geometry. Hilbert also outlined a program to formalize the remainder of mathematics, with specific requirements that any such attempt should satisfy, including:

  • Completeness: There should be a proof that all true mathematical statements can be proved in the formal system.
  • Decidability: There should be an algorithm for deciding the truth or falsity of any mathematical statement. (This is the “Entscheidungsproblem” or “decision problem” referenced in Turing’s paper.)

Rebuilding mathematics in a way that satisfied these requirements became known as Hilbert’s program. Up through the 1930s, this was the focus of a core group of logicians including Hilbert, Russell, Kurt Gödel, John Von Neumann, Alonzo Church, and, of course, Alan Turing.


Hilbert’s program proceeded on at least two fronts. On the first front, logicians created logical systems that tried to prove Hilbert’s requirements either satisfiable or not.

On the second front, mathematicians used logical concepts to rebuild classical mathematics. For example, Peano’s system for arithmetic starts with a simple function called the successor function which increases any number by one. He uses the successor function to recursively define addition, uses addition to recursively define multiplication, and so on, until all the operations of number theory are defined. He then uses those definitions, along with formal logic, to prove theorems about arithmetic.
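A compact sketch of that construction (my illustration, using ordinary Python integers rather than a formal proof system):

```python
def successor(n):
    """The primitive 'next number' operation Peano starts from."""
    return n + 1

def add(m, n):
    """m + 0 = m;  m + S(n) = S(m + n)."""
    return m if n == 0 else successor(add(m, n - 1))

def multiply(m, n):
    """m * 0 = 0;  m * S(n) = (m * n) + m."""
    return 0 if n == 0 else add(multiply(m, n - 1), m)

print(add(2, 3), multiply(2, 3))  # 5 6
```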

The historian Thomas Kuhn once observed that “in science, novelty emerges only with difficulty.” Logic in the era of Hilbert’s program was a tumultuous process of creation and destruction. One logician would build up an elaborate system and another would tear it down.

The favored tool of destruction was the construction of self-referential, paradoxical statements that showed the axioms from which they were derived to be inconsistent. A simple form of this  “liar’s paradox” is the sentence:

This sentence is false.

If it is true then it is false, and if it is false then it is true, leading to an endless loop of self-contradiction.

Russell made the first notable use of the liar’s paradox in mathematical logic. He showed that Frege’s system allowed self-contradicting sets to be derived:

Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves.

This became known as Russell’s paradox and was seen as a serious flaw in Frege’s achievement. (Frege himself was shocked by this discovery. He replied to Russell: “Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build my arithmetic.”)
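One way to get a feel for the contradiction (a toy illustration of mine, not Russell's or Frege's formalism) is to model a "set" as a membership predicate, that is, as a function that answers whether something belongs to it:

```python
empty_set = lambda x: False          # a set that contains nothing

# Russell's R: the set of all sets that are NOT members of themselves.
R = lambda s: not s(s)

print(R(empty_set))  # True: the empty set is not a member of itself, so it is in R

# Asking whether R is a member of itself forces R(R) = not R(R); Python
# recurses forever and raises RecursionError, mirroring the contradiction.
try:
    R(R)
except RecursionError:
    print("R(R) has no consistent truth value")
```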

Russell and his colleague Alfred North Whitehead put forth the most ambitious attempt to complete Hilbert’s program with the Principia Mathematica, published in three volumes between 1910 and 1913. The Principia’s method was so detailed that it took over 300 pages to get to the proof that 1+1=2.

Russell and Whitehead tried to resolve Frege’s paradox by introducing what they called type theory. The idea was to partition formal languages into multiple levels or types. Each level could make reference to levels below, but not to their own or higher levels. This resolved self-referential paradoxes by, in effect, banning self-reference. (This solution was not popular with logicians, but it did influence computer science — most modern computer languages have features inspired by type theory.)

Self-referential paradoxes ultimately showed that Hilbert’s program could never be successful. The first blow came in 1931, when Gödel published his now famous incompleteness theorem, which proved that any consistent logical system powerful enough to encompass arithmetic must also contain statements that are true but cannot be proven to be true. (Gödel’s incompleteness theorem is one of the few logical results that has been broadly popularized, thanks to books like Gödel, Escher, Bach and The Emperor’s New Mind).

The final blow came when Turing and Alonzo Church independently proved that no algorithm could exist that determined whether an arbitrary mathematical statement was true or false. (Church did this by inventing an entirely different system called the lambda calculus, which would later inspire computer languages like Lisp.) The answer to the decision problem was negative.
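For a taste of the lambda calculus (the Church-numeral encoding is standard; this particular Python rendering is my own sketch), numbers and addition can be built out of nothing but functions:

```python
zero = lambda f: lambda x: x                       # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # apply f one more time
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

to_int = lambda n: n(lambda k: k + 1)(0)           # decode to an ordinary integer
two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))  # 5
```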

Turing’s key insight came in the first section of his famous 1936 paper, “On Computable Numbers, With an Application to the Entscheidungsproblem.” In order to rigorously formulate the decision problem (the “Entscheidungsproblem”), Turing first created a mathematical model of what it means to be a computer (today, machines that fit this model are known as “universal Turing machines”). As the logician Martin Davis describes it:

Turing knew that an algorithm is typically specified by a list of rules that a person can follow in a precise mechanical manner, like a recipe in a cookbook. He was able to show that such a person could be limited to a few extremely simple basic actions without changing the final outcome of the computation.

Then, by proving that no machine performing only those basic actions could determine whether or not a given proposed conclusion follows from given premises using Frege’s rules, he was able to conclude that no algorithm for the Entscheidungsproblem exists.

As a byproduct, he found a mathematical model of an all-purpose computing machine.
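The following is a minimal sketch (mine, with a made-up rule table) of the kind of machine Davis describes: a finite list of rules that reads and writes one symbol at a time on an unbounded tape. This toy machine just appends a 1 to a string of 1s and halts:

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    cells = dict(enumerate(tape))                    # sparse, unbounded tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")                # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]  # look up the applicable rule
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# In state "start": skip over 1s; on reaching a blank, write 1 and halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111"))  # 1111
```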

Next, Turing showed how a program could be stored inside a computer alongside the data upon which it operates. In today’s vocabulary, we’d say that he invented the “stored-program” architecture that underlies most modern computers:

Before Turing, the general supposition was that in dealing with such machines the three categories — machine, program, and data — were entirely separate entities. The machine was a physical object; today we would call it hardware. The program was the plan for doing a computation, perhaps embodied in punched cards or connections of cables in a plugboard. Finally, the data was the numerical input. Turing’s universal machine showed that the distinctness of these three categories is an illusion.

This was the first rigorous demonstration that any computing logic that could be encoded in hardware could also be encoded in software. The architecture Turing described was later dubbed the “Von Neumann architecture” — but modern historians generally agree it came from Turing, as, apparently, did Von Neumann himself.

Although, on a technical level, Hilbert’s program was a failure, the efforts along the way demonstrated that large swaths of mathematics could be constructed from logic. And after Shannon and Turing’s insights—showing the connections between electronics, logic and computing—it was now possible to export this new conceptual machinery over to computer design.

During World War II, this theoretical work was put into practice, when government labs conscripted a number of elite logicians. Von Neumann joined the atomic bomb project at Los Alamos, where he worked on computer design to support physics research. In 1945, he wrote the specification of the EDVAC—the first stored-program, logic-based computer—which is generally considered the definitive source guide for modern computer design.

Turing joined a secret unit at Bletchley Park, northwest of London, where he helped design computers that were instrumental in breaking German codes. His most enduring contribution to practical computer design was his specification of the ACE, or Automatic Computing Engine.

As the first computers to be based on Boolean logic and stored-program architectures, the ACE and the EDVAC were similar in many ways. But they also had interesting differences, some of which foreshadowed modern debates in computer design. Von Neumann’s favored designs were similar to modern CISC (“complex”) processors, baking rich functionality into hardware. Turing’s design was more like modern RISC (“reduced”) processors, minimizing hardware complexity and pushing more work to software.

Von Neumann thought computer programming would be a tedious, clerical job. Turing, by contrast, said computer programming “should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.”

Since the 1940s, computer programming has become significantly more sophisticated. One thing that hasn’t changed is that it still primarily consists of programmers specifying rules for computers to follow. In philosophical terms, we’d say that computer programming has followed in the tradition of deductive logic, the branch of logic discussed above, which deals with the manipulation of symbols according to formal rules.

In the past decade or so, programming has started to change with the growing popularity of machine learning, which involves creating frameworks for machines to learn via statistical inference. This has brought programming closer to the other main branch of logic, inductive logic, which deals with inferring rules from specific instances.

Today’s most promising machine learning techniques use neural networks, which were first invented in the 1940s by Warren McCulloch and Walter Pitts, whose idea was to develop a calculus for neurons that could, like Boolean logic, be used to construct computer circuits. Neural networks remained esoteric until decades later when they were combined with statistical techniques, which allowed them to improve as they were fed more data. Recently, as computers have become increasingly adept at handling large data sets, these techniques have produced remarkable results. Programming in the future will likely mean exposing neural networks to the world and letting them learn.
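A McCulloch-Pitts-style unit is simple enough to sketch directly (my illustration, not their original notation): a neuron that fires when the weighted sum of its binary inputs reaches a threshold, and that can therefore reproduce the Boolean gates discussed earlier:

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    neuron([a],    [-1],   threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```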

This would be a fitting second act to the story of computers. Logic began as a way to understand the laws of thought. It then helped create machines that could reason according to the rules of deductive logic. Today, deductive and inductive logic are being combined to create machines that both reason and learn. What began, in Boole’s words, with an investigation “concerning the nature and constitution of the human mind,” could result in the creation of new minds—artificial minds—that might someday match or even exceed our own.

 

MARTIN HEIDEGGER (1889-1976): WHAT IS METAPHYSICS? (1929)

(excerpts)

Keywords: existence*, anxiety, self-understanding, philosophy of science

Our Dasein (jelenvalólét) – in the community of researchers, teachers, and students – is determined by science. In every science, when we pursue its ownmost aim, we relate to beings themselves. This privileged world-relation directed at beings rests on a freely chosen attitude of human existence, and it is guided by that attitude. Of course, pre-scientific and extra-scientific human action and conduct also relate to beings. Science is distinguished by the fact that, in its own specific way, it expressly and solely lets the matter itself have its say.

Only beings are to be investigated and – nothing else; beings alone and – nothing further; solely beings and beyond them – nothing. How does it stand with this Nothing? Science expressly rejects the Nothing and abandons it as what is null and insignificant. But do we not acknowledge the Nothing precisely when we abandon it in this way? Science wants to know nothing of the Nothing. Yet it is equally certain that wherever science attempts to formulate its own essence, it calls on the Nothing for help. It lays claim to what it rejects. What divided essence is unveiled here? When we reflect on our present existence, an existence determined by science, we land in the middle of a conflict. This dispute has already set a questioning in motion. All that is needed now is to formulate the question: how does it stand with the Nothing?

Already the first approach to the question shows something unusual. In asking it, we posit the Nothing from the outset as something that “is” thus and so – as a being. But it is precisely beings from which the Nothing is utterly different. Accordingly, no answer to the question is possible from the outset, for the answer must necessarily take the form: the Nothing “is”* this and that. With regard to the Nothing, question and answer are equally senseless. The habitually cited basic rule of thinking, the principle of avoiding contradiction, general “logic”, rejects the question. For thinking, which is always thinking about something, would, as thinking about the Nothing, have to act against its own essence. But may we challenge the rule of “logic”? Is not the intellect the master in this question concerning the Nothing?

After all, it is only with its help that we can define the Nothing at all and approach it as a problem, even if this problem consumes itself. For the Nothing is the negation of the totality of beings, that which simply is not. Negation, however, according to the reigning and unassailable doctrine of “logic”, is a specific act of the intellect. Is there the Nothing only because there is the Not, that is, negation? Or is it the other way around: is there negation and the Not only because there is the Nothing? This has not been decided; indeed it has not even been raised as an explicit question. We assert: the Nothing is more original than the Not and negation. If this assertion is correct, then the possibility of negation as an act of the intellect, and with it the intellect itself, depends in some way on the Nothing. How then can the intellect presume to decide about the Nothing?

Perhaps it will turn out in the end that the apparent senselessness of question and answer concerning the Nothing rests on the blind obstinacy of a restless intellect. The Nothing is the complete negation of the totality of beings. Does not this characterization of the Nothing point, after all, in the one direction from which alone it can come to meet us? In order for the totality of beings to become negatable as such, in a negation in which the Nothing itself could then show itself, that totality must be given beforehand.

As certain as it is that we can never grasp the whole of beings absolutely in itself, it is just as certain that we nevertheless find ourselves placed amid beings that are somehow disclosed as a whole. The attunement in which one “is” in this or that way makes it possible, attuned through it, to dwell amid beings as a whole. This disposition* of mood not only discloses, each time in its own way, beings as a whole; this disclosure is at the same time the fundamental occurrence of our Dasein*. What we call “feelings” in this sense is not simply a fleeting accompaniment of our thinking and willing behaviour.

Yet precisely when moods lead us in this way before beings as a whole, they conceal from us the Nothing we are seeking. We shall now be even less inclined to suppose that the negation of beings as a whole, as they are made manifest in mood, places us before the Nothing. Such a thing can happen, in a correspondingly original way, only in a mood which, according to its ownmost sense of disclosure, makes the Nothing manifest.

Does there occur in human Dasein an attunement in which it is brought face to face with the Nothing itself? This occurrence is possible only for moments and, although rarely enough, it is actual in the fundamental mood of anxiety. In anxiety, we say, “one feels uncanny”, not at home. We cannot say what it is before which one feels this way; one feels this way as a whole. All things, and we ourselves, sink into indifference. This receding of beings as a whole, which besets us in anxiety, oppresses us. No hold remains. What remains and comes over us as beings slip away is this “none”. Anxiety makes the Nothing manifest. In anxiety beings as a whole become groundless. In what sense does this happen? We surely do not mean to assert that anxiety annihilates beings and so leaves the Nothing over for us. How could it, when anxiety finds itself precisely in complete powerlessness before beings as a whole.

The Nothing shows itself in its own way together with beings and in beings as a whole that is slipping away; it makes these beings manifest as the wholly other, over against the Nothing. Only in the clear night of the Nothing of anxiety does the original openness of beings as such arise: that they are beings, and not Nothing. But this “and not Nothing” that we add in our speech is no supplementary explanation; it is what first makes possible the manifestness of beings at all. Only on the ground of the original manifestness of the Nothing can human Dasein approach beings and enter into them.

Dasein means: being held out into the Nothing. Because it holds itself out into the Nothing, Dasein is always already beyond beings as a whole. This being-beyond-beings we call transcendence*. If Dasein, in the ground of its existence, did not transcend, which here means: if it did not hold itself out in advance into the Nothing, it could never relate to beings, and hence not to itself either. Without the original manifestness of the Nothing there is no selfhood and no freedom. The Nothing is immediately and for the most part concealed from us in its originality. By what is it concealed? By the fact that in a definite way we are completely absorbed in beings. The more we turn toward beings in our dealings, the less we let them slip away as such, and the more we turn away from the Nothing. And the more surely we press ourselves onto the public surface of Dasein.

What could attest more penetratingly to the constant and far-reaching, though concealed, manifestness of the Nothing in our Dasein than negation? The Nothing is the origin of negation, and not the other way around. And if the power of the intellect is thus broken in the field of questioning concerning the Nothing and being, then the fate of “logic” within philosophy is thereby decided as well. The idea of “logic” dissolves in the whirl of a more original questioning.

Dasein’s being held out into the Nothing, on the ground of hidden anxiety, is the surpassing of beings as a whole: transcendence. Our questioning concerning the Nothing places metaphysics itself before us. Metaphysics asks out beyond beings, and it does so in order to win back beings as such, as a whole, for comprehension. The Nothing is no longer the indeterminate counterpart of beings; it is unveiled as belonging to the very being of beings. For being itself is in its essence finite, and it manifests itself only in the transcendence of a Dasein held out into the Nothing.

The simplicity and power of scientific Dasein consist in relating in a distinctive way to beings themselves and to them alone. Science would like to dismiss the Nothing with a superior gesture. Now, however, in the questioning concerning the Nothing it becomes clear that this scientific Dasein is possible only if it holds itself out into the Nothing in advance. It understands itself for what it is only if it does not give up the Nothing. The supposed sobriety and superiority of science become ridiculous if it does not take the Nothing seriously. Only because the Nothing is manifest can science make beings themselves the object of investigation.

Beings break in upon us in all their strangeness only because, in the ground of beings, the Nothing is manifest. Only when the strangeness of beings oppresses us does it awaken our wonder and draw it to itself. Only on the ground of wonder, that is, on the ground of the manifestness of the Nothing, does the “Why?” come forth. Only because we can ask about grounds in a certain way, and can give grounds, is the destiny of the researcher placed in the hands of our existence. The question concerning the Nothing puts us, the questioners, in question. It is a metaphysical question.

Human Dasein can relate to beings only if it holds itself out into the Nothing. Going beyond beings happens in the existence of Dasein. But this going-beyond is metaphysics itself. In this lies the following: metaphysics belongs to “the nature of man”. It is neither a branch of academic philosophy nor a field of arbitrary notions. Metaphysics is the fundamental occurrence in Dasein. Insofar as the human being exists, philosophizing happens in a certain way. Philosophy is the setting-in-motion of metaphysics, that in which metaphysics comes to itself and to its explicit tasks. Philosophy gets under way only when our own existence, in its own proper way, leaps into the fundamental possibilities of Dasein as a whole. For this leap the following is decisive: first, to make room for beings as a whole; then, to release ourselves into the Nothing, that is, to free ourselves from the idols everyone has and to which everyone is in the habit of stealing away; and finally, to let ourselves swing free, so that we constantly swing back into the fundamental question of metaphysics, which the Nothing itself compels: Why are there beings at all, and not rather Nothing?

Notes

existence – the human being as a self-understanding, finite being

“is” (van) – Hungarian does not use the copular verb in the third person of the present tense

disposition – Heidegger’s term: the attunement in which the human being’s dependence on the world finds expression

Dasein (jelenvalólét) – Heidegger’s term: human existence

transcendence – that which lies beyond experience

 

Copy. Excerpts. Source: as below. In the scanned original I corrected only a single character, to ő, at 120 places in the text. Title in its original spelling:

FILOZÓFIAI

SZÖVEGGYŐJTEMÉNY

UNIVERSITY LECTURE NOTES

Edited by: KOVÁCS ZOLTÁN

LISZT FERENC ZENEMŰVÉSZETI EGYETEM (Liszt Ferenc Academy of Music)

2009

 

ISBN 978 963 7181 43 6

RESPONSIBLE FOR PUBLICATION: THE RECTOR OF THE LISZT FERENC ZENEMŰVÉSZETI EGYETEM

PRINT RUN: 100

PRINTED BY: MESTERPRINT KFT., BUDAPEST

Another view on human origins

“Masters of the Planet” by Ian Tattersall:

Tattersall maintains that the notion of human evolution as a linear trudge from primitivism to perfection is incorrect. Whereas the Darwinian approach to evolution may be viewed as a fine-tuning of characteristics guided by natural selection, Tattersall takes a more generalist view. Tattersall claims that individual organisms are mind-bogglingly complex and integrated mechanisms; they succeed or fail as the sum of their parts, and not because of a particular characteristic. In terms of human evolution, Tattersall believes the process was more a matter of evolutionary experimentation in which a new species entered the environment, competed with other life forms, and either succeeded, failed, or became extinct within that environment: “To put it in perspective, consider the fact that the history of diversity and competition among human species began some five million years ago when there were at least four different human species living on the same landscape. Yet as a result of evolutionary experimentation, only one species has prospered and survived. One human species is now the only twig on what was once a big branching bush of different species.” This idea differs from the typical view that Homo sapiens is the pinnacle of an evolutionary ladder that humanity’s ancestors laboriously climbed.
(Source: Citizendium, Wikipedia)

Niels Bohr (1885-1962)

One of the foremost scientists of the 20th century, Niels Henrik David Bohr was the first to apply the quantum theory, which restricts the energy of a system to certain discrete values, to the problem of atomic and molecular structure. He was a guiding spirit and major contributor to the development of quantum physics.

Bohr distinguished himself at the University of Copenhagen, winning a gold medal from the Royal Danish Academy of Sciences and Letters for his theoretical analysis of and precise experiments on the vibrations of water jets as a way of determining surface tension. …Bohr moved to Manchester in March 1912 and joined Ernest Rutherford’s group studying the structure of the atom.

At Manchester Bohr worked on the theoretical implications of the nuclear model of the atom recently proposed by Rutherford. Bohr was among the first to see the importance of the atomic number, which indicates the position of an element in the periodic table and is equal to the number of natural units of electric charge on the nuclei of its atoms. He recognized that the various physical and chemical properties of the elements depend on the electrons moving around the nuclei of their atoms and that only the atomic weight and possible radioactive behaviour are determined by the small but massive nucleus itself. Rutherford’s nuclear atom was both mechanically and electromagnetically unstable, but Bohr imposed stability on it by introducing the new and not yet clarified ideas of the quantum theory being developed by Max Planck, Albert Einstein, and other physicists. Departing radically from classical physics, Bohr postulated that any atom could exist only in a discrete set of stable or stationary states, each characterized by a definite value of its energy.

The most impressive result of Bohr’s essay at a quantum theory of the atom was the way it accounted for the series of lines observed in the spectrum of light emitted by atomic hydrogen. He was able to determine the frequencies of these spectral lines to considerable accuracy from his theory, expressing them in terms of the charge and mass of the electron and Planck’s constant (the quantum of action, designated by the symbol h). To do this, Bohr also postulated that an atom would not emit radiation while it was in one of its stable states but rather only when it made a transition between states. The frequency of the radiation so emitted would be equal to the difference in energy between those states divided by Planck’s constant. This meant that the atom could neither absorb nor emit radiation continuously but only in finite steps or quantum jumps. It also meant that the various frequencies of the radiation emitted by an atom were not equal to the frequencies with which the electrons moved within the atom, a bold idea that some of Bohr’s contemporaries found particularly difficult to accept. The consequences of Bohr’s theory, however, were confirmed by new spectroscopic measurements and other experiments.
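A rough numerical sketch of that rule (my illustration, using the standard Bohr-model energy formula rather than anything quoted from this account): the frequency of a spectral line is the energy difference between two stationary states divided by Planck's constant.

```python
h = 6.62607015e-34       # Planck's constant (J*s)
m_e = 9.1093837015e-31   # electron mass (kg)
e = 1.602176634e-19      # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def energy(n):
    """Energy of the n-th stationary state of hydrogen, in joules (Bohr model)."""
    return -m_e * e**4 / (8 * eps0**2 * h**2 * n**2)

def line_frequency(n_upper, n_lower):
    """Frequency of light emitted in the jump n_upper -> n_lower, in hertz."""
    return (energy(n_upper) - energy(n_lower)) / h

# First line of the Balmer series (n = 3 -> 2): about 4.6e14 Hz, the red H-alpha line.
print(f"{line_frequency(3, 2):.3e} Hz")
```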

Through the early 1920s, Bohr concentrated his efforts on two interrelated sets of problems. …As Bohr put it in 1923, “notwithstanding the fundamental departure from the ideas of the classical theories of mechanics and electrodynamics involved in these postulates, it has been possible to trace a connection between the radiation emitted by the atom and the motion of the particles which exhibits a far-reaching analogy to that claimed by the classical ideas of the origin of radiation.” Indeed, in a suitable limit the frequencies calculated by the two very different methods would agree exactly.

His work on atomic theory was recognized by the Nobel Prize for Physics in 1922.

..During the next few years, a genuine quantum mechanics was created, the new synthesis that Bohr had been expecting. The new quantum mechanics required more than just a mathematical structure of calculating; it required a physical interpretation. That physical interpretation came out of the intense discussions between Bohr and the steady stream of visitors to his world capital of atomic physics, discussions on how the new mathematical description of nature was to be linked with the procedures and the results of experimental physics.

Bohr expressed the characteristic feature of quantum physics in his principle of complementarity, which “implies the impossibility of any sharp separation between the behaviour of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear.” As a result, “evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects.” This interpretation of the meaning of quantum physics, which implied an altered view of the meaning of physical explanation, gradually came to be accepted by the majority of physicists. The most famous and most outspoken dissenter, however, was Einstein.

..In his account of these discussions, however, Bohr emphasized how important Einstein’s challenging objections had been to the evolution of his own ideas and what a deep and lasting impression they had made on him.

During the 1930s Bohr continued to work on the epistemological problems raised by the quantum theory and also contributed to the new field of nuclear physics. His concept of the atomic nucleus, which he likened to a liquid droplet, was a key step in the understanding of many nuclear processes. In particular, it played an essential part in 1939 in the understanding of nuclear fission (the splitting of a heavy nucleus into two parts, almost equal in mass, with the release of a tremendous amount of energy).

To cite this page:
“Niels Bohr”
Britannica Online.
<http://www.eb.com:180/cgi-bin/g?DocF=macro/5000/79.html>
[Accessed 10 May 1998].

Rutherford

Ernest Rutherford (1871-1937)

Ernest Rutherford, Baron Rutherford of Nelson, nuclear physicist and Nobel Prize winner, is to be ranked in fame with Sir Isaac Newton and Michael Faraday. Indeed, just as Faraday is called the “father of electricity,” so a similar description might be applied to Rutherford in relation to nuclear energy. He contributed substantially to the understanding of the disintegration and transmutation of the radioactive elements, discovered and named the particles expelled from radium, identified the alpha particle as a helium atom and with its aid evolved the nuclear theory of atomic structure, and used that particle to produce the first artificial disintegration of elements. In the universities of McGill, Manchester, and Cambridge he led and inspired two generations of physicists who–to use his own words–“turned out the facts of Nature,” and in the Cavendish Laboratory his “boys” discovered the neutron and artificial disintegration by accelerated particles.

.. A scholarship allowed him to enroll in Canterbury College, Christchurch, from where he graduated with a B.A. in 1892 and an M.A. in 1893 with first-class honours in mathematics and physics. Financing himself by part-time teaching, he stayed for a fifth year to do research in physics, studying the properties of iron in high-frequency alternating magnetic fields. He found that he could detect the electromagnetic waves–wireless waves–newly discovered by the German physicist Heinrich Hertz, even after they had passed through brick walls.

On his arrival in Cambridge in 1895, Rutherford began to work under J.J. Thomson, professor of experimental physics at the university’s Cavendish Laboratory.

Rutherford made a great impression on colleagues in the Cavendish Laboratory, and Thomson held him in high esteem. He also aroused jealousies in the more conservative members of the Cavendish fraternity, as is clear from his letters to Mary. In December 1895, when Röntgen discovered X rays, Thomson asked Rutherford to join him in a study of the effects of passing a beam of X rays through a gas. They discovered that the X rays produced large quantities of electrically charged particles, or carriers of positive and negative electricity, and that these carriers, or ionized atoms, recombined to form neutral molecules. Working on his own, Rutherford then devised a technique for measuring the velocity and rate of recombination of these positive and negative ions. The published papers on this subject remain classics to the present day.

In 1896 the French physicist Henri Becquerel discovered that uranium emitted rays that could fog a photographic plate as did X rays. Rutherford soon showed that they also ionized air but that they were different from X rays, consisting of two distinct types of radiation. He named them alpha rays, highly powerful in producing ionization but easily absorbed, and beta rays, which produced less ionization but had more penetrating ability. He thought they must be extremely minute particles of matter.

Toward the end of the 19th century many scientists thought that no new advances in physics remained to be made. Yet within three years Rutherford succeeded in marking out an entirely new branch of physics called radioactivity. He soon discovered that thorium or its compounds disintegrated into a gas that in turn disintegrated into an unknown “active deposit,” likewise radioactive. Rutherford and a young chemist, Frederick Soddy, then investigated three groups of radioactive elements–radium, thorium, and actinium. They concluded in 1902 that radioactivity was a process in which atoms of one element spontaneously disintegrated into atoms of an entirely different element, which also remained radioactive.

Rutherford’s outstanding work won him recognition by the Royal Society, which elected him a fellow in 1903 and awarded him the Rumford medal in 1904. In his book Radio-activity he summarized in 1904 the results of research in that subject. The evidence he marshaled for radioactivity was that it is unaffected by external conditions, such as temperature and chemical change; that more heat is produced than in an ordinary chemical reaction; that new types of matter are produced at a rate in equilibrium with the rate of decay; and that the new products possess distinct chemical properties.

With the ingenious apparatus that he and his research assistant, Hans Geiger, had invented, they counted the particles as they were emitted one by one from a known amount of radium; and they also measured the total charge collected from which the charge on each particle could be detected. Combining this result with the rate of production of helium from radium, determined by Rutherford and the American chemist Bertram Borden Boltwood, Rutherford was able to deduce Avogadro’s number (the constant number of molecules in the molecular weight in grams of any substance) in the most direct manner conceivable. With his student Thomas D. Royds he proved in 1908 that the alpha particle really is a helium atom.

In 1911 Rutherford made his greatest contribution to science with his nuclear theory of the atom. He had observed in Montreal that fast-moving alpha particles on passing through thin plates of mica produced diffuse images on photographic plates, whereas a sharp image was produced when there was no obstruction to the passage of the rays. He considered that the particles must be deflected through small angles as they passed close to atoms of the mica, but calculation showed that an electric field of 100,000,000 volts per centimetre was necessary to deflect such particles traveling at 20,000 kilometres per second, a most astonishing conclusion. This phenomenon of scattering was found in the counting experiments with Geiger; Rutherford suggested to Geiger and another student, Ernest Marsden, that it would be of interest to examine whether any particles were scattered backward–i.e., deflected through an angle of more than 90 degrees. To their astonishment, a few particles in every 10,000 were indeed so scattered, emerging from the same side of a gold foil as that on which they had entered. After a number of calculations, Rutherford came to the conclusion that the requisite intense electric field to cause such a large deflection could occur only if all the positive charge in the atom, and therefore almost all the mass, were concentrated on a very small central nucleus some 10,000 times smaller in diameter than that of the entire atom. The positive charge on the nucleus would therefore be balanced by an equal charge on all the electrons distributed somehow around the nucleus.
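The scale of the nucleus can be estimated with a standard textbook argument (an editorial sketch, not a quotation from the article): in a head-on encounter the alpha particle is turned back where its kinetic energy has all been converted into Coulomb potential energy, at the distance of closest approach

$$d = \frac{1}{4\pi\varepsilon_0}\,\frac{2Ze^2}{\tfrac{1}{2}m_\alpha v^2}.$$

Taking $Z = 79$ for gold, the quoted speed $v \approx 2 \times 10^{7}\ \text{m s}^{-1}$, and $m_\alpha \approx 6.6 \times 10^{-27}$ kg gives $d$ of the order of $10^{-14}$ m, roughly ten thousand times smaller than the $\sim 10^{-10}$ m size of the atom, in agreement with the figure in the text.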

To cite this page:
“Ernest Rutherford”
Britannica Online.
<http://www.eb.com:180/cgi-bin/g?DocF=macro/5005/61.html>
[Accessed 10 May 1998]. (T.E.A./Ed.)

Maxwell

James Clerk Maxwell (1831-1879) is regarded by most modern physicists as the scientist of the 19th century who had the greatest influence on 20th-century physics; he is ranked with Sir Isaac Newton and Albert Einstein for the fundamental nature of his contributions. In 1931, at the 100th anniversary of Maxwell’s birth, Einstein described the change in the conception of reality in physics that resulted from Maxwell’s work as “the most profound and the most fruitful that physics has experienced since the time of Newton.” The concept of electromagnetic radiation originated with Maxwell, and his field equations, based on Michael Faraday’s observations of the electric and magnetic lines of force, paved the way for Einstein’s special theory of relativity, which established the equivalence of mass and energy. Maxwell’s ideas also ushered in the other major innovation of 20th-century physics, the quantum theory. His description of electromagnetic radiation led to the development (according to classical theory) of the ultimately unsatisfactory law of heat radiation, which prompted Max Planck’s formulation of the quantum hypothesis–i.e., the theory that radiant-heat energy is emitted only in finite amounts, or quanta. The interaction between electromagnetic radiation and matter, integral to Planck’s hypothesis, in turn has played a central role in the development of the theory of the structure of atoms and molecules.

This period (his years at King’s College London) was the most fruitful of his career: his two classic papers on the electromagnetic field were published, and his demonstration of colour photography took place. He was elected to the Royal Society in 1861. His theoretical and experimental work on the viscosity of gases also was undertaken during these years and culminated in a lecture to the Royal Society in 1866. He supervised the experimental determination of electrical units for the British Association for the Advancement of Science, and this work in measurement and standardization led to the establishment of the National Physical Laboratory. He also measured the ratio of electromagnetic and electrostatic units of electricity and confirmed that it was in satisfactory agreement with the velocity of light as predicted by his theory.

It was Maxwell’s research on electromagnetism that established him among the great scientists of history. In the preface to his Treatise on Electricity and Magnetism (1873), the best exposition of his theory, Maxwell stated that his major task was to convert Faraday’s physical ideas into mathematical form. In attempting to illustrate Faraday’s law of induction (that a changing magnetic field gives rise to an induced electric field), Maxwell constructed a mechanical model. He found that the model gave rise to a corresponding “displacement current” in the dielectric medium, which could then be the seat of transverse waves. On calculating the velocity of these waves, he found that they were very close to the velocity of light. Maxwell concluded that he could “scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena.”
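The numerical coincidence Maxwell noticed can be stated compactly in modern notation (an editorial addition, not Maxwell’s own formulation): his equations predict transverse waves travelling in vacuum at

$$c = \frac{1}{\sqrt{\mu_0\,\varepsilon_0}} \approx 3.0 \times 10^{8}\ \text{m s}^{-1},$$

where $\mu_0$ and $\varepsilon_0$ are the magnetic and electric constants. This is the ratio of electromagnetic to electrostatic units whose measurement, mentioned above, agreed with the measured speed of light.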

Maxwell’s theory suggested that electromagnetic waves could be generated in a laboratory, a possibility first demonstrated by Heinrich Hertz in 1887, eight years after Maxwell’s death. The resulting radio industry with its many applications thus has its origin in Maxwell’s publications.

The Maxwell relations of equality between different partial derivatives of thermodynamic functions are included in every standard textbook on thermodynamics (see Thermodynamics, Principles of). Though Maxwell did not originate the modern kinetic theory of gases, he was the first to apply the methods of probability and statistics in describing the properties of an assembly of molecules. Thus he was able to demonstrate that the velocities of molecules in a gas, previously assumed to be equal, must follow a statistical distribution (known subsequently as the Maxwell-Boltzmann distribution law). In later papers Maxwell investigated the transport properties of gases–i.e., the effect of changes in temperature and pressure on viscosity, thermal conductivity, and diffusion.
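For reference, the distribution named here has the following standard form (modern notation, not given in the article): the probability density for a molecule of mass $m$ in a gas at temperature $T$ to have speed $v$ is

$$f(v) = 4\pi\left(\frac{m}{2\pi kT}\right)^{3/2} v^{2}\,e^{-mv^{2}/2kT},$$

with $k$ the Boltzmann constant. The speeds are therefore spread statistically about a most probable value $v_p = \sqrt{2kT/m}$ rather than being equal, which is exactly the point made above.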

Newton, Einstein, gravitation

Newton, Sir Isaac
(b. Dec. 25, 1642 [Jan. 4, 1643, New Style], Woolsthorpe, Lincolnshire, Eng.–d. March 20 [March 31], 1727, London), English physicist and mathematician who invented the infinitesimal calculus, laid the foundations of modern physical optics, and formulated three laws of motion that became basic principles of modern physics and led to his theory of universal gravitation. He is regarded as one of the greatest scientists of all time.
Newton received a bachelor’s degree at Trinity College, Cambridge, in 1665. During the next two years while the university was closed because of plague, Newton returned home, where he thought deeply about how certain natural phenomena might be explained and formulated the bases of his first major discoveries. He returned in 1667 as a fellow to Trinity College, where he became Lucasian professor of mathematics in 1669.
In 1666 Newton discovered the nature of white light by passing a beam of sunlight through a prism. He invented the calculus about 1669 but did not formally publish his ideas until 35 years later. He built the first reflecting telescope in 1668. Newton’s most famous publication, Philosophiae Naturalis Principia Mathematica (1687; Mathematical Principles of Natural Philosophy), contains his work on the laws of motion, the theory of tides, and the theory of gravitation. His laws of motion laid the basis for classical mechanics, and the theory of gravity was particularly important in working out the motions of the planets. The Principia has been called one of the most important works of science ever written. In another book, Opticks (1704), Newton described his theory of light as well as the calculus and other mathematical researches.
Newton served as warden of the Royal Mint from 1696 and became president of the Royal Society in 1703, holding this office until his death. In 1705 he became the first British scientist ever to receive a knighthood for his researches.
To cite this page:
“Newton, Sir Isaac”
Britannica Online.
<http://www.eb.com:180/cgi-bin/g?DocF=micro/715/27.html>
[Accessed 13 May 1998].

Gravitation

Gravitation is a universal force of attraction acting between all matter. It is by far the weakest known force in nature and thus plays no role in determining the internal properties of everyday matter. Due to its long reach and universality, however, gravity shapes the structure and evolution of stars, galaxies, and the entire universe. The trajectories of bodies in the solar system are determined by the laws of gravity, while on Earth all bodies have a weight, or downward force of gravity, proportional to their mass, which the Earth’s mass exerts on them. Gravity is measured by the acceleration that it gives to freely falling objects. At the Earth’s surface, the acceleration of gravity is about 9.8 metres (32 feet) per second per second. Thus, for every second an object is in free fall, its speed increases by about 9.8 metres per second.
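As a quick illustration of these figures (an editorial example, not part of the article): an object dropped from rest and falling freely for a time $t$ reaches speed $v = gt$ and covers a distance $s = \tfrac{1}{2}gt^{2}$; with $g \approx 9.8\ \text{m s}^{-2}$, after 3 seconds it is moving at about $29\ \text{m s}^{-1}$ and has fallen about 44 m, air resistance neglected.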
The works of Isaac Newton and Albert Einstein dominate the development of gravitational theory. Newton’s classical theory of gravitational force held sway from his Principia, published in 1687, until Einstein’s work in the early 20th century. Even today, Newton’s theory is of sufficient accuracy for all but the most precise applications. Einstein’s modern field theory of general relativity predicts only minute quantitative differences from the Newtonian theory except in a few special cases. The major significance of Einstein’s theory is its radical conceptual departure from classical theory and its implications for further growth in physical thought.
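The classical theory referred to here can be summarized in a single formula (the standard textbook statement, supplied editorially): two bodies of masses $m_1$ and $m_2$ a distance $r$ apart attract each other with a force

$$F = G\,\frac{m_1 m_2}{r^{2}}, \qquad G \approx 6.67 \times 10^{-11}\ \text{N m}^2\,\text{kg}^{-2}.$$

The weight discussed above is this force with one mass that of the Earth and $r$ its radius, so that $g = GM_{\oplus}/R_{\oplus}^{2} \approx 9.8\ \text{m s}^{-2}$; general relativity reproduces these results in the weak-field limit and departs from them only by the minute corrections noted in the text.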
To cite this page:
“Gravitation”
Britannica Online.
<http://www.eb.com:180/cgi-bin/g?DocF=macro/5002/69.html>
[Accessed 10 May 1998].

From Africa to Asia

Genomic analysis of Andamanese provides insights into ancient human migration into Asia and adaptation (summary copy)

Nature Genetics (2016)
doi:10.1038/ng.3621
Published online 25 July 2016

To shed light on the peopling of South Asia and the origins of the morphological adaptations found there, we analyzed whole-genome sequences from 10 Andamanese individuals and compared them with sequences for 60 individuals from mainland Indian populations with different ethnic histories and with publicly available data from other populations. We show that all Asian and Pacific populations share a single origin and expansion out of Africa, contradicting an earlier proposal of two independent waves of migration. We also show that populations from South and Southeast Asia harbor a small proportion of ancestry from an unknown extinct hominin, and this ancestry is absent from Europeans and East Asians. The footprints of adaptive selection in the genomes of the Andamanese show that the characteristic distinctive phenotypes of this population (including very short stature) do not reflect an ancient African origin but instead result from strong natural selection on genes related to human body size.

[Three figures from the article (populgen1, populgen2, populgen3) are not reproduced here.]

A religious thinker on science

Karl Jaspers (1883-1969)

On My Philosophy (1941)

(Excerpts on science)

It shook my faith in the representatives of science, though not in science itself, to discover that famous scientists propounded many things in their textbooks which they passed off as the results of scientific investigation although they were by no means proven. I perceived the endless babble, the supposed “knowledge”. In school already I was astonished, rightly or wrongly, when the teachers’ answers to objections remained unsatisfactory… I observed the pathos of historians when they conclude a series of explications with the words “Now things necessarily had to happen in this way”, while actually this statement was merely suggestive ex post facto, but not at all convincing in itself: alternatives seemed equally possible, and there was always the element of chance… As a physician and psychiatrist I saw the precarious foundation of so many statements and actions [ed.: see my comment below] and realised with horror how, in our expert opinions, we based ourselves on positions which were far from certain, because we had always to come to a conclusion even when we did not know, in order that science might provide a cover, however unproved, for decisions the state found necessary.

Man is reduced to a condition of perplexity by confusing the knowledge that he can prove with the convictions by which he lives.

If science, with its limitation to cogent and universally valid knowledge, can do so little, failing as it does in the essentials, in the eternal problems: why then science at all?

Firstly, there is an irrepressible urge to know the knowable, to view the facts as they are, to learn about the events that happen to us: for example, mental illnesses, how they manifest themselves in association with those that harbour them, or how mental illness might be connected with mental creativity. The force of the original quest for knowledge disappears in the grand anticipatory gestures of seeming total knowledge and increases in mastering what is concretely knowable.

Secondly, science has had tremendously far-reaching effects. The state of our whole world, especially for the last one hundred years, is conditioned by science and its technical consequences: the inner attitude of all humanity is determined by the way and content of its knowledge. I can grasp the fate of the world only if I can grasp science. There is a fundamental question: why, although there is rationalism and intellectualisation wherever there are humans, has science emerged only in the Occident, taking former worlds off their hinges in its consequences and forcing humanity to obey it or perish? Only through science and face-to-face with science can I acquire an intensified consciousness of the historical situation, can I truly live in the spiritual situation of my time.

Thirdly, I have to turn to science in order to learn what it is, in all science, that impels and guides, without itself being cogent knowledge. The ideas that master infinity, the selection of what is essential, the comprehension of knowledge in the totality of the sciences; all this is not scientific insight, but reaches clear consciousness only through the pursuit of the sciences. Only by way of the sciences can I free myself from the bondage of a limited, dogmatic view of the world in order to arrive at the totality of the world and its reality.

The experience of the indispensability and compelling power of science caused me to regard throughout my life the following demands as valid for all philosophising: there must be freedom for all sciences, so that there may be freedom from scientific superstition, i.e. from false absolutes and pseudoknowledge. By freely espousing the sciences I become receptive to that which is beyond science but which can only become clear by way of it. Although I should pursue one science thoroughly, I should nevertheless turn to all the others as well, not in order to amass encyclopedic knowledge, but rather in order to become familiar with the fundamental possibilities, principles of knowledge, and the multiplicity of methods. The ultimate objective is to work out a methodology, which arises from the ground of a universal consciousness of Being and points up and illuminates Being.