27 Semantics of Words and Sentences

Dr. Neeru Tandon

epgp books


Learning Outcome:

Introduction: Language may be divided into its components of phonology, morphology, syntax and semantics. First the number of phonemes and their allophones and the suprasegmental features are determined, and then the morphemes and their organization into words are identified. The next step is to analyze the structure of sentences and find out how words are organized into phrases, phrases into clauses and clauses into sentences.

WHAT IS SEMANTICS:

Semantics is the study of meaning in language. The term is taken from the Greek seme, meaning sign. The word meaning can be defined in many ways, but the definition most pertinent to linguistics, and the one we will use, is that meaning is “the function of signs in language.” This understanding of meaning corresponds to the German philosopher Ludwig Wittgenstein’s definition: ‘the meaning of a word is its use in the language’ (in other words, the role a word plays in the language).

The term semantics was coined only in the 19th century, but the subject of meaning has interested philosophers for thousands of years. The Greek philosophers were the first people known to have debated the nature of meaning, and they held two opposing views on the subject.

As semantics is the study of the meaning of linguistic expressions, the language studied can be a natural language, such as English, or an artificial language, like a computer programming language. Linguists mainly study meaning in natural languages; in fact, semantics is one of the main branches of contemporary linguistics. Theoretical computer scientists and logicians think about artificial languages. In some areas of computer science, these divisions are crossed. In machine translation, for instance, computer scientists may want to relate natural language texts to abstract representations of their meanings; to do this, they have to design artificial languages for representing meanings.

There are strong connections to philosophy. In the early twentieth century, philosophers did much work in semantics, and some important work is still done by philosophers.

Meaning and its Types: Meaning has been a topic of interest for linguists as well as philosophers and conversation analysts. The term meaning can refer to several things, so it is important to note its implications in the language sciences. In linguistics the term meaning can refer to implication, entailment, intention, etc., and it is arbitrary; that is to say, there may not be an explanation for why a certain entity has got a certain name.

Meaning can be of several kinds, such as denotative, connotative, literal, figurative and idiomatic. The denotative meaning is the explicit or surface meaning, whereas the connotative meaning refers to the extended meaning that may be figurative or idiomatic. The figurative meaning is somewhat different from the literal meaning: the literal meaning of the individual words in a figurative use of language may indicate some concepts, but the overall interpretation requires additional links with entities not available in the lexical contents.

Additionally, the term meaning can also refer to the dictionary meaning and the encyclopaedic meaning. The dictionary meaning usually refers to the lexical meaning that users may find in a dictionary; this meaning is often short and depends on the speakers for exactness. In contrast, the encyclopaedic meaning covers all the information available about the word. It is important to note that the site for storing, accessing and recalling all kinds of meaning is the mental lexicon, an abstract entity assumed to be present in the minds of speakers.

Anyone who speaks a language has a truly amazing capacity to reason about the meanings of texts. Take, for instance, the sentence

(S) I can’t untie that knot with one hand.

Even though you have probably never seen this sentence, you can easily see things like the following:

  1. The sentence is about the abilities of whoever spoke or wrote it. (Call this person the speaker.)
  2. It’s also about a knot, maybe one that the speaker is pointing at.
  3. The sentence denies that the speaker has a certain ability. (This is the contribution of the word ‘can’t’.)
  4. Untying is a way of making something not tied.
  5. The sentence doesn’t mean that the knot has one hand; it has to do with how many hands are used to do the untying.

Arrangements of Meanings in Contemporary Semantics: Linguists who study semantics look for general rules that bring out the relationship between form, which is the observed arrangement of words in sentences, and meaning. This is interesting and challenging because these relationships are so complex.

This idea that meaningful units combine systematically to form larger meaningful units, and understanding sentences is a way of working out these combinations, has probably been the most important theme in contemporary semantics. The meaning of a sentence is not just an unordered heap of the meanings of its words. If that were true, then ‘Cowboys ride horses’ and ‘Horses ride cowboys’ would mean the same thing. So we need to think about arrangements of meanings.

Here is an arrangement that seems to bring out the relationships of the meanings in sentence (S).

Not [ I [ Able [ [ [Make [Not [Tied]]] [That knot ] ] [With One Hand] ] ] ]

The unit [Make [Not [Tied]]] here corresponds to the act of untying; it contains a subunit corresponding to the state of being untied. Larger units correspond to the act of untying-that-knot and to the act of untying-that-knot-with-one-hand. This act then combines with Able to make a larger unit, corresponding to the state of being-able-to-untie-that-knot-with-one-hand. This unit combines with I to make the thought that I have this state, that is, the thought that I-am-able-to-untie-that-knot-with-one-hand. Finally, this combines with Not and we get the denial of that thought.

A semantic rule for English might say that a simple sentence involving the word ‘can’t’ always corresponds to a meaning arrangement like

Not [ Able … ],

but never to one like

Able [ Not … ].

For instance, ‘I can’t dance’ means that I’m unable to dance; it doesn’t mean that I’m able not to dance.
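The scope rule just described can be sketched in code. The nested-tuple representation and the helper functions below are my own illustration, not a standard formalism; they simply check that, in the arrangement for ‘I can’t dance’, Not takes wider scope than Able and never the reverse.

```python
# Meaning arrangements as nested tuples: (operator, *arguments).
# 'I can't dance'  ->  Not [ I [ Able [ Dance ] ] ]
CANT_DANCE = ("Not", ("I", ("Able", "Dance")))

def contains(meaning, op):
    """True if operator `op` occurs anywhere in the arrangement."""
    if not isinstance(meaning, tuple):
        return meaning == op
    return meaning[0] == op or any(contains(arg, op) for arg in meaning[1:])

def scopes_over(meaning, outer, inner):
    """True if some `outer` node dominates an `inner` node in the tree."""
    if not isinstance(meaning, tuple):
        return False
    head, *args = meaning
    if head == outer and any(contains(arg, inner) for arg in args):
        return True
    return any(scopes_over(arg, outer, inner) for arg in args)

# The semantic rule for "can't": Not [ Able ... ], never Able [ Not ... ].
print(scopes_over(CANT_DANCE, "Not", "Able"))   # True  - unable to dance
print(scopes_over(CANT_DANCE, "Able", "Not"))   # False - not 'able not to dance'
```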

Semantics of Sentences: To assign meanings to the sentences of a language, you need to know what they are. It is the job of another area of linguistics, called syntax, to answer this question, by providing rules that show how sentences and other expressions are built up out of smaller parts, and eventually out of words. The meaning of a sentence depends not only on the words it contains, but on its syntactic makeup: the sentence

(S2) That can hurt you,

for instance, is ambiguous — it has two distinct meanings. These correspond to two distinct syntactic structures. In one structure ‘That’ is the subject and ‘can’ is an auxiliary verb (meaning “able”), and in the other ‘That can’ is the subject and ‘can’ is a noun (indicating a sort of container).

Because the meaning of a sentence depends so closely on its syntactic structure, linguists have given a lot of thought to the relations between syntactic structure and meaning; in fact, evidence about ambiguity is one way of testing ideas about syntactic structure.

Natural Language vs. Artificial Languages: Since sentences can be either true or false, the meanings of sentences usually involve the two truth-values true and false.

You can make up artificial languages for talking about these objects; some semanticists claim that such languages can be used to capture inner cognitive representations. Though “truth values” may seem artificial as components of meaning, they are very handy in talking about the meaning of things like negation. The semantic rule for negative sentences says that their meanings are like those of the corresponding positive sentences, except that the truth value is switched, false for true and true for false. ‘It isn’t raining’ is true if ‘It is raining’ is false, and false if ‘It is raining’ is true.

Semantics is also interested in valid reasoning, and this interest provides a strong connection to work on the semantics of artificial languages, since these languages are usually designed with some reasoning task in mind. Logical languages are designed to model theoretical reasoning such as mathematical proofs, while computer languages are intended to model a variety of general and special-purpose reasoning tasks. Validity is useful in working with proofs because it gives us a criterion for correctness. It is useful in much the same way with computer programs, where it can sometimes be used either to prove a program correct or (if the proof fails) to discover flaws in it.
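The truth-value rule for negation can be sketched directly. This is a minimal toy model of the rule stated above, not a full semantic theory:

```python
# Semantic rule for negation: the negative sentence's truth value is the
# positive sentence's value with true and false switched.

def negate(value: bool) -> bool:
    """Swap the truth value, as the rule for 'not' requires."""
    return not value

# Suppose 'It is raining' is false in the situation being described.
it_is_raining = False
it_isnt_raining = negate(it_is_raining)

print(it_isnt_raining)          # True: 'It isn't raining' is true
print(negate(it_isnt_raining))  # False: negating again restores the original value
```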

These ideas (which really come from logic) have proved to be very powerful in providing a theory of how the meanings of natural-language sentences depend on the meanings of the words they contain and their syntactic structure. Over the last forty years or so, there has been a lot of progress in working this out, not only for English, but for a wide variety of languages. This is made much easier by the fact that human languages are very similar in the kinds of rules that are needed for projecting meanings from words to sentences; they mainly differ in their words, and in the details of their syntactic rules.

Lexical Semantics or Semantics of Words: Recently, there has been more interest in lexical semantics, that is, in the semantics of words. Lexical semantics deals with a language’s lexicon, or the collection of words in a language. It is concerned with individual words (unlike compositional semantics, which is concerned with the meanings of sentences). Lexical semantics focuses on meanings in isolation, that is, without attention to their contribution to reference or truth conditions. Lexical semantics is not so much a matter of trying to write an “ideal dictionary”. (Dictionaries contain a lot of useful information, but don’t really provide a theory of meaning or good representations of meanings.) Rather, lexical semantics is concerned with systematic relations in the meanings of words, and with recurring patterns among different meanings of the same word. It is no accident, for instance, that you can say ‘Sam ate a grape’ and ‘Sam ate’, the former saying what Sam ate and the latter merely saying that Sam ate something. This same pattern occurs with many verbs.

An older question about word meaning is whether the connection between a word and what it means is natural or arbitrary. The naturalist view, held by Plato and his followers, maintained that there was an intrinsic motivation between a word and its meaning: the meaning of a word flows directly from its sound. The Greek word thalassa, sea, in its classical pronunciation, supposedly sounded like the waves rushing up onto the beach. If the naturalist view were entirely correct for all words, we would be able to tell the meaning of any word just by hearing it. In reality only a few onomatopoeic words in each language actually sound something like what they mean: swoosh, splash, bow wow, meow. Poets can skillfully use words with sound features that heighten the meaning intended, but poetic sound imagery represents a rare, highly clever use of language, so the naturalist approach is applicable to only a tiny portion of any language.

Logic is a help in lexical semantics, but lexical semantics is full of cases in which meanings depend subtly on context, and there are exceptions to many generalizations. (To undermine something is to mine under it; but to understand something is not to stand under it.) So logic doesn’t carry us as far here as it seems to carry us in the semantics of sentences.

Linguists who study meaning (semanticists) often divide the meaning of a word into semantic components based on real-world concepts, such as human/ live/ dead/ animal/ plant/ thing/ etc. Breaking the meaning of a word down into smaller semantic components in this way is called componential analysis.
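A toy componential analysis can be written out as feature tables. The particular words and binary features below are my own illustrative choices, following the textbook-style human/adult/male decomposition:

```python
# Componential analysis: each word's meaning decomposed into binary features.
features = {
    "man":   {"human": True, "adult": True,  "male": True},
    "woman": {"human": True, "adult": True,  "male": False},
    "boy":   {"human": True, "adult": False, "male": True},
    "girl":  {"human": True, "adult": False, "male": False},
}

def contrast(word1, word2):
    """Return the semantic components on which two words differ."""
    f1, f2 = features[word1], features[word2]
    return {component for component in f1 if f1[component] != f2[component]}

print(contrast("man", "woman"))  # {'male'}  - they differ only in sex
print(contrast("man", "boy"))    # {'adult'} - they differ only in age
```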

Semantics probably won’t help you find out the meaning of a word you don’t understand, though it does have a lot to say about the patterns of meaningfulness that you find in words. It certainly can’t help you understand the meaning of one of Shakespeare’s sonnets, since poetic meaning is so different from literal meaning. But as we learn more about semantics, we are finding out a lot about how the world’s languages match forms to meanings. And in doing that, we are learning a lot about ourselves and how we think, as well as acquiring knowledge that is useful in many different fields and applications.

How meaning affects word associations in language

The purely linguistic side of meaning is equally evident when examining how words combine with one another to produce phrases. The set of restrictions on how a word may combine with other words of a single syntactic category is referred to as the word’s collocability. Two words may have the same referent, and yet differ in their ability to combine with particular words.

In English, the word flock collocates with sheep, and school with fish, although both flock and school mean ‘group’.

Also, addled combines only with brains or eggs (just as one must steam rice but boil water); blond collocates with hair, while red may collocate with hair as well as other objects.

Idiosyncratic restrictions on the collocability of words result in set phrases: green with jealousy; white table vs. white lie. One can get or grow old, but only get drunk, get ready, not *grow drunk, *grow ready.

Every language has its own peculiar stock of set phrases. In English we face problems and interpret dreams, but in Modern Hebrew we stand in front of problems and solve dreams. In English we drink water but eat soup. In Japanese the verb for drink collocates not only with water and soup, but also with tablets and cigarettes.
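One crude way to picture collocability is as a lookup table from a word to the words it accepts. The table below is my own toy illustration built from the examples above (the extra nouns listed for red are hypothetical fillers):

```python
# Collocability restrictions as a simple lookup table (illustrative only).
collocations = {
    "flock":  {"sheep"},
    "school": {"fish"},
    "addled": {"brains", "eggs"},
    "blond":  {"hair"},
    "red":    {"hair", "wine", "car"},  # 'red' combines with many nouns
}

def collocates(modifier, noun):
    """True if the modifier is recorded as combining with the noun."""
    return noun in collocations.get(modifier, set())

print(collocates("flock", "sheep"))  # True
print(collocates("flock", "fish"))   # False - a 'school' of fish, not a 'flock'
```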

From the point of view of etymology, set phrases are of two types.

  1. The first type of set phrase, the collocation, may be defined as “a set phrase which still makes sense”: make noise, make haste. One simply doesn’t say to produce noise or make swiftness, even though such phrases would be perfectly understandable. Since collocations still may be taken literally, they can be paraphrased using regular syntactic transformations: Haste was made by me, noise was made by the children.
  2. Phrases whose words no longer make sense when taken literally are called idioms. The semantic relations between words in idiomatic set phrases may be illogical to varying degrees: white elephant sale, soap opera, to see red, break a leg, small voice, loud tie, wee hours of the night.

Thus, meaning involves real-world concepts and logic, but it is at the same time a linguistic category. The semantic structure of a language is the language’s special system of conveying extra-linguistic relations by idiosyncratic linguistic means.

Semantic relationships between words

Sense relations: Words in a language are not entirely discrete; they may show various kinds of relationships at the level of meaning. These relationships are known as sense relations. Modern studies of semantics are interested in meaning primarily in terms of word and sentence relationships. Let’s examine some semantic relationships between words.

Following are some commonly occurring relationships between words:

  • Synonymy shows the relationship of sameness or similarity of meaning as in big and large.
  • Antonymy shows the relationship of oppositeness of meaning as in day and night.
  • Polysemy signifies the relationship of multiple related meanings of a word, as in mouth, which may refer to a part of the body or to the opening of a river.
  • Homonymy refers to the multiple unrelated meanings of a word. For example, consider the word bank. It may refer to the monetary institution as well as the water body.
  • Meronymy: The term meronymy signifies the part-whole relationship between concepts. For example, the concepts finger and hand share the relationship of meronymy. In this case, finger is a meronym of hand and hand is the holonym of finger. It is important to note that finger is part of the hand, not a type of hand.

There are a few other semantic relations that may obtain between words. The first involves the distinction between a category and a particular type or example of that category. For example, a tiger is a type of feline, so feline is a category containing lion, tiger, etc.; color is a category containing red, green, etc. Thus, feline and color are hypernyms, or cover words, and red, green, lion and tiger are their hyponyms (also called taxonyms). The second involves a whole vs. a part of the whole, as with meronymy above: family is the holonym of child, mother and father.
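These relations can be encoded as simple mappings. The little tables below are my own toy data, drawn from the examples in this section; a full resource such as WordNet stores the same kinds of links on a large scale:

```python
# Sense relations as mappings (toy data, illustrative only).
hypernyms = {  # type-of (hyponymy): a tiger is a kind of feline
    "tiger": "feline", "lion": "feline",
    "red": "color", "green": "color",
}
holonyms = {   # part-of (meronymy): a finger is part of a hand
    "finger": "hand", "hand": "arm",
}

def is_a(word, category):
    """Hyponymy test: is `word` recorded as a type of `category`?"""
    return hypernyms.get(word) == category

def part_of(word, whole):
    """Meronymy test: is `word` recorded as a part of `whole`?"""
    return holonyms.get(word) == whole

print(is_a("tiger", "feline"))    # True  - hyponymy
print(part_of("finger", "hand"))  # True  - meronymy
print(is_a("finger", "hand"))     # False - a finger is not a *type* of hand
```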

Since words often originate from other words, a word very often has some historical reason for being the shape it is. Sometimes the origin (or etymology) of a word is completely transparent, as in the case of unknown from known, or discomfort from comfort. At other times the origin of a word is less immediately obvious but nevertheless present in the form of a word, as in the case of acorn < oak + orn. The conventionalist view of Aristotle, by contrast, holds that the connection between sound and meaning is completely arbitrary, and it is true that the form of most words is arbitrary from an extra-linguistic point of view.

Philologists (people who study language as well as anything created with language) often make a distinction between meaning and concept. It is possible to know the meaning of the word without knowing everything about the concept referred to by that meaning. For example, one can know the meaning of a word like diamond without knowing the chemical composition of the stone or that carbon and pencil lead are, chemically speaking, composed of the same substance.

Linguists have a second way of looking at the distinction between linguistic and real-world knowledge. They often discuss the difference between a word’s sense and its reference. A word’s sense is how the word relates to other words in a language (Wittgenstein’s “meaning”); its reference is how it relates to real world concepts. The French word mouton refers to a sheep as well as to the meat of the animal as used for food (the sense of the word combines two references). In English we have two separate words for each extra-linguistic reference. The sense of the English word sheep is limited by the presence of the word mutton in English.

Thus, the sense of a word concerns its linguistic boundaries in a particular language. The reference of a word concerns which concepts it refers to in the real world.

Noting how semantics is based on extra-linguistic categories, a group of linguists (including the Polish-born Australian linguist Anna Wierzbicka) have tried to reduce all meaning in language to a set of universal core concepts, such as tall, short, male, female, etc. This finite set of concepts is then used universally to describe the meanings of all words in all languages. This semantic approach to language structure faces several problems:

  • deciding which concepts are basic and which are derived;
  • distinguishing between sense and reference;
  • accounting for the fact that meaning is more than simply a reflection of real-world categories.

Summary: Idiosyncratic semantic constraints in the grammar result in reference being made using one form instead of another. Logical constraints result in reference not being made at all. Meaning is not merely a reference to concepts in the real world: it depends not only on the logical combination of real-world concepts but also on linguistic factors that are in part unique to each individual language. The system of language cannot be described only in terms of extra-linguistic logic.