
Saturday, March 29, 2008

FOURTH POSTING..

Assalamualaikum and happy day everyone....

Have you ever wanted to sort out the terminology of content analysis? Have you ever thought how wonderful it would be if you could see all occurrences of one term in its various contexts? If so, then you are no doubt familiar with the task of sitting down, reading through the source text, finding the "complicated" words and writing them down so you can look them up. This week we had to work in pairs, and together we completed this task based on the OTL book and other references.

According to the chapter written by Arshad Abd. Samad, UPM (OTL, p. 70), concordance programs such as Wordsmith, Monoconc Pro and Microconcord are used to analyze language data as well as to show how words and grammatical constructions are used. According to Schmitt (2002:34), the benefit of using such programs is that they help students to look at the systematicity of language.

In the Malaysian context, the use and analysis of language corpora has been limited because few local corpora are available. One effort to address this is a corpus compiled by researchers from Universiti Teknologi Malaysia (UTM). Two further corpora are one collected at Universiti Malaya (UM) and the English of Malaysian School Students (EMAS) corpus compiled by researchers from Universiti Putra Malaysia (UPM).

The EMAS corpus was collected in 2002 and consists of close to half a million words (Arshad et al., 2002). It contains written data consisting of three essays each from 800 Year 5, Form 1 and Form 4 students. The first essay was based on a picture series of a fishing trip, the second was entitled ‘The Happiest Day of My Life’, while the third was an ordinary piece of schoolwork selected by the students’ teachers.

Investigating Development

In this investigation, the comparison starts from three age groups and looks at their language production and vocabulary. Below is the gist of the results, in terms of:

1. Language productivity:

Productivity is indicated by the number of sentences per essay and the words per sentence.

Productivity = number of sentences per essay, words per essay, and words per sentence

Result = older students use more complex sentences.

2. Range of vocabulary

The diversity of the vocabulary used in a corpus is often determined by calculating the type-token ratio (Schmitt, 2002). It should also be noted that the EMAS corpus is a learner corpus that retains the students’ spelling and grammatical errors.
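To make these two measures concrete, here is a minimal Python sketch of how the number of sentences, words per sentence and type-token ratio could be computed for a single essay. It is only an illustration with a deliberately crude sentence splitter and tokeniser, not the procedure actually used in the EMAS study.

```python
import re

def essay_measures(essay_text):
    """Rough productivity and vocabulary-range figures for one essay."""
    # Crude sentence split on ., ! and ? (an assumption, not the EMAS procedure).
    sentences = [s for s in re.split(r"[.!?]+", essay_text) if s.strip()]
    # Crude tokenisation: lower-cased alphabetic word-forms.
    tokens = re.findall(r"[a-z']+", essay_text.lower())
    types = set(tokens)
    return {
        "sentences": len(sentences),
        "words": len(tokens),
        "words_per_sentence": len(tokens) / max(len(sentences), 1),
        "type_token_ratio": len(types) / max(len(tokens), 1),
    }

print(essay_measures("We went fishing. The fish was big. We went home happy."))
```

Because the learner essays keep the students’ own spelling errors, misspelled forms would count as separate types, which is one reason raw type-token ratios need careful interpretation.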

Sophistication of Vocabulary

The sophistication of the vocabulary can be determined by using specialized software such as RANGE (Nation, 2002). It indicates the kind of vocabulary used by comparing the text against several base lists of frequently used words.
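RANGE itself compares a text against Nation's published base lists; the sketch below only illustrates the underlying idea with a tiny made-up base list, so the list contents and the function name are assumptions.

```python
import re

# A tiny made-up stand-in for a base list of high-frequency words;
# the real RANGE program uses Nation's much larger published lists.
BASE_LIST_1 = {"the", "a", "an", "and", "to", "of", "in", "we", "went", "was"}

def coverage(text, base_list):
    """Proportion of tokens in the text that belong to the base list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    inside = sum(1 for token in tokens if token in base_list)
    return inside / max(len(tokens), 1)

sample = "We went to the river and caught an enormous trout."
print(f"Coverage by base list 1: {coverage(sample, BASE_LIST_1):.0%}")
```

A text whose tokens mostly fall outside the high-frequency base lists is, on this measure, using a more sophisticated vocabulary.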

In brief, the researchers drew on several corpora, including the EMAS corpus, and used a comparative methodology in which students of three ages, 11, 13 and 16, took part, which made the research successful.

Reference:

Arshad Abd. Samad (UPM). ‘Beyond Concordance Lines’, OTL book, p. 70.

Concordance is literally agreement, harmony, hence derivatively a citation of parallel passages, and specifically an alphabetic arrangement of the words contained in a book with citations of the passages in which they occur. - Encyclopedia Britannica, 11 ed.

It is also bulky, often eight, ten, or more times the size of the original text. It's easy to see why: if a line in the original text contains (say) eight words, that line will appear in the concordance once for each of those words: eight times in all. The Web Concordance System goes to considerable lengths to minimize delays when delivering concordances across the network.

In its simplest form, a Keyword-in-Context (KWIC) Concordance is a listing of some or all of the words in a text or set of texts, surrounded by the text that they are embedded in. Here is a section of a concordance of just the first sentence of this page:

  surrounded by the text    that   they are embedded in.
 sting of some or all of    the    words in a text, or set
 of texts, surrounded by    the    text that they are embed

Typically, the concordance lines would show more of the surrounding text, so the user could more clearly understand how the words are used.
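For readers curious about the mechanics, here is a minimal Python sketch of a KWIC display. It is only an illustration of the idea, not the code behind Monoconc, Wordsmith or the Web Concordance System.

```python
import re

def kwic(text, keyword, width=30):
    """Print every occurrence of keyword with `width` characters of context on each side."""
    for match in re.finditer(rf"\b{re.escape(keyword)}\b", text, flags=re.IGNORECASE):
        start, end = match.start(), match.end()
        left = text[max(0, start - width):start]
        right = text[end:end + width]
        print(f"{left:>{width}} {match.group():^8} {right}")

sample = ("In its simplest form, a Keyword-in-Context (KWIC) Concordance is a "
          "listing of some or all of the words in a text or set of texts, "
          "surrounded by the text that they are embedded in.")
kwic(sample, "the")
```

Widening the `width` parameter shows more of the surrounding text, as described in the paragraph above.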

The purpose of a concordance is to study how words are used in a language, and to allow us to acquire a deeper understanding of meaning and usage than can be obtained from a dictionary. As an example, consider the words tan and auburn. Both can be used to mean a color; both indicate a brownish hue. This much you can find in a dictionary. But in a dictionary, you would not find that auburn is used frequently to describe hair color but never to describe skin color. Nor would you find that tan is not used to describe hair. But a concordance which uses a large amount of text from the target language could show you many occurrences of these two words at a glance (and other meanings as well, of course, such as the use of tan as an abbreviation of a trigonometric tangent). In this way you could infer how native speakers use the words, and how these usages may be limited to specific situations.

Its simplest use is as an index, to locate quickly any passage in a text. All you need to know is one word from the passage: look up that word in a concordance to the text and you will find the passage. But a concordance does much more than this. What you find when you look up any word is a gathering-together of all the usages of that word. Straight away you can compare all the contexts in which the word is used. This often enables special insights into the particular meanings of a text and into its characteristic language. For literary, legal, or philosophical texts, where language and meaning are primary concerns, the concordance is one of the most powerful investigative tools available.

Apart from that, a concordance will also show quickly how often any word is used. Just as significantly, it can show you what words are not used. Such features offer special insights into the issues or themes which are important or recurrent in a text - and they may not be quite what you expect.

However, concordances are not only for literary use. Computer programmers also know them as cross-reference systems, which enable a team of people working on the same project to keep track of all the references to a particular entity (say, a variable name) across many files which make up the project. Organizations of all kinds can use concordances to provide rapid reference to internal information in any set of documents on an intranet. Something as humble as a series of minutes of meetings, for example, becomes much more powerful if all references to any topic can be seen instantly side-by-side. For more interesting material such as a set of related research papers the possibilities are even greater.

All the words used in the original text appear arranged alphabetically in a concordance. However, all the occurrences of any particular word (the citations for each headword), once gathered together, can be arranged in various ways. A common way is to follow the order of appearance in the original text. More revealing arrangements are sometimes possible. In my Concordance to the Poetry of Philip Larkin (Olms-Weidmann, 1995), for example, every word is accompanied by a note of the year in which it was used, so letting the reader follow the evolution of the poet's vocabulary.

A concordance doesn't do your thinking or investigating for you, any more than a computer does. You have to ask the right questions, you have to follow up your own hunches and insights. A concordance is a matchless tool for investigating texts. Its power is limited only by its user's imagination.

[1]A computer concordance can now be prepared relatively easily provided a suitable e-text is available. However, in practice a good deal of editorial work is required in the early stages if a reliable concordance is wanted, and to produce a concordance of a quality suitable for publication in book form is generally reckoned to take around 2000 hours.

In brief, with Concordance you can make indexes and word lists, count word frequencies, compare different usages of a word, analyze keywords, find phrases and idioms, publish to the web and much more besides. It is being used in language teaching and learning, data mining and data clean-up, literary and linguistic scholarship, translation and language engineering, corpus linguistics, natural language software development, lexicography, and content analysis in many disciplines including accountancy, history, marketing, musicology, politics, geography and media studies.

My friend Nurhuda Binti Muhamad Nazri and I have chosen to look at the application of concordancing to content analysis. Content analysis is a method for analyzing written and oral textual materials which is used sparingly by organizational researchers. Below is a methodological overview that reviews what content analysis is and how it has evolved, examples of its use in leadership research, the variety of analysis techniques to which its results lend themselves, and the strengths and weaknesses associated with the method. In addition, a suggested procedure is presented to assist users in carrying out content analysis. Finally, the role of content analysis in future leadership research is also considered.

I. Methodological background

The following is an attempt briefly to sketch a methodology for elementary text-analysis, with particular emphasis on how to approach a text one does not know well. It is essentially an abstraction of the practice illustrated in the exercises of the following pages under this topic. Here no particular tool for the activity is presumed, nor are particularly sophisticated tools in view. All of what follows can be done with conceptually quite simple ones, such as Monoconc.

A. Kinds of text-analysis

Throughout “text-analysis” should be taken to mean “the analysis of text with the aid of algorithmic techniques”.

An algorithm may be defined as a step-by-step procedure capable of being run on a computer—i.e., an unambiguous and completely stated description of what the computer is to do. It can be expressed by a computer program but need not be; often the specifics of how an algorithm is implemented in a particular programming language would obscure the essentials. Text-analytic methods cover a spectrum between the completely algorithmic and the exploratory: in exploratory work we do not have a specific goal or procedure to follow but instead we look for leads. Most work mixes approaches from various points in the spectrum: we may make a word frequency list by algorithmic methods, but the results always need to be interpreted and investigated further, usually by much less algorithmic means.

Text-analysis may be divided into the following kinds, usually practiced at different places along the algorithmic–exploratory spectrum:

  • Concording and related transformations of the textual data. These constitute the primary focus of attention here.

  • Content analysis. This is a closely related if not overlapping kind, often included under the general rubric of “qualitative analysis”, and used primarily in the social sciences. It is “a systematic, replicable technique for compressing many words of text into fewer content categories based on explicit rules of coding” (Stemler 2001). It often involves building and applying a “concept dictionary” or fixed vocabulary of terms on the basis of which words are extracted from the textual data for concording or statistical computation.

  • Statistical analysis. This involves counting particular features of the textual data and then applying one or more mathematical transformations. The simplest type produces frequency lists of word-forms, usually arranged from the most to the least frequent. We will pay some attention to such lists here. More powerful and complex types of statistical analysis are used for example in stylometry and authorship studies; see Burrows 1992, Holmes 1998.
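As an illustration of the simplest statistical type just mentioned, a word-form frequency list can be produced in a few lines of Python; this is a minimal sketch assuming crude tokenisation, not a replacement for a real concordancer.

```python
import re
from collections import Counter

def frequency_list(text, top=10):
    """Return the `top` most frequent word-forms, most frequent first."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens).most_common(top)

sample = "I don't know, I don't even know what I know any more."
for form, count in frequency_list(sample):
    print(f"{count:4d}  {form}")
```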

B. Application to unseen or poorly known texts

There are two reasons why one might legitimately be using text-analytic techniques on a text one does not know well. First, corpora of use in the humanities are approaching and some are already past the point at which a human being could read through their contents in a lifetime—especially given when that person might begin his or her reading; furthermore, some of these are not intended for normal reading, such as the non-literary collections meant for historical or linguistic purposes. Second, and more importantly, text-analysis is fundamentally different from manual methods and so reveals aspects of even well-known texts that one is likely not to have considered before. To the degree to which these texts are made new by the change in perspective, understanding will be aided by text-analytic techniques.

The first reason, that corpora tend to be too large, can be put in more positive terms: a good command of these techniques will make it practical for the ignorant but intelligent person to profit from materials outside his or her own field. Thus interdisciplinary research tends to be fostered.

II. Prior knowledge

We assume, then, application of the first kind of analysis, concording, with some use of frequency lists, to unseen texts.

Nevertheless the place to begin is with whatever you know about the given body of text (known as the corpus). It is unlikely that you will know absolutely nothing at all about it, but in any case read around in it briefly, picking up what you can. Consider

  1. Genre. What kind of a text do you have? Novelistic, poetic, bureaucratic, legal? Was it originally written, or was it delivered orally? What are the formal features you would expect such a text to have, which can you spot when you look at it? In the Stephen material, for example, we have the spontaneous secular sermons of a hippie “guru”, stream-of-consciousness, orally delivered. Stephen is talking to an audience in a highly personal style and mode.

  2. Rhetoric and vocabulary. Genre will tend to define a particular way of speaking or writing and to shape the vocabulary, including how frequently particular words appear. In the case of Stephen, personal pronouns are quite frequent—he is talking directly to the people in his audience (hence "you") and centrally about a way of life centered on awareness (hence "know").

  3. Social or psychological circumstances. Familiarity with the social circumstances surrounding the creation of the text may be relevant; so also the known or suspected psychology of the author or speaker. Note that Stephen's sermons, however nonsensical they may seem to subsequent generations, were hugely popular, uncomfortable to attend (people sat on the floor), raptly attended and meticulously transcribed. Evidently they had meaning to those who listened. Therefore our search for patterns of meaning in the Stephen corpus is not in the least mistaken.

  4. Historical circumstances. The more you know about the historical circumstances under which the text was produced the better. In the Stephen corpus, for example, it is crucial to understand how widely the now rather odd sounding language of Stephen's hippie subculture was accepted and spoken. As just noted, his talks apparently communicated a great deal to his audience. Hence we may conclude that the usages are richly dialectical. Awareness of his historically (and to a certain extent, regionally) defined vocabulary will give you hints as to where you might begin in a search for interesting terms.

  5. Nature of the artifact. The physical object from which the text has been taken, usually a printed book, may be relevant. Stephen's book, Monday Night Class, gives several indications of its time and subculture of origin; likewise, the photographs included in it and on the back cover reinforce the historical fact of the seriousness with which his words were taken. These, again, give reason to press forward with the analysis, and the clearly religious character of the assemblies to which he spoke directs you to the corresponding language.

In other words, the seemingly disembodied electronic text has several contexts essential to a full understanding of it. The more of that understanding you can have the better, though because the focus here is on technique, the point is not to dwell on acquiring knowledge of the contexts, only to get what you can quickly.

III. Steps in the analysis

The methodology outlined here is like a fishing expedition: you go at the text with a quiet, open mind, having little or no idea what you are going to catch. If you are after something in particular, then of course it is a different kind of activity. Even in a focused enquiry, however, software allows you to ask certain kinds of questions so easily and get answers back so quickly that curiosity is given a much freer rein; you can afford to play, ask even apparently improbable questions, and so raise the chances that you will be surprised by an important result you had little reason to expect. Thus a certain amount of fishing is recommended even for the focused questioner.

  1. High-frequency words. A quite crude but useful technique is to look through a list of the most frequent word-forms for anything that is unusual or particularly characteristic of the text in question. Frequency of word-forms is only roughly related to what a text says, but it is related, and so is useful to work with.

Two examples spring to mind from both the Simpson and Stephen corpora: the verb “know” and the first-person singular pronoun “I”. (Note, in the comparison study outlined in Corpus analysis of meaning, how so little information says so much about both, how it draws a contrastive parallel between the two men.)

There are of course severe limitations on what you can do with a frequency list, especially if you are interested in words (dictionary headwords, such as “know” or “I”) rather than word-forms (such as “knows” or “knew”, or “me” or “we”), and much more if you are focused on ideas (such as cognition or the self) rather than words. If the former, then you need to find all the inflected forms of the word and combine their frequencies. If the latter, you need to find all the relevant synonyms and combine the frequencies of all the inflected forms; even then, since ideas are only tangentially related to words, the result would be incomplete. Very often, however, the raw frequency list will prove useful enough.
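One way to combine the frequencies of inflected forms into a single headword count is sketched below; the small lemma table is a hand-made assumption for illustration, not an exhaustive lemmatiser.

```python
from collections import Counter

# Hand-made lemma table mapping word-forms to a headword
# (an illustrative assumption, not an exhaustive lemmatiser).
LEMMAS = {"knows": "know", "knew": "know", "known": "know", "knowing": "know",
          "me": "i", "my": "i", "mine": "i", "we": "i", "us": "i", "our": "i", "ours": "i"}

def headword_frequencies(word_form_counts):
    """Fold word-form counts into headword counts using the lemma table."""
    merged = Counter()
    for form, count in word_form_counts.items():
        merged[LEMMAS.get(form, form)] += count
    return merged

forms = Counter({"know": 4, "knew": 2, "knows": 1, "i": 5, "me": 3, "we": 2})
print(headword_frequencies(forms))   # know: 7, i: 10
```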

  2. Collocations. A somewhat more sophisticated tool for relating word-forms to meaning generates information on what words tend to be found together, either contiguously, such as "I didn't know that", or within a specified proximity or span, e.g. "black" within 5 words of "bag". The idea here is that repeated collocations are more reliable indicators of meaning than repetitions of single word-forms. See Sinclair 1991 (chapter 8) for a full discussion.

The program Monoconc and others will generate lists of collocations ordered by frequency so that you can identify recurring phrases and associations of words quickly. Note that if you wish to study collocations over a wider span than the program permits, you can do this by following these steps:

    1. Set the concordance “window” (the number of characters shown on either side of the target word) to a sufficiently large number;
    2. Run a concordance of the word for which you wish to study the collocates;
    3. Save the concordance as a text-file;
    4. Use that file as input to the program and generate a frequency listing from it.

This listing will thus give you the frequencies of the collocates of your target word.
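The same window-then-count procedure can be sketched directly in Python, without the save-and-reload step; the span size and tokenisation here are illustrative assumptions rather than Monoconc's own settings.

```python
import re
from collections import Counter

def collocates(text, node, span=5):
    """Count word-forms occurring within `span` words of each occurrence of `node`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for i, token in enumerate(tokens):
        if token == node:
            window = tokens[max(0, i - span):i] + tokens[i + 1:i + 1 + span]
            counts.update(window)
    return counts.most_common()

sample = "I don't know. I don't even know that. You know I don't know why."
print(collocates(sample, "know", span=3))
```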

A government document, for example, will tend to have quite high frequencies of standard phrases; for a literary work, even two occurrences of a phrase may be highly significant. The Monoconc-style listing, of collocations within a span, is of course less bound to literal repetition—it will include together, for example, instances of the collocation of “don't” and “know” in the phrases “I don't know” and “I don't even know”.

Classicists will be interested in the collocation tools implemented by the Perseus Project; see in particular the Greek and Latin context search tools.

  3. Concording. The essential idea behind the concordance, especially the KWIC, is to direct your attention to the immediate linguistic environment of the specified word. Hence when you find a potentially interesting word, often the next step is to run a concordance on it, then look down the concordance listing to see what patterns you can spot. With Monoconc, generating collocation statistics will often follow immediately.

A KWIC is made considerably more useful by the ability to sort an on-screen listing according to the words to the left and right of the target words; Monoconc offers such ability, and the same can be done with other concordance software. Such sorting tends to bring out the patterns, since repetitions are grouped together.
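A minimal sketch of such sorting, assuming the concordance lines are already held as (left, keyword, right) tuples:

```python
# Concordance lines held as (left context, keyword, right context) tuples.
kwic_lines = [
    ("I really don't", "know", "that they are coming"),
    ("you ought to",   "know", "better than that"),
    ("I don't even",   "know", "that it matters"),
]

def first_right_word(line):
    """Sort key: the first word of the right-hand context, lower-cased."""
    left, keyword, right = line
    words = right.split()
    return words[0].lower() if words else ""

# Sorting by the right context groups repeated patterns such as "know that ..." together.
for left, keyword, right in sorted(kwic_lines, key=first_right_word):
    print(f"{left:>20}  {keyword}  {right}")
```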

Since current KWIC software deals only with word-forms rather than words, you will often also need to concord the inflected forms. In English many of these can be caught by use of the appropriate wildcards, but not all. An example is go, went and gone; another is I, me, my, mine, we, us, our(s), all forms of the first-person personal pronoun.
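In a concordancer the wildcard form might be written know*; a regular-expression sketch of the same idea is below, and it also shows why irregular forms such as went still have to be listed by hand.

```python
import re

def match_forms(text, pattern):
    """Return every distinct word-form in the text that matches the pattern."""
    return sorted(set(re.findall(pattern, text.lower())))

sample = ("She knew what he knows, and I know it too; "
          "going where we went, they had gone before I go.")

# The wildcard know* as a regex: catches know and knows, but not the irregular knew.
print(match_forms(sample, r"\bknow\w*\b"))

# Irregular forms such as went and gone still have to be listed explicitly.
print(match_forms(sample, r"\b(?:go(?:es|ing|ne)?|went)\b"))
```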

Synonyms, of course, are entirely your task to identify, but doing so is made considerably easier than it might be by the tendency in many writers and speakers to emphasize an idea by using a number of synonyms together or near each other. Thus the text can itself help you to build a reasonable list for further concording. Compiling such a list is a recursive activity—in the beginning, a new synonym will tend to turn up others; when the law of diminishing returns asserts itself, it is time to stop. The result we may call a “fixed vocabulary”, to which can be added the contiguous collocations you have identified. All together these represent a translation of an idea, as it were, into data.

A fixed vocabulary can then be used to turn up passages in the text for study—as is commonly done in “content analysis”. If you know the text well, then a very interesting further question to ask is, when does this vocabulary not identify passages in which the targeted idea clearly or arguably occurs? Why does it not? Some very interesting findings can result from pursuit of this question.
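Turning a fixed vocabulary into retrieved passages can be sketched as follows; the vocabulary, the sentence splitter and the sample text are all illustrative assumptions, not a list taken from any real study.

```python
import re

# An illustrative fixed vocabulary for an idea of "awareness/cognition"
# (an assumption for this sketch only).
FIXED_VOCAB = {"know", "knows", "knew", "known", "aware", "awareness", "understand"}

def passages_containing(text, vocabulary):
    """Return each (crudely split) sentence containing at least one vocabulary item."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    hits = []
    for sentence in sentences:
        tokens = set(re.findall(r"[a-z']+", sentence.lower()))
        if tokens & vocabulary:
            hits.append(sentence)
    return hits

sample = ("You already know this. The weather was fine that day. "
          "Awareness is what the whole thing is about.")
for passage in passages_containing(sample, FIXED_VOCAB):
    print(passage)
```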

IV. Text and meta-text

In a sense markup is not new. From very early on in the development of written language graphical devices and words have been used to tell us about the text we are reading. Word-separators are an example. In classical Roman inscriptions, marks were sometimes used for the purpose when confusion might otherwise result, but spaces between words did not become conventional until later, in the Middle Ages, when they were introduced in manuscripts, perhaps to assist the then new practice of silent reading. Paragraphing, which began with interlinear or marginal graphics in manuscripts, indicates that a block of text is to be considered a significant unit. Punctuation marks sometimes indicate only a pause, sometimes confer a particular status or meaning on the punctuated words. Chapter titles may be indicated as such in many ways, e.g. by blank space, the word "chapter", graphics of various kinds.

All such devices are instances of metatext, i.e. text, textual symbols or other graphical devices used to say something about the text we read. Furthermore metatext is just one kind of "paratext", as Gérard Genette has called those devices and conventions which form part of the complex mediation between the text, the author, the publisher and reader. Before we even begin reading a book, its paratext tells us many things about it and so shapes our subsequent reading—if, partly on the basis of that paratext, we decide to read it.

Although the basic notion of metatext is not new, its implementation in markup creates new conditions for work by imposing two computational constraints: total explicitness and absolute consistency. If the paratext is to be computationally tractable, if we want it to figure into our analysis, it must be rendered as markup, explicitly and consistently. As in all other cases of markup, this necessarily means some degree of interpretation—from almost none (e.g. that a block of text preceded and followed by blank lines is a paragraph) to a significant amount (e.g. that the design on the cover of a poetry magazine figures into how we read the poetry inside). In other words, again, encoding provides a means for the scholar to express his or her interpretation of the text.

V. Text-analytic markup

In text-analysis one is for example often concerned not just with the immediate linguistic environment of the target word, but also with the structure of the text in which that word occurs, especially if the analysis is literary or historical. If the corpus you are analyzing is a novel, for example, you will likely want to know the chapter number of each occurrence; if it is a play, the act, scene and line numbers, perhaps also the speaker of the lines. Furthermore, you may want to specify in your query which part of the text to search, e.g. the word "blood" only when spoken by Macbeth, or the word "exit" only when it is not part of a stage direction. Since in general it is impossible automatically to extract such information from an unprepared text, text-analysis will often require that the text be prepared by manual insertion of metalinguistic tags that unambiguously denote this structural information.
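A minimal sketch of such a structure-aware query, using Python's standard xml.etree.ElementTree on an invented TEI-like fragment (the element and attribute names, and the line numbers, are assumptions for illustration):

```python
import xml.etree.ElementTree as ET

# An invented, TEI-like fragment: element and attribute names are assumptions.
PLAY = """
<play>
  <act n="2"><scene n="2">
    <sp who="Macbeth"><l n="57">Will all great Neptune's ocean wash this blood</l></sp>
    <sp who="Lady Macbeth"><l n="64">A little water clears us of this deed.</l></sp>
  </scene></act>
  <act n="5"><scene n="1">
    <sp who="Lady Macbeth"><l n="48">Here's the smell of the blood still.</l></sp>
  </scene></act>
</play>
"""

root = ET.fromstring(PLAY)
# Find "blood" only in lines spoken by Macbeth, reporting act, scene and line numbers.
for act in root.findall("act"):
    for scene in act.findall("scene"):
        for sp in scene.findall("sp"):
            if sp.get("who") != "Macbeth":
                continue
            for line in sp.findall("l"):
                if "blood" in (line.text or "").lower():
                    print(f"Act {act.get('n')}, scene {scene.get('n')}, "
                          f"line {line.get('n')}: {line.text}")
```

Without the who attribute in the markup, the same query could not distinguish Macbeth's "blood" from Lady Macbeth's, which is exactly the point made above about manual tagging.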

Textual structure, as suggested by these examples, may involve simply a translation of the conventions of a printed original, but it may also be significantly interpretative. There may be several competing structures one wishes to take account of. The boundary between one part of a text and another may be ambiguous.

VI. Summary of essential points

When texts are marked up for the purposes of literary study (or any other kind that focuses on how the text says what it says), the metatext tends to be highly interpretative. Two imperatives follow.

  1. Consistency. Although nothing prevents a person from simply tagging something as an instance of something else (e.g. "Joseph is a fruitful bough" as a metaphor) and doing so without further thought, the results will not be very useful unless the tagging is done with as close to absolute consistency as possible. Thus if I tag location X as an instance of phenomenon Y because criteria A, B and C are present there, then wherever A, B and C occur, I must tag those locations as instances of Y also. If my reading of the text in those other locations will not allow me to do that, then I must revise my criteria accordingly.

  2. Explicitness. Such rigorous, well-defined consistency implies a set of rules or guidelines that the tagger has followed. In many academic instances these rules will need to be invented by the tagging scholar to describe what he or she has figured out to do, but in any case they are available to the reader, who is therefore able to use the marked-up text with appropriate confidence and direction. He or she will know the criteria by which e.g. all metaphors were tagged and so can either agree with or differ from what has been done. A better understanding of the phenomenon should result.

Two consequences follow from such work, provided of course that it has been done well:

  1. A useful means for studying the text on the basis of the interpretation expressed in markup.
  2. Increased understanding of the text through the failure of markup to capture its features completely.

Besides that, we also want to show a good example of analyzing a text using a concordance, as in the example below:

CONCORDANCE

1. to guide herself about the house, and to do a good deal

2. who told me all this about my poor mother, long after her

3. did not often think about him, she had fallen so

4. get from the people about the farm, who hardly waited

5. of work, may be, about the farm. And he would take

6. and kept he with him about the farm. Gregory was made

7. when I was about sixteen, and Gregory nineteen

8. an errand to a place about seven miles distant by the road

9. by the road, but only about four by the Fells. He bade me

10. I tried to move about, but I dared not go far, for fear

11. spirits of the Fells, about whom I had heard so many

12. feared that, in moving about just now, I have lost the right

13. has gotten ought about thee they’ll know at borne?’ I

14. still, and again about our mother, when I fell asleep

15. were many voices about me—many faces hovering

16. when all were running about in wild alarm, not knowing

17. … I – stocked house, not above half an hour’s walk from

18. that might not be above his comprehension. I think he

Analyzing the words

From the example above, we can see the repeated use of one preposition in the sentences. So we can start analyzing the text in terms of the occurrences of the preposition 'about' and how it functions in each sentence.

In short, a preposition is a word used before a noun or pronoun to show its relation to some other word in the sentence. Common types of preposition are prepositions of time, place, direction/movement, manner and purpose.
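Before classifying each line by hand, a crude first pass can simply tally the word immediately before and after 'about' in the concordance lines; a minimal sketch, assuming the lines are stored as plain strings:

```python
import re
from collections import Counter

# A few of the concordance lines above, stored as plain strings.
concordance_lines = [
    "to guide herself about the house, and to do a good deal",
    "who told me all this about my poor mother, long after her",
    "did not often think about him, she had fallen so",
    "get from the people about the farm, who hardly waited",
]

left, right = Counter(), Counter()
for line in concordance_lines:
    tokens = re.findall(r"[a-z']+", line.lower())
    for i, token in enumerate(tokens):
        if token == "about":
            if i > 0:
                left[tokens[i - 1]] += 1
            if i + 1 < len(tokens):
                right[tokens[i + 1]] += 1

print("word before 'about':", left.most_common())
print("word after  'about':", right.most_common())
```

The manual classification of all eighteen lines follows.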

1. herself (pronoun) about (preposition) the house (noun) = Preposition of place

2. this (determiner) about (preposition) my (pronoun) = Preposition of manner

3. think (verb) about (preposition) him (pronoun) = Preposition of manner

4. the people (noun) about (preposition) the farm (noun) = Preposition of place

5. may be (verb) about (preposition) the farm (noun) = Preposition of place

6. him (pronoun) about (preposition) the farm (noun) = Preposition of place

7. was (verb) about (preposition) sixteen (number) = Preposition of time

8. a place (noun) about (preposition) seven miles (number) = Preposition of direction

9. only (adverb) about (preposition) four (number) = Preposition of direction

10. move (verb) about (preposition) but (conjunction) = Preposition of direction

11. Fells (noun) about (preposition) whom (wh-question) = Preposition of place

12. moving (verb) about (preposition) just now (adverb) = Preposition of manner

13. ought (verb) about (preposition) thee (noun) = Preposition of manner

14. again (adverb) about (preposition) our (pronoun) = Preposition of manner

15. voices (noun) about (preposition) me (pronoun) = Preposition of manner

16. running (verb) about (preposition) in (preposition) = Preposition of manner

17. not (verb) about (preposition) half an hour's walk (time) = Preposition of time

18. be (verb) about (preposition) his (pronoun) = Preposition of manner

Before we end this piece of writing about concordance, we would like to show readers some of the comments sent by users around the world to the author of the website http://www.concordancesoftware.co.uk/userviews.htm between 1999 and 2004. These comments illustrate how successful the Concordance program has been.

‘I recently downloaded Concordance. Brilliant idea. It's like magic happening right before your eyes when I do content analysis. Really facilitates getting to the heart of meaning.'

'I did a Web search for concordance tools, found several, evaluated all of them, and yours was head and shoulders above the rest. '

'Your program is amazingly documented. There is so much there it is easy to overlook its features.'

'The ability to save the results as HTML pages is certainly a very useful function.'

'I tried the demo. The following is my impression after using it -

i. The user interface is good and easily accessible.

ii. The text processing speed is awesome.

iii. The capability to get any size of data is also admirable.

iv. The ability to publish the result to web makes me excited.'

'Your Concordance is very well designed. As I have to run a number of tests on language data, I was thrilled to find the download site. Also, because I am presently so busy, I don't have a great deal of time to waste on the "learning curve factor." Concordance is going to save me a lot of time and headaches. Well done!'

'I like your product. I helped to develop a concordance product in the 1970s in PL/1 when I was a programmer at Cornell U, and was surprised by the significant help it provided to researchers. Your product has great features and works quickly. Congratulations. '

'I want to congratulate you for the user-friendly functionality of Concordance. It is really a pleasure to work with. I bought your program out of frustration with using another concordance program... which was difficult to use. I especially appreciate your program's Lemmatiser, which permits me to group thematically related words in order to query their appearance in a text.'

...about its wide variety of uses:

'I've been having fun with Concordance. It's a very useful tool, and I can use it for text analyses in an industrial environment.' - language engineer

'I have been trying out your concordance package and have been very impressed with its flexibility, especially in handling very large numbers of words and occurrences.' - drama researcher

'My background is in the marketing of industrial equipment. I have started to use your program to analyze various sources of information about my industry and monitor the occurrences of particular words and phrases... I thoroughly enjoy working with Concordance and have become confident that is a very useful marketing tool... it also amazed me to see how the concordances can stimulate ideas.' - marketing director

'I have been testing your programme Concordance for two weeks now and am considering buying it because it can be useful in my job as a translator.'

'I am trying out your Concordance application (...I love it, so far) -... I am a PDF help author...I am going to purchase your application on-line as soon as I finish writing this e-mail.'

'I recently purchased Concordance and am delighted with it. I use it for a somewhat odd purpose: computer forensics. Essentially I examine hard drives seeking evidence of certain specified activities.'

'Your response motivated me to download your "Concordance" and after an hour I had my first concordance of John's Gospel.' - biblical scholar

'Your concordance software, which I have just ordered, is very fine. I will use it to help me edit parts of the Canterbury Tales in such a way that stressed vowels are spelled consistently (distinguishing long vs. short and high vs. low mid) and vowel letters having no equivalent in the pronunciation of a regularly alternating weak/strong meter are omitted.'

'Concordance is still a great joy to work with.' - language teacher

'I've been working with a couple of groups of students on Concordance for 6 or 7 weeks now, and it is going well.' - teacher of humanities computing course.

'I like your Concordance quite a bit. It definitely does what I asked for.' - computer magazine editor

"It really has saved us days of work - possibly weeks". - A professor of psychology

'I just wanted to applaud you and your concordance software. After a few hours of learning its functioning capabilities I'm simply amazed. One incredible piece of work! Highly recommended!'

'Bravo! Extremely useful.'

'This is a nice piece of work - I have registered.'

'Thanks for the marvelous program!'

References:

http://www.concordancesoftware.co.uk/userviews.htm

http://www.translate.com/technology/tools/olifant/ReadMe.htm.

http://www.translatum.gr/forum/index.php?topic=344.0

http://www.lingo24.com/articles/CAT_tools_A_brief_overview_about_concordance%20software--2.htm

http://www.blackwell-synergy.com/doi/abs/10.1111/j.1468-2958.2002.tb00826.x

Arshad Abd. Samad. Online Teaching Learning in ELT, p. 70.

Nandy. 2005. Mastering English the Easy Way. 2nd printing. Selangor: Pelanduk Publications (M) Sdn. Bhd.



[1] www.concordancesoftware.co.uk