Crises with Image Banks: authenticity and visibility in social media content

In May 2019, the XIII Abrapcorp Congress took place in São Paulo. Daniele Rodrigues and I presented a new paper titled “Bancos de imagens em conteúdo nas mídias sociais: entre (in)visibilidade e autenticidade” (“Image banks in social media content: between (in)visibility and authenticity”). The article had two objectives. First, to emphasize the relevance of image banks in the communication production chain, since image banks have been a relatively neglected object in Portuguese-language scholarship. Internationally, they have only recently become a focus of study from the standpoint of representativeness, as in the work Interrogating Vision APIs. Second, the paper discussed concepts such as visibility, invisibility and authenticity in image banks through a collection of anti-cases.

The official abstract of the paper: “This article gathers information and reflections on a specific point in the content production chain for social media: the use of image banks in brand communication. Created at the beginning of the twentieth century, image banks have influenced editorial, advertising and media production as a whole. In recent years, thanks to the microstock model and the spread of social media, sites that sell or license images have gained space and relevance in corporate visual cultures. Drawing on a survey of works and cases related to image banks, the text presents reflections on productive aspects of the use of these collections in professionals’ daily routines and their relation to the concept of authenticity.”

See the slideshow presentation about the article and read the full paper on ResearchGate:

Text Analysis with AntConc for social media data: Keyword Lists and Keyness

The Keyword List tool measures which words are unusually frequent or infrequent in datasets or corpora compared to a reference/benchmark word list (or files).

This post is part of a series of tutorials about AntConc:

  1. Intro, Opening a File and Settings
  2. Word Lists and File Viewer 
  3. Concordancer and Concordance Plot 
  4. Clusters and N-Grams
  5. Collocations 
  6. Keyword Lists

 

Uploading a Keyword List and analyzing keywords / keyness metric
1. As usual, the first step is to open your files/corpora and produce a word list. For this example, we are going to use the file with 16k tweets containing the term ‘Brazil’ (download it).

2. Generate the word list through the Word List tab.

 

3. Basically, the Keyword List tool compares your files/corpora to one or more reference/benchmark files/corpora to highlight which words are unusually frequent or infrequent; in other words, which words are the keywords of your file(s)/corpora.

There are three main ways of using the Keyword List tool:

  • Compare your file(s) to a general corpus representing a national language. Here you can use a reference word list produced from texts representative of that language.
  • Compare your file(s) to past texts or wordlists. This option can be appropriate for social media data, especially to discover new information. For example, you can compare recent texts to a past corpus comprising a 12-month period.
  • Compare texts produced by different social media communities, or texts reacting to different authors/pages (e.g. Facebook comments in different periods).

4. For the sake of simplicity, we’ll run a comparison with the pre-produced BNC wordlist. To use it, download the wordlist from our wordlist folder (or from the BNC website), check Use word list(s), click Add Files to add BNC_WRITTEN_wordlist.txt, then click Load and Apply.

 

5. Now you can discover which terms/words in your dataset are the most “important”, in the sense that they are frequent or infrequent to an unusual degree.
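If you want to reproduce the idea outside AntConc, the sketch below computes a log-likelihood keyness score, one common keyness statistic (this is an illustration, not AntConc’s own code). It assumes a hypothetical tweet file name and that the reference list is in simple “rank frequency word” columns, as in BNC_WRITTEN_wordlist.txt.

```python
import math
import re
from collections import Counter

def load_corpus_freqs(path):
    """Tokenize a raw text file into lowercase words and count them."""
    with open(path, encoding="utf-8") as f:
        tokens = re.findall(r"[a-z']+", f.read().lower())
    return Counter(tokens)

def load_reference_wordlist(path):
    """Read a reference wordlist, assumed to be in 'rank frequency word' columns."""
    freqs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3 and parts[1].isdigit():
                freqs[parts[2].lower()] = int(parts[1])
    return freqs

def log_likelihood(a, b, c, d):
    """Log-likelihood keyness: frequency a in a corpus of c tokens vs frequency b in a reference of d tokens."""
    e1 = c * (a + b) / (c + d)  # expected frequency in the target corpus
    e2 = d * (a + b) / (c + d)  # expected frequency in the reference corpus
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

target = load_corpus_freqs("brazil_16k_tweets.txt")            # hypothetical file name
reference = load_reference_wordlist("BNC_WRITTEN_wordlist.txt")
c, d = sum(target.values()), sum(reference.values())

keyness = {}
for word, a in target.items():
    b = reference.get(word, 0)
    score = log_likelihood(a, b, c, d)
    # positive score = unusually frequent in the tweets; negative = unusually infrequent
    keyness[word] = score if a / c >= b / d else -score

for word, score in sorted(keyness.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    print(f"{word}\t{score:.1f}")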

Most unusually frequent words (positive Keyness):

 

Unusually infrequent words (negative Keyness). Bear in mind that we are analyzing Twitter data, so the low frequency of terms like however or although could be related to tweet formats and length limitations:

 

So, this post concludes our series on AntConc. To learn more about AntConc, text analysis and corpus linguistics:

Text Analysis with AntConc for social media data: Collocations

Collocation refers to how words regularly occur together in texts/corpora. Searching for the collocates of a specific term can point to other words and expressions that are important in the documents.

This post is part of a series of tutorials:

  1. Intro, Opening a File and Settings
  2. Word Lists and File Viewer 
  3. Concordancer and Concordance Plot 
  4. Clusters and N-Grams
  5. Collocations (we are here)
  6. Keyword Lists (soon)

 

Exploring Collocation

1. As usual, open a file or set of files (don’t forget to configure the settings). In this tutorial, we are going to use the file plastic_19k_tweets_june_2018.txt available in our datasets folder.

2. Generate a Word List.

3. Go to the Collocates tab and search for a term like ‘plastic’. The following list ranks the most relevant collocates:

As you can see, most of the collocates are related to “plastic surgery”, not the material plastic.

 

4. A frequent problem is the listing of words which appear only once or a few times in the file(s). To avoid that, you can increase the Minimum Collocate Frequency:

 

5. Collocates are searched within a Window Span, which counts co-occurrences in the vicinity of the search term. You can increase or decrease the span to the left and to the right of the search term.

 

6. Since Twitter texts are very short, we recommend decreasing the span. The results tend to be more precise, as in the following example:

With these results, you can explore the collocates to try to understand and locate meaningful words related to your keywords of interest.
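For readers who like to script these steps, here is a rough sketch of the same exploration (it ranks by raw co-occurrence frequency, not by AntConc’s collocation statistics). It assumes one tweet per line and reuses the dataset file from this tutorial, with a window span and a minimum collocate frequency as in steps 4 and 5.

```python
import re
from collections import Counter

def collocates(path, node, left=2, right=2, min_freq=5):
    """Count words co-occurring with `node` inside a left/right window span (node itself excluded)."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:                       # one tweet per line
            tokens = re.findall(r"[\w#@']+", line.lower())
            for i, tok in enumerate(tokens):
                if tok == node:
                    window = tokens[max(0, i - left):i] + tokens[i + 1:i + 1 + right]
                    counts.update(w for w in window if w != node)
    return [(w, c) for w, c in counts.most_common() if c >= min_freq]

# Smaller spans tend to give more precise results on short tweets (step 6).
for word, freq in collocates("plastic_19k_tweets_june_2018.txt", "plastic", left=1, right=1)[:20]:
    print(word, freq)
```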

Text Analysis with AntConc for social media data: Clusters and N-Grams

In the last post, we learned how to explore and analyze concordances in AntConc. Now we are going to show you how to use AntConc to study clusters and n-grams, and also how to count hashtag frequencies and locate them.

This post is part of a series of tutorials:

  1. Intro, Opening a File and Settings
  2. Word Lists and File Viewer 
  3. Concordancer and Concordance Plot 
  4. Clusters and N-Grams (we are here)
  5. Collocations (soon)
  6. Keyword Lists (soon)

 

Extracting N-Grams from the corpora

An n-gram is a contiguous sequence of items from a text or set of texts. N-grams can be used to analyze phonemes, letters, syllables or words, for example. For our purposes, we are going to see the steps required to generate word n-grams from a given text.
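As a reference point, word n-grams are easy to compute by hand. The minimal sketch below (an illustration, assuming one tweet per line in the dataset file used in this tutorial) extracts and counts n-grams between a minimum and maximum size, mirroring the N-Gram Size options used later in step 5.

```python
import re
from collections import Counter

def ngram_counts(path, n_min=2, n_max=2):
    """Count word n-grams of every size between n_min and n_max (one text per line)."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = re.findall(r"[\w#@']+", line.lower())
            for n in range(n_min, n_max + 1):
                for i in range(len(tokens) - n + 1):
                    counts[" ".join(tokens[i:i + n])] += 1
    return counts

# With n_min=1 and n_max=3 you get 1-grams up to 3-grams, as in step 5 below.
for gram, freq in ngram_counts("plastic_19k_tweets_june_2018.txt", 1, 3).most_common(20):
    print(freq, gram)
```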

1. Firstly, as usual, you need to open a file and generate a Word List. This time, we are going to use a corpus of 19 thousand tweets containing the term ‘plastic’. Download the corpus file with 19k tweets.

2. Don’t forget to apply the customized settings, because we’ll work with #hashtags and @usernames.

3. To use N-Grams, you need to go to the tab Clusters/N-Grams and check the option highlighted on the image:

 

4. Click on Start and here is the result:

 

5. You can change the minimum and maximum values of the N-Gram Size. For example, if you change them to 1 and 3, the result changes and now you can see sequences of words from 1-grams up to 3-grams:

 

6. To analyze specific clusters of words, you can search for them using the Search Term box.

 

7. Mind options like Search Term Position: you can search for clusters with words before or after your terms. Searching for clusters of size 2 with the term plastic and the option On Right, we get the following result.

 

8. Important: the Range column counts the number of corpus files in which that word or term is present. This number is especially interesting when you are comparing corpora.
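The Range number amounts to a simple membership count across files. Here is a minimal sketch of that idea (the file names are placeholders, not files from this tutorial): for each term, it counts how many of the loaded corpus files contain it at least once.

```python
import re

def term_range(term, paths):
    """Count in how many corpus files the term appears at least once."""
    found = 0
    for path in paths:
        with open(path, encoding="utf-8") as f:
            tokens = set(re.findall(r"[\w#@']+", f.read().lower()))
        if term.lower() in tokens:
            found += 1
    return found

# Placeholder list of corpus files loaded into the tool.
files = ["tweets_june_2018.txt", "tweets_july_2018.txt"]
print(term_range("plastic", files))   # e.g. 2 if both files contain the term
```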

 

Counting Hashtags

You can use the Clusters/N-Grams tab to count terms that have some structure, character or word in common, for example #hashtags and @usernames on some social media platforms.

1. Open your file(s). Don’t forget to configure the settings as explained in the tutorial on files and settings.

2. Generate a Word List.

3. To generate a list of hashtags, for example, go to the Clusters/N-Grams tab and use the Search Term ‘#*’, setting both Cluster Size values to 1.

 

Troubleshooting: counting hashtags didn’t work? Probably AntConc is still configured with the default settings, where the ‘#’ symbol is a wildcard. Check!
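The same count can also be done directly with regular expressions, which sidesteps the wildcard issue entirely. The sketch below (assuming one tweet per line in the dataset file used in this series) counts #hashtags and @usernames in the file.

```python
import re
from collections import Counter

def count_tagged_tokens(path, prefix="#"):
    """Count tokens starting with a given prefix, e.g. '#' for hashtags or '@' for usernames."""
    pattern = re.compile(re.escape(prefix) + r"\w+")
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            counts.update(t.lower() for t in pattern.findall(line))
    return counts

hashtags = count_tagged_tokens("plastic_19k_tweets_june_2018.txt", "#")
mentions = count_tagged_tokens("plastic_19k_tweets_june_2018.txt", "@")
print(hashtags.most_common(10))
print(mentions.most_common(10))
```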

Text Analysis with AntConc for social media data: Concordancer and Concordance Plot

In the last posts, we learned the basics of AntConc and how to generate and analyze word lists. Now we are going to show you how to use AntConc to study concordances by navigating instances of a keyword in context.

Don’t forget that this post is part of a series of tutorials:

  1. Intro, Opening a File and Settings
  2. Word Lists and File Viewer 
  3. Concordancer and Concordance Plot (we are here)
  4. Clusters and N-Grams (soon)
  5. Collocations (soon)

 

Concordance basics

1. Open your file(s). In this example, we are going to use a simple case of only two texts: the English Wikipedia pages about the United States and the United Kingdom. You can find them in our datasets folder.

 

2. Generate a Word List.

 

3. Click on the Concordance tab. Now you can search for a letter, word or expression. We searched for the term ‘war’ and, as you can see, we got a total of 107 Concordance Hits. This is the total number of results for that specific search across the two files.

 

4. By default, the search term will be highlighted in blue and up to three terms on the right will be highlighted in red, green and purple. You can change those colors in Global Settings.

These highlights are useful for understanding and sorting the words in the vicinity of our term (or terms) of interest.

 

5. At the bottom, you can change the Kwic (keyword-in-context) settings and choose which items you want to highlight and sort in the Concordance tab. You can select up to three levels. In the following images, 1R stands for “1 Right”, the first item on the right; 1L stands for “1 Left”, the first item on the left; and so on.
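For readers who prefer scripting, here is a minimal KWIC sketch, a rough approximation of what the Concordance tab displays rather than AntConc’s own code. The file names are placeholders for the two Wikipedia page texts; the sketch collects every hit of a search term with its left and right context and sorts the lines by the first three words to the right (1R, 2R, 3R).

```python
import re

def kwic(path, term, width=40):
    """Return (left context, hit, right context) tuples for every occurrence of term."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    lines = []
    for m in re.finditer(r"\b" + re.escape(term) + r"\b", text, flags=re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()].replace("\n", " ")
        right = text[m.end():m.end() + width].replace("\n", " ")
        lines.append((left, m.group(), right))
    return lines

# Placeholder file names for the two pages used in this tutorial.
hits = kwic("united_states.txt", "war") + kwic("united_kingdom.txt", "war")
print(len(hits), "concordance hits")   # total hits across the two files

# Sort by the words at positions 1R, 2R, 3R.
for left, hit, right in sorted(hits, key=lambda l: l[2].lower().split()[:3]):
    print(f"{left:>40} {hit} {right}")
```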

 

Changing the levels to 1L, 2L and 3L will result in something like the following:

 

World War

 

as * as

 

Concordance plot

The concordance plot is a straightforward way to compare two or more corpora, their concordance hits and the distribution of words or phrases across them.


1. It is very simple to use the Concordance Plot. Just generate a Word List (if you haven’t already), search for a word or phrase on the Concordance tab and click on the Concordance Plot tab. Let’s use that same example from before, searching for ‘war’:

 

2. As we can see, the word ‘war’ is much more common on the United States Wikipedia page. What does that mean? It could be an indication of greater interest in war among the editors of the USA Wikipedia page, for example.

 

3. You can increase the zoom of the visualization with the Plot Zoom option:

 

4. You can observe in step 2 that, even though the number of characters in each file is different (126k vs 135k), the rectangles representing the files have the same width. This is because the visualization is normalized to facilitate some comparisons. You can change that by going to Tool Preferences -> Concordance Plot and selecting the option “Use a relative length” in the Plot Length Options.

Now you can see a width that better represents the length of each file and the difference between the files:
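Conceptually, the plot is just the normalized position of each hit inside the file. The sketch below (an illustration with placeholder file names, not AntConc’s implementation) computes those positions and a relative plot width for each file, corresponding to the normalized and relative-length views described above.

```python
import re

def plot_data(path, term):
    """Return the file length in characters and the relative position (0-1) of each hit."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    positions = [m.start() / len(text)
                 for m in re.finditer(r"\b" + re.escape(term) + r"\b", text, flags=re.IGNORECASE)]
    return len(text), positions

files = ["united_states.txt", "united_kingdom.txt"]   # placeholder file names
data = {name: plot_data(name, "war") for name in files}
max_len = max(length for length, _ in data.values())

for name, (length, positions) in data.items():
    rel_width = length / max_len          # relative plot length, as in Tool Preferences
    print(f"{name}: {len(positions)} hits, relative width {rel_width:.2f}")
    print("  positions:", ", ".join(f"{p:.2f}" for p in positions[:10]))
```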

 

Searching for a list of words

1. In the Concordance tool, as well as in other AntConc tools, we can search for a list of words at once. These words can be entered manually or loaded from a txt file.

 

2. In the Advanced Search options, you can check the option “Use search term(s) from list below” and input the words. In the following example, we used the expressions below to also include terms like “economics”, “economically”, “finance”, “financial” etc.

 

3. Alternatively, you can click on “Load File” and upload words from a .txt file.

 

4. With this technique, you can explore concordances for semantically similar words:

 

5. Finally, another option in the Advanced Search specifications is to use Context Words and Horizons. The searched terms are filtered according to their proximity to other words. In the example below, combining terms related to war and conflict with a filter for instances near ‘crisis’ and ‘depression’ could be used to understand the relations between war and economic problems:
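A scripted equivalent of this kind of filter (a rough sketch, not AntConc’s implementation; the file name is a placeholder) keeps only the hits whose surrounding word window contains one of the context words. It also accepts a list of search patterns, in the spirit of the “Use search term(s) from list below” option shown earlier.

```python
import re

def filtered_hits(path, term_patterns, context_words, horizon=5):
    """Keep hits of any search pattern whose +/- horizon word window contains a context word."""
    with open(path, encoding="utf-8") as f:
        tokens = re.findall(r"[\w']+", f.read().lower())
    term_re = re.compile("|".join(term_patterns))
    context = {w.lower() for w in context_words}
    hits = []
    for i, tok in enumerate(tokens):
        if term_re.fullmatch(tok):
            window = tokens[max(0, i - horizon):i] + tokens[i + 1:i + 1 + horizon]
            if context & set(window):
                hits.append(" ".join(tokens[max(0, i - horizon):i + 1 + horizon]))
    return hits

# Terms related to war/conflict, filtered to instances near 'crisis' or 'depression'.
for line in filtered_hits("united_states.txt", [r"wars?", r"conflicts?"], ["crisis", "depression"]):
    print(line)
```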