The past few years have seen growing interest in Natural Language Processing and other text-mining techniques in the humanities. This tendency was sparked by distant reading approaches in literary theory, which aim to draw general conclusions from large amounts of text using computational techniques. This way of tackling data has gained momentum, and many (large-scale) projects have been set up to meet the expectations of humanities scholars wanting to make sense of the vast amount of digitised data that has entered the public domain — primarily 18th- and 19th- but also 20th-century publications. For some time NLP was used mainly to confirm existing historical knowledge, but now that many techniques and tools have matured, it is time to take interim stock of NLP and text mining in general, and of the mining of serial publications in particular. It is also time to question the phenomenon of ‘scientific serendipity’ and NLP as a set of technologies enabling such serendipity.
From 31 March to 31 May, the Ghent Centre for Digital Humanities will be organising a specialist doctoral course introducing humanities PhD students to the methods and approaches of Digital Humanities. The course is free for all Belgian PhD students in the arts and humanities.
Programme + registration: click here
The Faculty of Arts and Philosophy is looking for a full-time post-doctoral assistant in the area of digital textual analysis, within the Ghent Centre for Digital Humanities. All applications must be received no later than 07/10/2016 at 23:59 (CET) at email@example.com. More information can be found on our website.