“Preliminary Analysis of the Preliminaries Project”: More on the Preliminaries Project from David Brown

CulturePlex Laboratory at Western University

A new blog posting by David Brown of the CulturePlex Lab explores the Preliminaries Project in more detail, with a particular focus upon analyzing the project graph:

Currently the first editions list (Duque de Lerma, 1598-1618) consists of 330 editions, of which I have been able to obtain scanned copies of the preliminary sections for 228, approximately 70%, which isn't bad considering that these texts were published 400 years ago. Of these scans, around 120 have been entered into the database, producing a graph with 1612 nodes and 3472 relationships.

In a post that includes a number of visualizations of the database using Gephi, Brown goes on to detail how he used Python "to mimic the filtering abilities of Gephi and create a way to isolate and compare subsets of the graph in order to generate these Publication Networks."
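Brown's actual scripts are in his post; as a rough illustration of what "mimicking Gephi's filters" in Python can look like, the sketch below filters an attributed graph down to an induced subgraph. The graph representation, node attributes, and filter criterion here are all invented for the example and are not taken from the project's code.

```python
# Hypothetical toy graph: nodes carry attributes, edges are pairs of node ids.
# None of these names come from the Preliminaries Project database.
nodes = {
    "Quijote_1605": {"type": "edition", "city": "Madrid"},
    "Cuesta": {"type": "printer", "city": "Madrid"},
    "Robles": {"type": "bookseller", "city": "Madrid"},
    "Lisboa_1605": {"type": "edition", "city": "Lisbon"},
}
edges = [
    ("Quijote_1605", "Cuesta"),
    ("Quijote_1605", "Robles"),
]

def filter_subgraph(nodes, edges, predicate):
    """Keep only nodes whose attributes satisfy `predicate`, plus the
    edges whose endpoints both survive (an induced subgraph, which is
    what a Gephi attribute filter produces)."""
    kept_nodes = {n: attrs for n, attrs in nodes.items() if predicate(attrs)}
    kept_edges = [(u, v) for u, v in edges if u in kept_nodes and v in kept_nodes]
    return kept_nodes, kept_edges

# Isolate the Madrid publication network from the full graph.
madrid_nodes, madrid_edges = filter_subgraph(
    nodes, edges, lambda attrs: attrs.get("city") == "Madrid"
)
print(len(madrid_nodes), len(madrid_edges))  # 3 2
```

Running the same `filter_subgraph` with different predicates on the one master graph is what makes subsets directly comparable, which appears to be the point of Brown's approach.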

David Brown’s blog post, with visualizations and Python code snippets, can be found here.

The Preliminaries Project at Western Arts and Humanities’ CulturePlex Lab


In a new blog posting, David Brown of the CulturePlex Lab at the Faculty of Arts and Humanities at Western University introduces the "Preliminaries Project," an effort to "study the complex social networks involved in the production of Early Modern Spanish literature."

In particular, the "Preliminaries Project" will focus its attention upon "literature published in the European Spanish Empire and its American colonies during the 17th century, a period characterized by an increasingly complex globalized structure that allowed for a comparatively rapid exchange of ideas, goods, and cultural objects between Asia, the Americas, and Europe."

Using the CulturePlex Lab’s own “Sylva” graph database tool to store and manage the information gleaned from a broad and comprehensive survey of the literature, the project will run this data through “visualization and statistical/metric analysis” employing “built-in algorithms and Python based scripting.”

A full discussion of the project and its aims can be found on the project blog, and will be supplemented and expanded by future postings on the project.