Tag: codingdurer

  • Thank you very much!

    Thank you very much!

    Coding Dürer was once a dream. Then it became a third-party funded project, then an organizational knot to untangle. And then it came true.

    Setting the sweat aside, I am very, very happy about the process and outcome of Coding Dürer. Plus: nothing went seriously wrong. A project like this would not be possible without the help of many.

    Here is a (probably still incomplete) list of the people I would wholeheartedly like to thank:

     

    The Volkswagen Foundation and particularly Vera Szöllösi-Brenig for making this event financially possible. All food, drinks, trips and rooms are on them! We are indebted to them and will report back on the results of this event.

    Natalia Karbasova and Patrick Müller from Hubert Burda Bootcamp for kindly hosting us and helping out in many situations.

    Sonja Gasser for helping me secure the funding from the Volkswagen Foundation and in particular for designing the great Coding Dürer logo.

    Philipp Hartmann for the logistics of food and technology and in particular for reliably being at the right place at the right time.

    Nuria Rodríguez Ortega, Anna Bentkowska-Kafel, Lev Manovich, Justin Underhill and Mario Klingemann for their friendship, inspiration and belief in my pursuits.

    Hubertus Kohle for giving me the opportunity and freedom to engage in those pursuits.

    Liska Surkemper from the International Journal for Digital Art History for her wonderful support.

    Christian Waha, from Industrial Holographics, for kindly providing the HoloLens. We will never forget.

    Christian Soellner and Florian Thurnwald from Microsoft for making their visit possible on very short notice.

    Douglas McCarthy and Barbara Fischer for their invaluable advice and experience and in particular for placing this event in a wider context.

    The many data providers, in particular those who contributed to our list and blog. You are our partners.

    Everyone following and contributing to #CodingDurer on Twitter. You have been an integral part of this event.

    And last, but not least, all participants. You have been working hard and made this event the success it is.

     

  • Tracing Picasso

    Tracing Picasso

    Tracing Picasso is a project that aims to analyse and understand the Picasso Phenomenon

    Picasso’s artworks are present throughout the greatest art collections and museums today. We want to retrace the path of these artworks (provenance) as well as their institutional reception, both in Europe and the US, in order to get a better understanding of this global phenomenon. Here are a few of our research questions:

    • When were Picasso’s works acquired by the various institutions?
    • Are there peaks or patterns that can be identified?
    • Which works sparked interest at which times?
    • Through which routes did they spread from their place of creation to their current collection?

    The data set from American museums is very rich, and we have therefore decided to produce several types of visualisations to explore it. Data from European museums (especially in Spain and France) is not as easily accessible and reusable. The amount of labor required to retrieve the data from European museums, together with copyright issues, had a major impact on the results we are presenting today.

    The time-map

    This interactive visualisation aims to show the migrations of Picasso’s artworks throughout the world. The spatial dimension highlights not only his personal travels (where the artworks were created) but also those of his artworks (where they were and currently are). The chronological dimension allows each work to travel from one location to the next. It also shows the growing size of the various collections as the artworks are acquired by the institutions.

    Geo-spatial data viz made using Leaflet.js and our own code for the timeline and playback functionality

    Demo (tested with Chrome): https://ilokhov.github.io/picasso

    Code: https://github.com/ilokhov/picasso

    Basic features and functionalities for the map/time visualisation:

    • displaying the location of the objects
    • temporal dimension: showing location at a certain time
    • visualise the movement of the objects
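
    To make the temporal filtering concrete, here is a minimal, hypothetical sketch of how such a map/time view could be wired up with Leaflet.js. It is not the project’s actual code (see the GitHub link above); the artwork records, the year values and the #map element are assumptions for illustration.

        // Minimal sketch, not the project's code: plot artwork locations with
        // Leaflet.js and show only the works already acquired by a given year.
        // Assumes the Leaflet script is loaded (global L) and a <div id="map"> exists.
        const map = L.map('map').setView([48.8566, 2.3522], 3); // start centred on Paris
        L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
          attribution: '&copy; OpenStreetMap contributors'
        }).addTo(map);

        // Hypothetical records: one entry per artwork location with an acquisition year.
        const artworks = [
          { title: 'Example work A', lat: 40.7794, lon: -73.9632, year: 1939 },
          { title: 'Example work B', lat: 40.7614, lon: -73.9776, year: 1946 }
        ];

        const markers = L.layerGroup().addTo(map);

        // Temporal dimension: re-render the markers for a given point in time.
        function showYear(year) {
          markers.clearLayers();
          artworks
            .filter(a => a.year <= year)
            .forEach(a => markers.addLayer(
              L.marker([a.lat, a.lon]).bindPopup(`${a.title} (${a.year})`)
            ));
        }

        showYear(1950); // in the real visualisation this is driven by the timeline/playback controls

    Movement of the objects could then be drawn with L.polyline between consecutive provenance stations, and showYear hooked up to a slider or a playback loop.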

    The graphs

    These visual representations of our data set provide more detailed information on acquisition trends. The graphs reveal certain clusters and peaks that require further analysis, prompting reflection and creating the need to supplement our data.

    MET Museum Data Viz using app.rawgraphs.io

    A few preliminary results:

    • A general observation in the data: there are many more gifts than purchases in the direct provenance of the works.
    • These donations cover certain periods of Picasso’s production and, interestingly, complete the museum’s collection.

    MoMA Data Viz using app.rawgraphs.io
    Graph comparing acquisition patterns of MoMA vs. the Metropolitan Museum. Created with app.rawgraphs.io

    The data

    We have so far normalised and merged data from UK and US open access collections.

    To use the data, we required several attributes, such as the creation date, the acquisition date and the current collection. We would also like to complete the provenance (all locations and dates for each individual artwork) and have begun to do so.

    Data processing and clean up

    • OpenRefine (http://openrefine.org/)
    • Excel
    • Access to the cleaned-up data will be provided on GitHub
    • Data Structure and interchange format for further use in visualisation: JSON
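
    As an illustration of that interchange format, a single artwork record might look roughly like the sketch below; the field names are our assumption, not the project’s published schema.

        // Hypothetical example of one artwork record in the JSON interchange format,
        // covering the attributes mentioned above (creation date, acquisition date,
        // current collection) plus a provenance list. Field names are illustrative only.
        const exampleRecord = {
          id: "picasso-0001",
          title: "Example still life",
          creationDate: "1912",
          acquisitionDate: "1946",
          currentCollection: "Example Museum, New York",
          provenance: [
            { place: "Paris",    lat: 48.8566, lon: 2.3522,   from: "1912", to: "1916" },
            { place: "New York", lat: 40.7614, lon: -73.9776, from: "1946", to: null }
          ]
        };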

    Data structure

    Our project might be extended so that a bigger dataset could be used. In that case, we would put the extended data on a server in a relational DBMS (here the open-source MariaDB). Then, by means of SQL, queries like “which artworks were produced after two particular artists met” or “where were the centers of art dealers’ activity after WW2” can be answered. The query result, which is originally in a table format, is then transformed into a JSON structure in PHP and passed to the visualisation tool. Here is the relational schema we propose:

    MariaDB relational schema
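
    The project performs the table-to-JSON step in PHP; purely as an illustration of the idea, the sketch below (in JavaScript) groups flat query rows, one per provenance station, into one nested object per artwork before handing them to the visualisation.

        // Illustrative only: group flat database rows (one row per provenance station)
        // into one nested JSON object per artwork.
        const rows = [
          { artworkId: 1, title: "Example work", place: "Paris",    year: 1912 },
          { artworkId: 1, title: "Example work", place: "New York", year: 1946 }
        ];

        const byArtwork = {};
        for (const row of rows) {
          if (!byArtwork[row.artworkId]) {
            byArtwork[row.artworkId] = { id: row.artworkId, title: row.title, provenance: [] };
          }
          byArtwork[row.artworkId].provenance.push({ place: row.place, year: row.year });
        }

        // The resulting array is what the visualisation tool would consume.
        console.log(JSON.stringify(Object.values(byArtwork), null, 2));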

    Tools for visualisation we evaluated:

    We concluded that there is no “out of the box” solution for the time/map visualisation.

    • Problems we had with some libraries and tools:
      • not versatile and generic enough
      • too complicated or not well documented enough for easy reuse
      • do not offer sufficient functionality and have to be extended

    In our case, it was easier to implement the functionality ourselves.

    Implementation of the geotemporal visualisation:

    Issues encountered:

    • Data
      • Copyright and licensing issues
      • Most data creators, aggregators and projects don’t share data
      • Most databases don’t offer complete data dumps
      • Not all online data is available at the open data repositories
        • e.g. Tate shows all works online, but the open dataset on GitHub does not include loans -> mismatch
        • e.g. the Met shows provenance on its website but did not include this information in the open dataset
      • Republishing of merged dataset is problematic, as not all data sources share the same license
      • Most open data is not documented properly – call for paradata
      • Most datasets contain complex fields, which summarize lots of information
        • call for reconciliation
      • Location of owner ≠ location of artwork, so we simplified it.
    • Tools
      • SPARQL endpoints:
        • SPARQL offers lots of flexibility, but requires extensive knowledge of the underlying data model
        • Many different data models exist, so queries can’t be reused
          • Wikidata – Wikibase model
          • Europeana – EDM
          • British Museum – CIDOC-CRM
        • endpoints are not stable
      • OpenRefine extensions:
        • Manual installations necessary
        • No standard reconciliation services are pre-configured
      • JS libraries
        • Many libraries out there – difficult to check which one could be used to implement the desired result (takes some time to evaluate them)
      • Data structure
        • Coming up with a good data structure is tricky
      • Data import
        • Problems fitting data from spreadsheets into required JSON format

    Participants

  • Project Groups (5) – Dutch Church Interior Paintings

    Project Groups (5) – Dutch Church Interior Paintings

    [The following text is written by the project group “Dutch Church Interior Paintings”. You will find more information on their project soon on their website, which will be linked here.]

    The genre of church interior paintings developed in the Netherlands in the middle of the 17th century and lasted only a few decades. It is represented by a relatively small group of specialized artists, such as Pieter Jansz Saenredam (1597-1665), Emanuel de Witte (1616-1692), Hendrick Cornelisz Van Vliet (1611-1675), Gerard Houckgeest (ca. 1600–1661), Anthonie De Lorme (c. 1610-1673) and others. In many cases the same church interior was depicted by the same artists dozens of times; however, the iconography, composition and vantage point (the position from which the interior is viewed) varied. One of the main factors in the development of this type of painting was the Reformation and its consequences, particularly the Calvinist approach to art. The so-called Beeldenstorm of 1566, a series of events during which churches were plundered and their Catholic decorations removed or destroyed, was the starting point of this far-reaching transformation of church interiors in the Netherlands. The churches became civic spaces filled with everyday activities, no longer restricted exclusively to the preaching of God’s word. The altars, statues and other decorative elements were replaced by white-washed walls and simple panels filled with biblical excerpts instead of representations of saints and miracles. This is reflected in the church interior paintings, where we can see, for example, a woman breastfeeding, children at play, groups of gentlemen discussing business, couples strolling down the aisles, beggars and even dogs urinating. The latter was perhaps the strongest symbol of this transition of the church as a building: from a holy temple to a civic, urban and mundane space.
    There are hundreds of church interior paintings scattered across collections around the world. Research on this subject to date has focused mainly on particular artists or churches, rather than on the overall genre and its network of artists and places. This project, born at Coding Dürer 2017, addresses this issue by providing a platform for further research on the paintings and by offering, for the first time, insight into the bigger picture of the genre. This visualisation of over 200 paintings of 26 different churches by 16 different artists was created with the following research questions in mind:

    • In which places were the artists active, and in which places did they depict church interiors?
    • Did the artists have ‘favourite’ church interiors?
    • Where and when could the artists possibly have met?
    • Which church interiors were depicted most often?
    • Which church interiors were depicted by the most artists?

     

    DATASET

    The starting point of the project was a spreadsheet listing the paintings, artists, collections, etc. that was created for research purposes two years ago. This re-purposed data needed cleaning and additional information, e.g. IDs (for artists, churches and paintings), locations (longitude and latitude) and stable URLs for images.
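
    For illustration, a single cleaned record could look like the sketch below; the field names and values are hypothetical, not the group’s actual spreadsheet columns.

        // Hypothetical cleaned record combining the IDs, locations and image URL
        // mentioned above (all values are invented for illustration).
        const painting = {
          paintingId: "p001",
          title: "Interior of a Dutch church",
          artistId: "a003",
          artist: "Emanuel de Witte",
          churchId: "c002",
          church: "Oude Kerk, Delft",
          lat: 52.0125,   // approximate church location (latitude/longitude)
          lon: 4.3557,
          collection: "Example collection",
          imageUrl: "https://example.org/images/p001.jpg" // stable image URL (placeholder)
        };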

     

    GOAL

    To create a map/visualisation that shows:

    1. Dutch churches depicted in the paintings (25)
    2. Artists’ activity (16+)

    TOOLS

     

  • Project Groups (4) – Meta Data Group

    Project Groups (4) – Meta Data Group

    The topic of visualization is quite popular at Coding Dürer. We have already seen an approach to visualizing the interactions of photographers with an artwork, as well as an attempt to show how the work of an artist moves around the world over time. The “meta data group” is engaged in a project that relates to the person who gave the hackathon its name: Albrecht Dürer. The group wants to show to whom and how the artist was related. By creating a graphical plot, they want to answer the question of the artist’s relationship to his contemporaries in a way that is intuitive and easy to understand. The main challenge the team faces is finding data that fits their research question. ULAN, the Union List of Artist Names from the Getty Research Institute, might offer a solution, as its data is organized as a network with relationship categories like “assistant” or “teacher”, which the team uses to recreate a network.

    Screenshot of ULAN data (a standardized list of artists’ names)
    The data that ULAN provides (as well as data from online research) can be visualized with the help of WebVOWL and Gephi.
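
    A minimal sketch of that idea (our illustration, not the group’s code) is to turn the relationship categories into a node list and a typed edge list, which tools such as Gephi can import, for example as a CSV edge list. The identifiers below are placeholders, not real ULAN IDs.

        // Represent ULAN-style relationships as nodes and typed edges.
        const nodes = [
          { id: "artist-1", label: "Albrecht Dürer" },
          { id: "artist-2", label: "Example contemporary" }
        ];

        const edges = [
          // relationship type drawn from a ULAN-like category, e.g. "teacher of"
          { source: "artist-2", target: "artist-1", type: "teacher of" }
        ];

        // Export a simple CSV edge list, a format Gephi can import.
        const csv = ["Source,Target,Type"]
          .concat(edges.map(e => `${e.source},${e.target},${e.type}`))
          .join("\n");
        console.log(csv);
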
  • A moment to report

    A moment to report

    It is quiet today. A few voices and keyboard tapping. Today is working time.

    Since it is also quieter for me, let me report what we have done so far.

    On Monday, we had a lively discussion on the subject of data, data analysis and data visualization in the context of art history. Many aspects came up, from the truthfulness of data to the necessity of cleaning and the viewpoint of the end user. The question was raised how art historians and information scientists can work together even if there is this perceived gap. That gap consists of different approaches, ways of thinking and even the concepts associated with particular terms. It was agreed, however, that we have to be the agents of change we want to see. This is such a diverse group of people from different backgrounds that the fruitfulness of interdisciplinary collaboration (the flip side of that coin) can probably be harvested nowhere better than here.

    Solutions were also proposed for how to bridge that gap:

    1. It needs time to work together.
    2. It needs communication, including visual communication (flipcharts are available).
    3. It needs translators, who can bring the fields together.
    4. It needs a shared vision. If everyone knows the goal, it is easier to take the first step.
    5. It needs an interdisciplinary mind-set of openness and cognitive flexibility.

    Are there more elements that you think are important? What are your experiences? Let me know in the comments below or via Twitter @HxxxKxxx.

    We then talked about data sources, and on Tuesday we gathered a list of tools. After a Post-it wall of ideas, we formed eight project teams that started working together. Here is the list of preliminary group names:

    1. Church interiors
    2. Group One (later renamed Picasso Group)
    3. The Americans
    4. The Associatives
    5. Metadata Group
    6. Image Similarity Group
    7. Generative Machine Learning Group
    8. Chatbot for Exhibitions

    Additional input came from contextualising Lunch Talks by Nuria Rodríguez Ortega and Anna Bentkowska-Kafel and the very inspiring Evening Talk with Lev Manovich. The Lightning Talks, where everyone had the chance to present their home project, also showed what a fantastic group has come together here.

    All participants are now highly active, discussing and gesturing in front of displays. That is wonderful to watch… Today we are looking forward to the Lunch Talk by Justin Underhill (UC Berkeley) and tomorrow to the one by Mario Klingemann (Google Fellow). On Friday we will be presenting the results at a public event in the Department of Art History.

    You can follow those parts of the event via live streaming; past lectures are also archived. You can also follow us on the Twitter hashtag #CodingDurer, which is populated with many tweets, not only from participants. There you can contribute, join the conversation and bring forward your own projects and ideas. We also try to keep you up to date on our blog, so have a look at it from time to time.

    We would like the global network to be part of the event and to interweave its talents into our group.

    You can also see an overview of Day 1 and Day 2 on Twitter.

    That’s it for the moment from me.

     

  • Project Groups (3) – Tracing Picasso

    Project Groups (3) – Tracing Picasso

    Photo by @airun72

    Throughout his life, Picasso created a huge body of work, including paintings, drawings and sculptures, that has travelled around the world. It seems impossible to grasp how and where the objects moved. One project group at Coding Dürer is trying to solve this problem and help us understand the provenance of Picasso’s work by using digital tools. They use OpenRefine to handle the metadata provided by the Met Museum and MoMA. D3 offers great timeline libraries for visualizing time and place. Combined with information about Picasso’s life and exhibitions, their interactive tool can show us how Pablo and his work moved through time.

  • Project Groups (2) – Albot

    Project Groups (2) – Albot

    Photo from Wikimedia

    You’re at a museum and want to find out more about an artwork you like? Then just ask Albot, the art history chatbot. He will access the museum’s metadata for you and answer simple questions about the artwork, like: Who’s the artist? What’s the title? Which people are depicted? At least that’s the vision of one of the project groups at Coding Dürer. They are starting with Albrecht Dürer’s “Allerheiligenbild” and trying to formulate questions. By extracting keywords, Albot can understand questions and find answers. The team is still trying to figure out which chatbot framework to use; Dexter and the Microsoft Bot Framework seem to offer great solutions.
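
    As a toy illustration of the keyword idea (not the group’s implementation, and independent of whichever framework they choose), matching question words against the fields of a metadata record might look like this:

        // Toy sketch: map keywords in a visitor's question to fields of a museum
        // metadata record. Values and keyword lists are invented for illustration.
        const artwork = {
          artist: "Albrecht Dürer",
          title: "Allerheiligenbild (Adoration of the Trinity)",
          date: "1511"
        };

        const triggers = {
          artist: ["artist", "painter", "who"],
          title: ["title", "called", "name"],
          date: ["when", "year", "date"]
        };

        function answer(question) {
          const words = question.toLowerCase().split(/\W+/);
          for (const [field, keywords] of Object.entries(triggers)) {
            if (keywords.some(k => words.includes(k))) {
              return artwork[field];
            }
          }
          return "Sorry, I do not know that yet.";
        }

        console.log(answer("Who is the artist?")); // -> "Albrecht Dürer"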

  • Yale Center For British Art—Data Source Description

    Yale Center For British Art—Data Source Description

    JMW Turner, Inverary Pier, Loch Fyne: Morning

    The YCBA (@YaleBritishArt) has been sharing high-resolution images of its collection objects in the public domain since Yale University adopted its Open Access Policy in 2011, and today about 71,000 such images are available for download free of charge, including for commercial usage: http://britishart.yale.edu/collections/search

    The YCBA also makes its images available as IIIF assets. We publish a top-level collection that contains child collections for paintings, sculpture, etc. These collections contain the IIIF Manifests for each object.
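
    To illustrate how such a hierarchy can be consumed, the sketch below walks a IIIF Presentation 2.x collection and lists the manifests it references; the URL is a placeholder, not the YCBA’s actual endpoint.

        // Illustrative sketch: recursively walk a IIIF Presentation 2.x collection
        // and print the object-level manifests. The URL is a placeholder.
        const topCollection = "https://example.org/iiif/collection/top";

        async function listManifests(url) {
          const res = await fetch(url);
          const collection = await res.json();
          // 2.x collections list child collections under "collections"
          // and object-level manifests under "manifests".
          for (const child of collection.collections || []) {
            await listManifests(child["@id"]);
          }
          for (const manifest of collection.manifests || []) {
            console.log(manifest.label, manifest["@id"]);
          }
        }

        listManifests(topCollection);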

    Machine-readable YCBA data can currently be accessed by harvesting XML metadata (LIDO XML) and by querying a Linked Open Data semantic endpoint (where the data is organized with the CIDOC CRM ontology). Access to or use of the Center’s data and services is subject to the Center’s Open Data And Data Services Terms of Use.

  • Albertina, Vienna—Data Source Description

    Albertina logo

    The Albertina safeguards one of the most important and extensive graphic art collections in the world. It comprises around 50,000 drawings and watercolours, as well as some 900,000 graphic art works, ranging from the Late Gothic era to the present.

    The arc of exquisite works stretches from Leonardo da Vinci, Michelangelo Buonarroti and Raphael Santi through Albrecht Dürer, Peter Paul Rubens and Rembrandt Harmensz van Rijn to Claude Lorrain, Honoré Fragonard and Paul Cézanne. In the modern section, the holdings range across Egon Schiele, Gustav Klimt and Oskar Kokoschka via Pablo Picasso and Jackson Pollock to Robert Rauschenberg, Andy Warhol, Alex Katz, and finally to Franz Gertsch, Georg Baselitz and Anselm Kiefer.

    The Albertina publishes a wide range of its works that are free of artists’ copyright in the Europeana collection. For Coding Dürer, the Albertina is providing metadata for all of its artworks that are published in the Europeana collection. These datasets are placed in the public domain using a CC0 licence.
    Images are not included and are not part of the dataset.

    There are about 58,000 objects published in Europeana, among them about 40,000 drawings and prints from the Graphische Sammlung, 9,000 objects from the Fotosammlung, 5,500 objects from the Architektursammlung, 3,700 objects from the Plakatsammlung and some objects from the Gemälde- und Skulpturensammlung.

    The Albertina provides the following metadata for each work of art: title, creator, classification type, medium, size, creation date, provenance, identifier (= inventory number), institution, providing country and collection (part of: Graphische Sammlung, Fotosammlung, Architektursammlung, Plakatsammlung or Gemälde- und Skulpturensammlung).
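
    Purely as an illustration of these fields, one record might look like the sketch below; the values are invented and the property names paraphrase the description above rather than the dataset’s exact column names.

        // Hypothetical Albertina metadata record with the fields listed above.
        const record = {
          title: "Example drawing",
          creator: "Albrecht Dürer",
          classificationType: "drawing",
          medium: "pen and ink on paper",
          size: "20 x 15 cm",
          creationDate: "1505",
          provenance: "Example provenance note",
          identifier: "Inv. 0000",          // inventory number
          institution: "Albertina",
          providingCountry: "Austria",
          collection: "Graphische Sammlung" // one of the five collections listed above
        };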

  • DAC Open Access Images—Data Source Description

    The Davison Art Center (DAC) at Wesleyan University in Connecticut (United States) holds more than 25,000 works on paper, chiefly prints and photographs. The DAC collection serves teaching, study, research, exhibition, and other educational purposes. This includes public sharing of high-resolution images of collection objects which are themselves free of copyright. These images have been provided in growing numbers since 2012 as DAC Open Access Images, which may be freely discovered and downloaded via DAC Collection Search.

    DAC Collection Search offers text-based catalog records for nearly the entire collection, along with (to date) 4,590 downloadable DAC Open Access Images representing most of the DAC’s European prints from the 16th through 19th centuries. High-resolution, zoomable images of those 4,590 prints are also available for viewing online. A shortcut relevant to Coding Dürer leads directly to links to all DAC Dürer holdings with images.

    Each DAC Open Access Image is provided for free public download and use in two versions: a publication-quality TIFF (4,096 pixels long dimension) and a presentation-ready JPEG (1,024 pixels). A ReadMe offers technical guidance for image users. These images may be freely used under the DAC Open Access Images policy, which applies to DAC images that have no known copyright restrictions. Please see that policy for details.

    DAC cataloging metadata for these images (as well as for other collection holdings) may be freely downloaded from the same DAC Collection Search pages in two forms: structured LIDO XML and a basic, human-readable text caption in English. In order to make it as useful as possible for projects working across multiple collections, this metadata is provided under the Creative Commons CC0 1.0 Universal (public domain dedication) license.

    Most of the images of British, Dutch, and German prints (and thus, the Dürer images) were made in 2015 or 2016 during the first two of three summers of grant-funded digital photography of DAC collection objects. This digitization project was made possible in part by the U.S. Institute of Museum and Library Services (IMLS).

    Development of DAC Collection Search is ongoing. It may be offline on occasion for updates and improvements between 5:00 and 7:00 PM Eastern time (GMT -5:00h or -4:00h, depending on season).

    #musetech #museweb #opencontent #openglam #codingdurer #digitalarthistory @wesleyan_u @roblancefield