Florence4D adopts digital art history approaches to deliver a paradigm shift in how context-based research on art and architecture can be conducted, integrating spatial methodologies, especially mapping and 3D visualisation. Our project focuses on the well-studied city of Florence as a case study, drawing on extensive artworks in UK museum collections and decades of academic research, to offer a new approach based largely on existing data that favours diachronic histories over the previously prevalent linear narratives. Our workflow relies on a research-based modelling process but creates interoperable data and visualisations, ensuring that our outputs address both the research community (in universities and museums) and the wider public.

This website is a work in progress. We're launching with a number of urban-scale datasets and a few case studies to demonstrate the potential of the integrated 3D models and mapping interface. There are various ways to explore these from the homepage, and we'll be building up the content over the coming months.

Images: San Pier Maggiore scanning; Cappella Rucellai scanning.

We're also interested in hearing from researchers who would like to get involved. We work according to Open Access principles, sharing our data and, where possible, the code for some of our work. Future work will build on these methodologies towards full adoption of the IIIF frameworks and Linked Open Data (LOD).
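IIIF interoperability means that image and object data can be described in standard JSON "manifests" that any compliant viewer or tool can consume. As a minimal sketch only (the manifest below is hypothetical and does not represent the project's actual data), here is how such a manifest might be read in Python using the IIIF Presentation 3.0 structure:

```python
import json

# Hypothetical IIIF Presentation 3.0 manifest for illustration only;
# any manifests the project publishes may differ in structure and content.
MANIFEST = json.loads("""
{
  "@context": "http://iiif.io/api/presentation/3/context.json",
  "id": "https://example.org/iiif/san-pier-maggiore/manifest",
  "type": "Manifest",
  "label": {"en": ["San Pier Maggiore (hypothetical example)"]},
  "items": [
    {
      "id": "https://example.org/iiif/san-pier-maggiore/canvas/1",
      "type": "Canvas",
      "label": {"en": ["Nave, laser scan still"]},
      "width": 4000,
      "height": 3000
    }
  ]
}
""")

def canvas_labels(manifest, lang="en"):
    """Return the language-tagged label of each Canvas in a Manifest.

    IIIF v3 labels are language maps: {"en": ["..."]}.
    """
    return [c["label"][lang][0] for c in manifest.get("items", [])]

print(canvas_labels(MANIFEST))  # -> ['Nave, laser scan still']
```

Because the manifest format is a published standard, data exposed this way can be loaded into off-the-shelf viewers rather than tools built only for this project.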

Florence4D is a research collaboration between Prof Fabrizio Nevola (University of Exeter), Dr Donal Cooper (University of Cambridge), and a growing team of researchers, PhD students and student interns (read more here). Our project has been generously funded by the Getty Foundation, with additional support from the Arts and Humanities Research Council, the University of Exeter and the University of Cambridge. Partnerships with major museums, as well as with the digital creative industries, are key to our approach, and you can discover more about these partnerships here.


This work is licensed under a Creative Commons Attribution 2.0 Generic License.