IOSACal is an open source program for calibration of radiocarbon dates.
A few days ago I released version 0.4, which can be installed from PyPI or from source. The documentation and website are at http://c14.iosa.it/ as usual. You will need to have Python 3 already installed.
The main highlights of this release are the new classes for summed probability distributions (SPD) and paleodemography, contributed by Mario Gutiérrez-Roig as part of his work for the PALEODEM project at IPHES.
A bug affecting calibrated date ranges extending to the present was corrected.
On the technical side the most notable changes are the following:
requires NumPy 1.14, SciPy 1.1 and Matplotlib 2.2
removed dependencies on obsolete functions
improved the command line interface
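The idea behind a summed probability distribution is simple enough to sketch in a few lines of plain Python. This is not the IOSACal API, just a toy illustration of the concept, with made-up, already-normalised calibration curves:

```python
# Toy illustration of a summed probability distribution (SPD): each
# calibrated date is a probability curve over calendar years, and the
# SPD is the element-wise sum of those curves. Not the IOSACal API.

def spd(curves):
    """Sum a list of {calendar_year: probability} curves."""
    total = {}
    for curve in curves:
        for year, p in curve.items():
            total[year] = total.get(year, 0.0) + p
    return total

# Two fake, already-normalised calibrated dates (hypothetical values)
date_a = {-1200: 0.2, -1190: 0.5, -1180: 0.3}
date_b = {-1190: 0.4, -1180: 0.4, -1170: 0.2}

result = spd([date_a, date_b])
```

IOSACal works on NumPy arrays sampled on the calibration curve grid rather than on dicts, but the summing step is the heart of the method.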
You can cite IOSACal in your work with the DOI https://doi.org/10.5281/zenodo.630455. This helps the author and contributors to get some recognition for creating and maintaining this software free for everyone.
Last week a tweet from the always brilliant Jolene Smith inspired me to write down my thoughts and ideas about numbering boxes of archaeological finds. For me, this also includes thinking about physical labelling and barcodes.
Question for people who organize things for their job. I'm giving a few thousand boxes unique IDs. should I go random or sequential?
The question Jolene asks is: should I use sequential or random numbering? Many answered: use sequential numbering, because it bears significance and can help detect problems like missing items, duplicates, etc. Furthermore, if the number of items you need to number is small (say, a few thousand), sequential numbering is much more readable than a random sequence. Like many other archaeologists faced with managing boxes of items, I have chosen sequential numbering in the past. With 200 boxes and counting, labels were easily generated and each box had an associated web page listing its content, with a QR code providing a handy link from the physical label to the digital record.

This numbering system was put in place during 3 years of fieldwork in Gortyna, and I can say that I learned a few things in the process. The most important is that it’s very rare to start from scratch with the correct approach: boxes were labeled with a description of their content for 10 years before I adopted the numbering system pictured here. This sometimes resulted in absurdly long labels, easily at risk of being damaged, and difficult to search, since no digital record was made. I decided a numbering system was needed because it was difficult to look for specific items, after I had digitised all labels with their position in the storage building (this often implied the need to number shelves, corridors, etc.). The next logical step was therefore to decouple the labels from the content listing ‒ any digital tool was good here, even a spreadsheet.
Decoupling the box number from the description of its content made it possible to manage the not-so-rare case of items moved from one box to another (after conservation, or because a single stratigraphic context was excavated in multiple steps, or because a fragile item needs more space …), and the other frequent case of data that is augmented progressively (at first you put finds from stratigraphic unit 324 in a box, then you add 4.5 kg of Byzantine amphorae, 78 sherds of cooking jars, etc.). Since we already had a wiki as our knowledge base, it made sense to use that, creating a page for each box and linking from the page of the stratigraphic unit, or that of the single item, to the box page (this is done with Semantic MediaWiki, but it doesn’t matter).

Having a URL for each box, I could put a QR code on labels: the updated information about the box content was in one place (the wiki) and could be reached either via QR code or by manually looking up the box number. I don’t remember the details of my reasoning at the time, but I’m happy I didn’t choose to store the description directly inside the QR code ‒ so that scanning the barcode would immediately show a textual description instead of redirecting to the wiki ‒ because that would require changing the QR code on each update (highly impractical) and still leave the information unsearchable. All this is properly documented and nothing is left implicit. Sometimes you will need to use larger boxes, or smaller ones, or have items so big that they can’t be stored inside any container: you can still treat all of these as conceptual boxes ‒ number them, label them, give them URLs.
There are limitations in the numbering/labelling system described above. The worst is that in the same building (sometimes on the same shelf) there are boxes from other excavation projects that don’t follow this system at all, and either have a separate numbering sequence or no numbering at all. Hence the “namespacing” of labels with the GQB prefix, so that a box is effectively called GQB 138 and not just 138. I think an efficient numbering system would be one applied at least at the scale of one storage building, but why stop there?
Turning back to the initial question, what kind of numbering should we use? When I started working at the Soprintendenza in Liguria, I was faced with the result of no fewer than 70 years of work, first in Ventimiglia and then in Genoa. In Ventimiglia, each excavation area got its own “namespace” (like T for the Roman theater) followed by sequential numbering of finds (leading to items identified as T56789), but a single continuous sequence was used for numbering the boxes in the main storage building. A second, newer building was unfortunately assigned a separate sequence starting again from 1 (and insufficient namespacing). In Genoa, I found almost no numbering at all, despite (or perhaps because of) the huge number of unrelated excavations that contributed to a massive amount of boxes. Across the region there are some 50 other buildings, large and small, with boxes that should be recorded and accounted for by the Soprintendenza (especially since most archaeological finds are State property in Italy). Some buildings have a numbering sequence; most have paper registries and nothing else.

Sequential numbering seems transparent (and allows some neat tricks like the German tank problem), since you could potentially have an ordered list and look up each number manually, which you can’t do easily with a random number. You also get the impression of being able to track gaps in a sequence (yes, I do look for gaps in numeric sequences all the time), thus spotting any missing item. Unfortunately, I have been bitten too many times by sequential numbers that turned out to have horrible bis suffixes, or that were only applied to “standard” boxes, leaving out oversized items.
On the other hand, the advantages of random numbering seem to increase linearly with the number of separate facilities ‒ I could replace random with non-transparent to better explain the concept. A good way to look at the problem is perhaps to ask whether numbering boxes is done as part of a bookkeeping activity that has its roots in paper registries, or whether it is functional to the logistics of managing cultural heritage items in a modern and efficient way.
Logistics. Do FedEx, UPS or Amazon employees care what number sequence they use to track items? Does the cashier at the supermarket care whether the EAN barcode on your shopping items is sequential? I don’t know, but I do know that they have a very efficient system in place, in which human operators are never required to actually read numerical IDs (though humans are still capable of checking whether the number on the screen is the same as the one printed on the label). There are many types of barcodes used to track items, both 1D and 2D, all with their pros and cons. I also know of some successful experiments with RFID for archaeological storage boxes (in the beautiful depots at Ostia, for example), with tags that can store numbers of up to 38 digits.
Based on all the reflections of the past years, my idea for a region- or state-wide numbering+labeling system is as follows (in RFC-style wording):
it MUST use a barcode as the primary means of reading the numerical ID from the box label
the label MUST contain both the barcode and the barcode content as human-readable text
it SHOULD use a random numeric sequence
it MUST use a fixed-length string of numbers
it MUST avoid the use of any suffixes like a, b, bis
In practice, I would like to use UUID4 together with a barcode.
A UUID4 looks like this: 1b08bcde-830f-4afd-bdef-18ba918a1b32. It is the UUID version of a random number, it can be generated rather easily, works well with barcodes and has a collision probability that is compatible with the scale I’m concerned with ‒ incidentally I think it’s lower than the probability of human error in assigning a number or writing it down with a pencil or a keyboard. The label will contain the UUID string as text, and the barcode. There will be no explicit URL in the barcode, and any direct link to a data management system will be handled by the same application used to read the barcode (that is, a mobile app with an embedded barcode reader). The data management system will use UUID as part of the URL associated with each box. You can prepare labels beforehand and apply them to boxes afterwards, recording all the UUIDs as you attach the labels to the boxes. It doesn’t sound straightforward, but in practice it is.
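Python’s standard library makes generating such an identifier trivial; here is a minimal sketch (the exact label layout with the GQB prefix is just my example, not a standard):

```python
import uuid

# A random (version 4) UUID for a new box label. The "GQB" namespace
# prefix follows the labelling convention described above; the layout
# of the label itself is a hypothetical example.
box_id = uuid.uuid4()
label_text = f"GQB {box_id}"

# The string form is always 36 characters: fixed length, no suffixes,
# exactly what the requirements above ask for.
print(label_text)
```

Printing a batch of these onto a sheet of barcode labels is a short step from here, and at this scale collisions are not a practical concern.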
And since we’re deep down the rabbit hole, why stop at the boxes? Let’s recall some of the issues that I described non-linearly above:
the content of boxes is not immutable: one day item X is in box Y, the next day it gets moved to box Z
the location of boxes is not immutable: one day box Y is in room A of building B, the next day it gets moved to room C of building D
both #1 and #2 can and will occur in bulk, not only as discrete events
The same UUIDs can be applied in both directions to describe the location of each item in a large bottom-up tree structure (add as many levels as you see fit, such as shelf rows and columns). And since we would have already built our hypothetical data management system, this data is filled into the system just by scanning two barcodes on a mobile device that syncs as soon as a connection is available. Moving one box to another shelf is again a single operation, no matter how many items are actually moved, because the leaves and branches of the data tree are naïve: each node only knows about its parents and children, and nothing about grandparents and siblings.
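A minimal sketch of that data tree, assuming nothing more than a parent pointer per UUID (my own toy model, not an existing system):

```python
import uuid

# Each node (item, box, shelf, room ...) only records its parent.
parent = {}  # child UUID -> parent UUID

item, box, shelf_a, shelf_b = (uuid.uuid4() for _ in range(4))
parent[item] = box      # scan two barcodes: the item goes into the box
parent[box] = shelf_a   # scan two barcodes: the box sits on shelf A

# Moving the box to another shelf is a single update; the item inside
# follows automatically, because it only knows about its parent box.
parent[box] = shelf_b

def location(node):
    """Walk up the tree, returning the chain of containers of a node."""
    chain = []
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain
```

After the move, location(item) yields the box and then shelf B, without any bulk update having taken place.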
There are a few more technical details about data structures needed to have a decent proof of concept, but I already wrote down too many words that are tangential to the initial question of how to number boxes.
Italian archaeology is finished. Some say that’s not true, and some haven’t noticed yet, but many signs point in this direction, and they are all materialising within a very short span of time. Let me try to give you an overview, if I can.
1. The MiBACT reform
For those who don’t know yet: with DPCM 171/2014 the MiBACT started a reform that is picked up again every now and then, with another chapter added each time, like a novel that never ends. After the merger of the historic-artistic and architectural-landscape soprintendenze, it is now the turn of the archaeological ones. Never mind that the entire peripheral structure of the ministry has been paralysed for a year precisely in order to implement the first phase of this reform, with the transfer of holdings and responsibilities to the new museum hubs and the much-trumpeted autonomous museums. Never mind that the great inefficiencies of the current administration are to be blamed on the continuous changes that the already slow machinery of the ministry is subjected to.
Now, with the endorsement and support of figures like Giuliano Volpe, a new episode of the reform gets under way, before the previous cycle has even remotely been completed. What is striking is the storyteller’s ferocity with which the purely verbal and theoretical beauty of these new arrangements to come is painted, betraying a deep ignorance of the actual state of things. And if Volpe is to be believed, it isn’t even over: there is supposed to be a phase 3 in which, believe it or not, an integration between universities (speaking of moribund institutions) and the MiBACT will be set in motion. I wonder whether and how those who miraculously stayed afloat after a good decade of collapse of the university system, crushed under reform after reform, can think this is the way to improve the efficiency of the apparatus that is supposed to protect Italy’s cultural heritage. A rhetorical question. I’m short-sighted, 5 dioptres short, what can you do.
Whatever arrangement on paper this reform intends to give the ministry, in practice the only actual and already clearly visible result is its total paralysis and ineffectiveness, in times of ultra-efficient silence-as-consent. Let’s not give the benefit of the doubt to a government that has shown it cares only about the interests of a few, and let’s assume ‒ Occam’s razor in hand ‒ that the goal has been fully achieved: removing an annoying obstacle to the economic activities typical of the post-crisis recovery, such as construction, landscape destruction, etc. Not by chance, other obstacles have met an even worse fate in these very days (the Corpo Forestale dello Stato has been abolished).
I fear that the daily darts of Tomaso Montanari, who has taken over from Salvatore Settis in the role of Jiminy Cricket, are of little use. It’s a set piece that at best reassures the few who still care about these problems that they are not alone, and that they have discharged their duty of indignation by reading about it in the pages of a newspaper.
2. The abolition of preventive archaeology
Archaeology, though, really gets in Matteo Renzi’s way. Indeed, as well-informed sources had been saying for a couple of months, in the latest drafts of the new version of the Codice degli Appalti, articles 95 and 96 on “preventive archaeology” have completely disappeared. High-level sources at the MiBACT confirm this version of the facts, leaving a tiny opening for the possible inclusion of the same provision in the Codice dei BBCC. But see the point above to imagine how effectively preventive archaeology could be carried out outside the framework of the wider public works system (let’s not even mention private works). This is in open contempt of the Valletta Convention, which had nonetheless been ratified by the Italian Parliament 23 years late, with a typically Renzian smoke-and-mirrors operation (other smoke-and-mirrors operations: civil unions, the Freedom of Information Act … for the gullible of every rank and order).
Of course, this reform is taking place outside the MiBACT, which confirms that the axe falling on archaeology is part of a single, wider design, of which the ministry is only a passive spectator (the reform was, after all, written by prof. Lorenzo Casini).
In the coming years we will see who is naïve enough to take up archaeological studies at university. The season of easy enrolments is over, and very few universities have what it takes to attract students to the humanities in general, let alone to a profession doomed from the start. Once enrolments in university archaeology courses have been cut to the bone, we will see who is still so pleased with the new arrangement and with the new research freedoms granted to universities, if it is true that articles 88 and 89 of the Codice will also fall under the axe of the reform.
3. The end of small things
A few days ago came the news that the MiBACT has cut its funding to FastiOnline, a one-of-a-kind project for collecting and sharing data on archaeological excavations in many countries, which had received a particularly welcome boost when all excavation permit holders were required to contribute to updating the database. But international scope, open data, transparency and standardisation are all absent from the new course of Italian archaeology, and perhaps some permit holders will be glad they no longer have to account for their clumsy performance in the field. In the new, wonderful single soprintendenze on an inter-provincial basis, heritage protection will amount to petty bureaucratic coast-hugging, within even narrower borders than before (because, as we know very well, Italian state archaeology shines neither for ecumenism nor for breadth of vision).
A few weeks ago all this news already looked like serious symptoms, and I didn’t find the time to write it all down on a single sheet and show you how to connect the dots. The corridor rumours always come true, and Matteo Renzi has accustomed us to pursuing his goals with inhuman efficiency. Soon, then, the new single soprintendenze, still in full reorganisational chaos, will be transferred under the new territorial offices of the State, guaranteeing their total servitude to politics, in one of the countries most steeped in corruption and graft.
One satisfaction remains: the acknowledgement that archaeology has a truly extraordinary role in society, one that transforms power relations and access to the landscape, and that this role bothers those who want to maintain the status quo to their own advantage. Of this we can be proud, and we can carry on, as best we can, bothering the Italian people by presenting them with the fragments of their own past.
Immagine di copertina: Woodcut illustration of Cassandra’s prophecy of the fall of Troy (at left) and her death (at right) – Penn Provenance Project by kladcat [CC BY 2.0], via Wikimedia Commons
Why is Genoa called Genoa? It depends on when you ask the question.
Today the calendar reads 2015, so let’s set aside the (interesting but well-known) medieval folk etymology of Ianua, and the far less interesting one that points to the Greek word xenos. Let’s talk about the “true” etymology of Genua, first attested on a milestone of 148 BCE (CIL I¹ 540 = CIL V 8045).
The main hypothesis is that genua is an Indo-European word meaning “mouth” (*genaua), referring to the mouth of the river ‒ the Bisagno. The thesis was formalised by Xavier Delamarre, who notes in his Noms de lieux celtiques de l’Europe ancienne. Dictionnaire (p. 13, note 5; my translation):
To confine ourselves to toponymy, it is remarkable that the ancient name of Genoa, Genua, the Ligurian port par excellence, has a formation precisely similar to that of Gaulish Geneva, Genava, both outcomes of *Genoṷā, a derivation in -ā from a stem *genu- denoting the mouth in Celtic (Irish gin ‘mouth’, Welsh gên ‘jaw’), and hence by extension ‘the opening’. Now, while the semantic shift mouth → opening, harbour is trivial and universal (Latin ōs → ōstium, German Mund → Mündung, Finnish suu ‘mouth’ → (joen)suu, etc.), it is in Celtic and only in Celtic that the Indo-European stem *ǵénu- / *ǵonu-, which initially denoted the jaw or cheeks (Latin genae, Gothic kinnus, Sanskrit hanu-, etc.), came by metonymy to designate the mouth. The “Ligurian” name of the port of Genua is therefore built on a stem whose semantics are specifically Celtic.
Among archaeologists, Delamarre found an early strong supporter in Filippo Maria Gambari. The validity of this hypothesis is independent of whether the pre-Roman Ligurian language is attributed to the Indo-European family or to the pre-Indo-European substrate, precisely because the name is attested only at such a late date and could therefore be a linguistic borrowing from Celtic in a situation ‒ also attested archaeologically ‒ of Celto-Ligurian intermingling. The archaeological discoveries of the last decade in the area of the mouth of the Bisagno strengthen this hypothesis and greatly weaken the earlier one, also within the Indo-European domain, which pointed to a possible root *genu- “knee”, referring to another geographical feature of Genoa: the curve of the harbour.
It is interesting that in both hypotheses the coincidence of the etymology of Genoa with that of Geneva (first recorded as Genaua in De Bello Gallico) has been taken for granted, as indicated for example in the Wikeriadur Brezhoneg (the Breton Wiktionary). Breton (with some related languages) and Welsh are in fact the only languages that allow us to identify this word as Celtic, thus creating a possible etymological link. Indeed, in the Catholicon breton (1464) the oldest written attestation of this word is given as guenou (while in contemporary Breton the lemma is genoù), so the ancient form diverges more from the “Celtic” form than the contemporary one does. In Welsh, genau means “mouth, lips; estuary, entrance to a valley, pass, mouth (of sack, cave, bottle, &c.), hole; fig. saying, speech.” (Geiriadur Prifysgol Cymru). Both Welsh and Breton are considered “Insular” Celtic languages, i.e. partly distinct from the Celtic languages spoken on the continent (all the more so in Liguria) and now extinct.
The archaeologist Piera Melli, accepting this hypothesis, argues in her recent book Genova dalle origini all’anno Mille that an “Etruscanisation” of the name may also have taken place, essentially modelled on the name of kainua (Marzabotto) and on other Etruscan city names such as mantua and padua. However, this Etruscan form of the name of Genoa is not attested, and it remains a suggestion tied to the relative abundance of inscriptions in the Etruscan alphabet found in Genoa.
Genoa was born at the mouth of the Bisagno, but that was only the beginning.
Who said PhD theses have to be boring texts with horrible typography?
Even if my thesis is far from ready for discussion, I can’t help taking the occasional diversion from the actual writing. Today I put together this experiment for an interlude page: imagine you’re skimming through dozens of pages and suddenly your eyes catch something different: a short sentence at font size 36, coupled with a rough sketch drawing of a Byzantine cooking pot, or of the interior of a cellar where a young girl is walking to bring wine to the table.
The drawings are mine ‒ pencil on pieces of recycled paper with minor passes of digital editing and vectorisation. You may like them, but they’re not sketchy as an artistic choice: that’s just the best I am able to do with my bare hands. Some practice might help, I am told.
The text is typeset in the Brill font, which is free only for personal use, but I like it and I wanted to experiment. Alegreya, Linux Libertine and Source Serif all look good on that page, too. I think it needs a serif font.
Does this bring more value to the surrounding pages? I’m not sure, to be honest. It could be said that they distract from the actual content, that is supposed to be of academic value, and that this kind of page layout is best left for architecture and design magazines. However, not everyone is going to read your PhD thesis from cover to cover, and a bit of typographic color here and there will not hurt.
Earlier this year, during cold January morning commutes, I finally read William Gibson’s masterpiece trilogy. If you know me personally, this may sound ironic, because I dig geek culture quite a bit. Still, I’m a slow reader and I never had a chance to read the three books before. Which was good, actually, because I could enjoy them deeply, without the kind of teenage infatuation that is quickly gone ‒ and, most importantly, because I could read the original books instead of a translation: I don’t think my 15-year-old self could have read English prose, or at least Gibson’s prose, that easily.
I couldn’t help several moments of excitement at the frequent glimpses of archaeology along the chapters. This could be a very naive observation, and maybe there are countless critical studies that I don’t know of dealing with the role of archaeology in the Sprawl trilogy and in Gibson’s work in general. Perhaps it’s touching for me because I deal with Late Antiquity, which is the closest thing to a dystopian future that ever happened in the ancient world, at least as we see it, with its abundance of useless objects and places left over from past centuries of grandeur. Living among the ruins of once beautiful buildings, living at the edge of society in abandoned places, reusing what was discarded in piles, black markets, spirituality: it’s all so late antique. Of course the plot of the Sprawl trilogy is a contemporary canon, and the characters are post-contemporary projections of a (very correctly) imagined future, but the setting is, to me, evocative of a world narrative that I could easily embrace if I had to write fiction about the periods I study.
Count Zero is filled with archaeology, especially the Marly chapters of course. Towards the end it gets more explicit, but it’s there in almost all chapters, and it has something to do with the abundance of adjectives, the care for details in little objects. Mona Lisa Overdrive is totally transparent about it, from the first pages of Angie Mitchell on the beach:
The house crouched, like its neighbors, on fragments of ruined foundations, and her walks along the beach sometimes involved attempts at archaeological fantasy. She tried to imagine a past for the place, other houses, other voices.
– William Gibson. Mona Lisa Overdrive, p. 35.
But really, you just have to follow Molly along the maze of the Straylight Villa in Neuromancer to realize it’s a powerful theme of all the Sprawl trilogy.
The Japanese concept of gomi, which pervades Kumiko’s view of Britain and Rubin’s art in the Winter Market, is another powerful tool for material culture studies, at least if we have to find a pop dimension where our studies survive beyond the inevitable end of academia.
I’ve been serving as co-editor of the Journal of Open Archaeology Data (JOAD) for more than a year now, since I joined Victoria Yorke-Edwards in the role. It is my first editorial role at a journal. I am learning a lot, and the first thing I learned is that being a journal editor is hard and takes time, effort and self-esteem. I’ve been meaning to write down a few thoughts for months now, and today’s post by Melissa Terras about “un-scholarly peer review practices […] and predatory open access publishing mechanisms” was an unavoidable inspiration (go and read her post).
Some things are peculiar to JOAD, such as the need to ensure data quality at a technical level: often, though, improvements on the technical side reflect substantially on the general quality of the data paper. Some things may seem straightforward, like using CSV for tabular data instead of PDF, or describing the physical units of each column/variable, but they are often overlooked. Archaeology datasets produced during PhD research are rarely forged in highly standardised database systems, so there may be small inconsistencies in how the same record is referenced in various tables. In my experience so far, reviewers will look at data quality even more than at the paper itself, which is a good sign of assessing the “fitness for reuse” of a dataset.
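To illustrate the CSV point: Python’s csv module is all it takes to produce a reuse-friendly table, with one row per record and physical units spelled out in the column names. The table here is a hypothetical example, not from any actual JOAD paper:

```python
import csv
import io

# Write a small finds table as plain CSV instead of a table locked in
# a PDF. Physical units are made explicit in the column names.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["context", "sherd_count", "weight_kg"])
writer.writeheader()
writer.writerows([
    {"context": "US 324", "sherd_count": 78, "weight_kg": 4.5},
    {"context": "US 325", "sherd_count": 12, "weight_kg": 0.8},
])
csv_text = buffer.getvalue()
```

Anyone can open this with any spreadsheet, statistics package or programming language, which is exactly what “fitness for reuse” means in practice.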
The data paper: you have to try authoring one before you get a good understanding of how a good data paper is written and structured. Authors seem to prefer terse and minimal descriptions of the methods used to create their dataset, taking many passages for granted. The JOAD data paper template is a good guide to structuring a data paper and to the minimum required metadata, but we have seen authors relying almost exclusively on the default sub-headings. I often point reviewers and authors to some published JOAD papers that I find particularly good, but the advice isn’t always heeded. It’s true, the data paper is a rather new and still unstable concept of the digital publishing era: Internet Archaeology has been publishing some beautiful data papers, and I like to think there is mutual inspiration in this regard. Data papers should be a temporary step towards open archaeology data as the default, and continuous open peer review as the norm for improving the global quality of our knowledge, wiki-like. However, data papers without open data are pointless: choose a good license for your data and stick with it.
Peer review is the most crucial and exhausting activity: as editors, we have to give a first evaluation of the paper based on the journal scope and then proceed to find at least two reviewers. This requires a broad knowledge of ongoing research in archaeology and related disciplines, including very specific sub-fields of study ‒ our list of available reviewers is quite long now, but there is always some unknown territory to explore, so asking other colleagues for help and suggestions is vital. Still, there is a sense of inadequacy, a variation on the theme of impostor syndrome, when you have a hard time finding a good reviewer: someone who will provide the authors with positive and constructive criticism, becoming truly part of the editorial process. I am sorry that our current publication system doesn’t allow for the inclusion of both the reviewers’ names and their commentary ‒ that’s the best way to give readers an immediate overview of the potential of what they are about to read, and a very effective rewarding system for reviewers themselves (I keep a list of all the peer reviews I do, but that doesn’t feel as satisfying). Peer review at JOAD is not double blind, and I think it would often be ineffective and useless to anonymise a dataset and a paper, in a discipline so territorial that everyone knows who is working where. It is incredibly difficult to get reviews in a timely manner, and while some of our reviewers are perfect machines, others keep us (editors and authors) waiting for weeks after the agreed deadline has passed. I understand this, of course, being too often on the other side of the fence. I’m always a little hesitant to send e-mail reminders in such cases, partly because I don’t like receiving them, but being an annoyance is kind of necessary here.
The reviews are generally remarkable in their quality (at least compared to previous editorial experiences I’ve had), quite long and honest: if something isn’t quite right, it has to be pointed out very clearly. As an editor, I have to read the paper, look at the dataset, find reviewers, wait for reviews, solicit reviews, read reviews, and sometimes have a conversation with the reviewers, to make sure their comments are clear and their phrasing/language is acceptable (an adversarial, harsh review must never be accepted, even when formally correct). All this is very time consuming, and since journal (co)editor is an unpaid role at JOAD and other overlay journals at Ubiquity Press (perhaps obvious, perhaps not!), this usually means procrastinating: adding the dose of impostor syndrome that comes from criticising the review of a more experienced colleague to the dose that comes from always being late on editorial deadlines yields frustration. Lots. Of. Frustration. When you see me tweet about a new data paper published at JOAD, it’s not an act of deluded self-promotion, but rather a liberatory moment of achievement. All this may sound naive to experienced practitioners of peer review, especially those in academic careers. I know, and I would still like to see a more transparent discussion of how peer review should work (not on StackExchange, preferably).
JOAD is Open Access. It’s true Open Access: the distinction that matters is not between gold and green (a dead debate, it seems) but between two radically different outputs. JOAD is openly licensed under the Creative Commons Attribution license, and we require that all datasets be released under open licenses, so readers know they can download, reuse and incorporate published data in their new research. There is no “freely available only as PDF”: each article is primarily presented as native HTML and can be obtained in other formats (including PDF and EPUB). We could do better, sure ‒ for example, provide the ability to interact directly with the dataset instead of just linking to the repository ‒ but I think we will be giving more freedom to authors in the future. Publication costs are covered by Article Processing Charges of £100, paid by the authors’ institutions; where this is not possible, the fee is waived. Ubiquity Press is involved in some of the most important current Open Access initiatives, such as the Open Library of Humanities, and most importantly does a wide range of good things to ensure research integrity, from article submission to … many years in the future.
You may have received an e-mail from me with an invite to contribute to JOAD, either by submitting an article or by offering your availability as a reviewer ‒ or you may receive one in the next few weeks. Either way, now you know what goes on behind the scenes at JOAD.
Friday 17th July is the last day of work in this short GQB 2015 field campaign. I’m still a bit exhausted from the return trip to Rethymno, but most importantly I’m very satisfied with the exchange of ideas we had about various topics (Early Byzantine fortifications, water supply systems, pottery, exploitation of natural and agricultural resources).
Since my main task here was to work on the analysis of ceramic contexts, I continued writing text and R source code as in the past days. In the late afternoon we left to pay a short visit to the village of Panagia, where we found an old water fountain that is depicted in a 100-year-old photograph. It’s strange: photographs seem to tell true stories, so direct ‒ whereas in fact they’re a paradigmatic form of mediation. Sometimes, when you need to get a better understanding of an object, it’s useful to look at it from different angles, at different scales, alone or in its natural context, under a microscope or in your bare hands. I think that’s what I’m trying to do with the ceramic contexts from the Byzantine District of Gortyna: it’s not always easy, and of course it doesn’t always work, because I lack the archaeological, statistical, petrographic and drawing skills that would be needed to make this “prism” fully functional. However, I am convinced that the result is worth the trade-offs, and there will be room for improving the details at a later stage. For now, I just go on iterating, half artificial intelligence algorithm and half craftsman.
On 16th July we’re out of the Mesara to join a study seminar on the Early Byzantine settlements of Crete, organised by the Institute of Mediterranean Studies (FORTH-IMS) in Rethymno as the conclusion of the DynByzCrete research project led by Christina Tsigonaki and Apostolos Sarris. I was really happy to meet colleagues I had met before in various parts of Europe: Kayt Armstrong, Anastasia Yangaki, Gianluca Cantoro. Yesterday I posted the summary of my talk, apart from the conclusions.
I had the privilege of being the last speaker, and taking advantage of the fact that Anastasia Yangaki had provided a detailed overview of ceramic consumption and production in Crete from the 4th to the 9th century, I could point to some specific issues in how we date archaeological contexts with pottery and, most importantly, in how we prioritise ceramic studies. Ceramic specialists are a rare species, and until now we have failed to provide the means for other archaeologists to quickly identify characteristic type finds of the Early Byzantine period, with sufficient detail to avoid very generic chronologies like “5th-7th” and “8th-9th”, which are highly problematic. We also bear responsibility for the fact that studies and publications of ceramic finds always lag behind fieldwork, because 1) there is little selection of significant, well-dated stratigraphic contexts and 2) study and publication have for too long been split among hyper-specialists, each looking at a separate ceramic production, rather than treating contexts as our atomic unit. Therefore, it has been impossible to provide the quantified ceramic data that are needed for the type of analytical work envisaged by the DynByzCrete project – and we should admit that these data will remain unavailable for a long time. As a thought experiment, we could stop doing fieldwork for 25 years and dedicate most of our efforts to the study of all significant ceramic contexts from recent rescue archaeology.
If we agree that there is potential for extracting information about social dynamics from pottery, can we also agree that provenance studies based on standardised archaeometric procedures are only one of many ways this can be done? We know very little about the actual manufacturing of most pottery types, or about the material culture that permeates their making and usage. So, taking a broader view of the DynByzCrete project: while the environmental determinism behind some of the geospatial analysis needs to leave room for the complexity of Byzantine societies (plural), it is clear that we are at a turning point in the way we look at Early Byzantine Crete, because we are starting to consider the island in its entirety instead of focusing on a single settlement, no matter how large or important. In this respect, regional surveys don’t seem to provide a qualitative advantage over prolonged excavations – their multi-period focus is both an opportunity to deal with longue durée patterns and a rather discomforting exercise in oversimplifying change across historical periods. Pulling together an amazing variety of data that are mostly already available and published, and stress-testing the obvious and non-obvious patterns of interaction (travel time by horse/donkey among known episcopal cities? social networks of elite members as known from lead seals, written sources and epigraphy, and likely connections to luxury items?), is the best way to stop repeating the same dull research questions over and over.
How can we move forward? These are difficult times for foreign research projects and especially for Greek institutions. It seems unlikely that we will be able to work more, with more resources, on this and other related topics of Cretan history. Thus, our first aim should be to make our research more sustainable (no matter how much that term is abused): publish on the Web, encourage horizontal and vertical exchange of skills and knowledge among institutions, and focus on research outputs that are reusable and continuously upgraded (and perhaps kill interim reports).