Publications
International Peer Reviewed Conferences and Workshops
- Antonio Maccioni, Edoardo Basili, and Riccardo Torlone. QUEPA: QUerying and exploring a polystore by augmentation. International Conference on Management of Data (SIGMOD 2016), San Francisco, USA, 2016.
- Antonio Maccioni, Daniel J. Abadi. Scalable pattern matching over compressed graphs via dedensification. 22nd International Conference on Knowledge Discovery and Data Mining (SIGKDD 2016), San Francisco, USA, 2016.
- Antonio Maccioni, Matteo Collina. Graph databases in the browser: using LevelGraph to explore New Delhi. 42nd International Conference on Very Large Data Bases (VLDB 2016), New Delhi, India, 2016.
- Alessio Conte, Roberto De Virgilio, Antonio Maccioni, Maurizio Patrignani, Riccardo Torlone. Finding all maximal cliques in very large social networks. 19th International Conference on Extending Database Technology (EDBT 2016), Bordeaux, France, 2016.
- Antonio Maccioni. Flexible query answering over graph-modeled data. SIGMOD/PODS PhD Symposium, Melbourne, Australia, 2015.
- Gabriele Ciasullo, Giorgia Lodi, Antonio Maccioni, Francesco Tortorelli. Italian national guidelines for the valorization of the public sector information. Share-PSI 2.0 Workshop on “A Self Sustaining Business Model for Open Data”, Krems, Austria, 2015.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. A unified framework for flexible query answering over heterogeneous data sources. 11th International Conference on Flexible Query Answering Systems (FQAS 2015), Krakow, Poland, 2015.
- Roberto De Virgilio, Antonio Maccioni. Distributed Keyword Search over RDF via MapReduce. 11th Extended Semantic Web Conference (ESWC 2014), Crete, Greece, 2014.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. Model-driven design of graph databases. 33rd International Conference on Conceptual Modeling (ER 2014), Atlanta, USA, 2014.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. R2G: a Tool for Migrating Relations to Graphs. 17th International Conference on Extending Database Technology (EDBT 2014), Athens, Greece, 2014.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. Graph-driven Exploration of Relational Databases for Efficient Keyword Search. 3rd International Workshop on Querying Graph Structured Data (GraphQ 2014) in conjunction with EDBT 2014, Athens, Greece, 2014.
- Davide Lamanna, Antonio Maccioni. Renewable Energy Data Sources in the Semantic Web with OpenWatt. 3rd International Workshop on Energy Data Management (EnDM 2014) in conjunction with EDBT 2014, Athens, Greece, 2014.
- Giorgia Lodi, Antonio Maccioni, Monica Scannapieco, Mauro Scanu, Laura Tosco. Publishing Official Classifications in Linked Open Data. 2nd International Workshop on Semantic Statistics (SemStats 2014) in conjunction with ISWC 2014, Riva Del Garda, Italy, 2014.
- Roberto De Virgilio, Antonio Maccioni, Paolo Cappellari. A Linear and Monotonic Strategy to Keyword Search over RDF Data. 13th International Conference on Web Engineering (ICWE 2013), Aalborg, Denmark, 2013.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. Converting relational to graph databases. 1st International Workshop on Graph Data Management Experiences and Systems (GRADES 2013) in conjunction with SIGMOD 2013, New York, USA, 2013.
- Antonio Maccioni. Towards an Integrated Social Semantic Web. 2nd International Workshop on Data Management in the Social Semantic Web (DMSSW 2013) in conjunction with ICWE 2013, Aalborg, Denmark, 2013.
- Roberto De Virgilio, Antonio Maccioni. Generation of Reliable Randomness via Social Phenomena. 3rd International Conference on Model & Data Engineering (MEDI 2013), Amantea, Italy, 2013.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. A similarity measure for approximate querying over RDF data. 2nd International Workshop on Querying Graph Structured Data (GraphQ 2013) in conjunction with EDBT 2013, Genoa, Italy, 2013.
- Giorgia Lodi, Antonio Maccioni, Francesco Tortorelli. Linked Open Data in the Italian e-Government Interoperability Framework. 6th International Conference on Methodologies, Technologies and Tools enabling e-Government (METTEG 2012), Belgrade, Serbia, 2012.
- Paolo Cappellari, Roberto De Virgilio, Antonio Maccioni, Mark Roantree. A Path-Oriented RDF Index for Keyword Search Query Processing. 22nd International Conference on Database and Expert Systems Applications (DEXA 2011), Toulouse, France, 2011.
- Paolo Cappellari, Roberto De Virgilio, Antonio Maccioni, Michele Miscione. Keyword based Search over Semantic Data in Polynomial Time. 1st International Workshop on Data Engineering meets the Semantic Web (DesWeb 2010) in conjunction with ICDE 2010, Long Beach CA, USA, 2010.
Polystore systems (or simply polystores) have been recently proposed to support a common scenario in which enterprise data are stored in a variety of database technologies relying on different data models and languages. Polystores provide a loosely coupled integration of data sources and support direct access, with the local language, to each specific storage engine in order to exploit its distinctive features. Given the absence of a global schema, new challenges arise when accessing data in these environments. In fact, it is usually hard to know in advance whether a query to a specific data store can be satisfied with data stored elsewhere in the polystore. QUEPA addresses these issues by introducing augmented search and augmented exploration in a polystore, two access methods based on the automatic enrichment of the result of a query over one storage system with related data in the rest of the polystore. These features do not impact the applications running on top of the polystore and are compatible with the most common database systems. QUEPA thus implements a lightweight mechanism for data integration in the polystore and operates in a plug-and-play mode, reducing the need for ad-hoc configurations and for middleware layers involving standard APIs, unified query languages or shared data models. In our demonstration, the audience can experiment with the augmentation construct by using the native query languages of the database systems available in the polystore.
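As an informal illustration of the augmentation idea (a sketch only; the store and link names below are invented and do not reflect QUEPA's API), a query result from one store can be enriched with linked records held elsewhere in the polystore:

```python
# Toy polystore augmentation (illustrative names, not QUEPA's API):
# two stores hold related records, and a set of cross-store links lets
# a query result from one store be enriched with data from the other.

stores = {
    "sales_rdbms": {"o1": {"customer": "c42", "total": 99}},
    "crm_docstore": {"c42": {"name": "Acme Corp", "tier": "gold"}},
}

# Known correspondences between records living in different stores.
links = {("sales_rdbms", "o1"): [("crm_docstore", "c42")]}

def augment(store, record_id):
    """Return the queried record enriched with related records elsewhere."""
    result = {"record": stores[store][record_id], "related": []}
    for other_store, other_id in links.get((store, record_id), []):
        result["related"].append({"store": other_store,
                                  "record": stores[other_store][other_id]})
    return result

answer = augment("sales_rdbms", "o1")
```

The point of the sketch is that the application queries one store as usual and the augmentation layer, not the application, resolves the cross-store links.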
One of the most common operations on graph databases is graph pattern matching (e.g., graph isomorphism and more general types of “subgraph pattern matching”). In fact, in some graph query languages every single query is expressed as a graph matching operation. Consequently, there has been a significant amount of research effort in optimizing graph matching operations in graph database systems. As graph databases have scaled in recent years, so too has work on scaling graph matching operations. However, the performance of recent proposals for scaling graph pattern matching is limited by the presence of high-degree nodes. These high-degree nodes cause an explosion of intermediate result sizes during query execution, and therefore significant performance bottlenecks. In this paper, we present a dedensification technique that losslessly compresses the neighborhood around high-degree nodes. Furthermore, we introduce a query processing technique that enables graph query processing to operate directly over the compressed data, without ever having to decompress it. For pattern matching operations, we show how this technique can be implemented as a layer above existing graph database systems, so that the end user can benefit from it without modifications to the core graph database engine code. Our technique reduces the size of intermediate result sets during query processing, and thereby improves query performance.
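The core intuition behind dedensification can be sketched in a few lines (a toy illustration, not the paper's algorithm): when several source nodes share the same set of high-degree neighbors, their edges are routed through a single compressor node, so |S| x |H| edges shrink to |S| + |H| without losing information.

```python
# Toy dedensification sketch: compress the neighborhood of high-degree
# nodes by introducing "compressor" nodes. Illustrative only.
from collections import defaultdict

def dedensify(edges, degree_threshold=2):
    out = defaultdict(set)
    indeg = defaultdict(int)
    for u, v in edges:
        out[u].add(v)
        indeg[v] += 1
    # Nodes with in-degree above the threshold count as high-degree.
    high = {v for v, d in indeg.items() if d >= degree_threshold}

    # Group source nodes by the exact set of high-degree targets they hit.
    groups = defaultdict(list)
    for u in out:
        h = frozenset(out[u] & high)
        if h:
            groups[h].append(u)

    new_edges = []
    for i, (h_targets, sources) in enumerate(groups.items(), 1):
        c = f"C{i}"  # one compressor node per shared target set
        new_edges += [(u, c) for u in sources]
        new_edges += [(c, v) for v in h_targets]
    # Edges to non-high-degree targets are kept as they are.
    for u in out:
        new_edges += [(u, v) for v in out[u] - high]
    return new_edges

# 3 sources each pointing at the same 2 hubs: 6 edges become 3 + 2 = 5.
dense = [(u, v) for u in "abc" for v in "xy"]
compressed = dedensify(dense)
```

Every original edge (u, v) with v high-degree is recoverable as the two-hop path u -> C -> v, which is what makes the compression lossless.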
The detection of communities in social networks is a challenging task. A rigorous way to model communities considers maximal cliques, that is, maximal subgraphs in which each pair of nodes is connected by an edge. State-of-the-art strategies for finding maximal cliques in very large networks decompose the network into blocks and then perform a distributed computation. These approaches exhibit a trade-off between efficiency and completeness: decreasing the size of the blocks has been shown to improve efficiency, but some cliques may remain undetected because high-degree nodes, also called hubs, may not fit into a small block together with their entire neighborhood. In this paper, we present a distributed approach that, by suitably handling hub nodes, is able to detect maximal cliques in large networks while achieving both completeness and efficiency. The approach relies on a two-level decomposition process. The first level recursively identifies and isolates tractable portions of the network. The second level further decomposes the tractable portions into small blocks. We demonstrate that this process correctly detects all maximal cliques, provided that the sparsity of the network is bounded, as is the case for real-world social networks. An extensive campaign of experiments confirms the effectiveness, efficiency, and scalability of our solution and shows that, if hub nodes were neglected, significant cliques would go undetected.
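For context, within each block one still needs a sequential enumerator of maximal cliques; the classic Bron-Kerbosch algorithm with pivoting is the standard choice. The sketch below shows only that baseline, not the paper's two-level decomposition.

```python
# Minimal Bron-Kerbosch with pivoting: enumerate all maximal cliques of
# an undirected graph given as adjacency sets. Baseline sketch only.

def bron_kerbosch(adj, r=frozenset(), p=None, x=frozenset(), out=None):
    """Collect every maximal clique reachable from the state (R, P, X)."""
    if p is None:
        p, out = frozenset(adj), []
    if not p and not x:
        out.append(set(r))  # R is maximal: no candidates, no exclusions
        return out
    # Pivot on the vertex covering the most candidates to prune branches.
    pivot = max(p | x, key=lambda u: len(adj[u] & p))
    for v in p - adj[pivot]:
        bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v], out)
        p, x = p - {v}, x | {v}
    return out

# A triangle plus a pendant edge: maximal cliques {a,b,c} and {c,d}.
adj = {"a": {"b", "c"}, "b": {"a", "c"},
       "c": {"a", "b", "d"}, "d": {"c"}}
cliques = bron_kerbosch(adj)
```

The distributed approach in the paper is precisely about making such an enumerator applicable when a hub's neighborhood is too large for one block.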
The lack of familiarity that users have with information systems has led to different flexible methods to access data (keyword search, faceted search, similarity search, etc.). Since flexible query answering techniques differ from one another, their integration in the same system is hard. Flexible query capabilities require, in fact, ad-hoc representations of the datasets, which often result in duplication and computational overhead. Moreover, if we want to query heterogeneous data sources, the problem becomes even more difficult. To address such variety in one fell swoop, we propose a meta-approach for different kinds of flexible query answering over heterogeneous data sources. We consider structured and semi-structured sources that can be modeled through graph databases. To improve the platform storing the data, we have conducted research on the representation of graph databases. In particular, we have devised a layer that compresses the graph database and allows queries to be executed without decompressing it back into the original graph.
The Italian legislation for public sector digitalization has defined a cyclic process for the valorization of Public Sector Information. This process involves three main elements, namely: the definition of a strategic agenda that identifies principles and objectives to be achieved by public administrations in valorizing the information they own and manage; a set of technical guidelines containing recommendations that administrations should follow in order to meet the objectives indicated in the agenda; and a report including the principal results of an assessment of how well the objectives have been met by administrations. The legislation assigns to the Agency for Digital Italy (AgID) the role of national body responsible for governing the life-cycle of this process. This paper discusses how AgID currently manages the process. In particular, the paper illustrates the principal recommendations included in the technical guidelines, with a focus on metadata management and on the business models that can be enabled in Open Data initiatives. We advocate that the guidelines support administrations in the creation of a uniform national single data market, as encouraged by the European Council.
The lack of familiarity that most users have with information systems has led to a variety of methods to access data in a flexible way (such as keyword search, faceted search, and similarity search). However, flexible query answering capabilities are hard to integrate in one system, since they are based on different data representations and rely on different techniques for query answering. The problem becomes more involved if we need to query heterogeneous data sources. To address such variety in one fell swoop, we propose FleQSy, a framework that relies on a “meta” approach for accessing heterogeneous data with different methods for flexible query answering. In FleQSy, structured and semi-structured data sources are modeled as graphs, and query answering consists of a multi-phase process that leverages the commonalities of the various search techniques. We show the effectiveness of our approach in different application scenarios that require easy-to-use and elastic methods for data access.
Non-expert users need support to access linked data available on the Web. To this aim, keyword-based search is considered an essential feature of database systems. The distributed nature of the Semantic Web demands query processing techniques that evolve towards a scenario where data is scattered across distributed data stores. Existing approaches to keyword search cannot guarantee scalability in a distributed environment because, at runtime, they are unaware of the location of the data relevant to the query and thus cannot optimize join tasks. In this paper, we illustrate a novel distributed approach to keyword search over RDF data that exploits the MapReduce paradigm by switching the problem from graph-parallel to data-parallel processing. Moreover, our framework is able to consider ranking during the building phase, so that the best (top-k) answers are returned directly among the first k generated results, greatly reducing the overall computational load and complexity. Finally, a comprehensive evaluation demonstrates that our approach exhibits very good efficiency while guaranteeing a high level of accuracy, especially with respect to state-of-the-art competitors.
Graph Database Management Systems (GDBMSs) are rapidly emerging as an effective and efficient solution to the management of very large data sets in scenarios where data are naturally represented as a graph and data accesses mainly rely on traversing this graph. Currently, the design of graph databases is based on best practices, usually suited only to a specific GDBMS. In this paper, we propose a model-driven, system-independent methodology for the design of graph databases. Starting from a conceptual representation of the domain of interest expressed in the Entity-Relationship model, we propose a strategy for devising a graph database in which the data accesses required to answer queries are minimized. Intuitively, this is achieved by aggregating in the same node data that are likely to occur together in query results. Our methodology relies on a logical model for graph databases, which makes the approach suitable for different GDBMSs. We also show, with a number of experimental results over different GDBMSs, the effectiveness of the proposed methodology.
We present R2G, a tool for the automatic migration of databases from a relational to a Graph Database Management System (GDBMS). GDBMSs provide a flexible and efficient solution to the management of graph-based data (e.g., social and semantic Web data) and, in this context, converting the persistent layer of an application from a relational to a graph format can be very beneficial. R2G provides a thorough solution to this problem with minimal impact on the application layer: it transforms a relational database r into a graph database g and any conjunctive query over r into a graph query over g. Constraints defined over r are suitably used in the translation to minimize the number of data accesses required by graph queries. The approach refers to an abstract notion of graph database, which allows R2G to map relational databases into different GDBMSs. The demonstration of R2G allows a direct comparison of the relational and graph approaches to data management.
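The basic direction of such a migration can be sketched as follows (a hedged illustration with invented tables, not R2G's actual aggregation strategy): each tuple becomes a node and each foreign-key reference becomes an edge.

```python
# Naive relational-to-graph sketch: tuples -> nodes, foreign keys ->
# edges. Table and column names below are hypothetical examples.

tables = {
    "person": [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}],
    "order":  [{"id": 10, "person_id": 1, "item": "book"}],
}
# Declared constraints: (table, column) -> referenced table.
foreign_keys = {("order", "person_id"): "person"}

def to_graph(tables, foreign_keys):
    """Build a node dict and an edge list from tables plus FK constraints."""
    nodes, edges = {}, []
    for table, rows in tables.items():
        for row in rows:
            nodes[(table, row["id"])] = dict(row)  # one node per tuple
    for (table, col), target in foreign_keys.items():
        for row in tables[table]:
            # Edge labeled with the FK column, from referrer to referent.
            edges.append(((table, row["id"]), col, (target, row[col])))
    return nodes, edges

nodes, edges = to_graph(tables, foreign_keys)
```

R2G goes further than this naive mapping by using the constraints to aggregate related tuples and reduce the traversals a graph query needs, but the tuple-to-node, key-to-edge correspondence is the starting point.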
Keyword-based search is becoming the standard way to access any kind of information, and it is considered today an important add-on of relational database management systems. Approaches to keyword search over relational data usually rely on a two-step strategy in which, first, tree-shaped answers are built by connecting tuples matching the given keywords and, then, potential answers are ranked according to some relevance criteria. In this paper, we illustrate a novel technique for this problem that aims, rather, at generating the best answers directly. This is done by representing relational data as a graph and by progressively combining the shortest join paths that involve the tuples relevant to the query. We show that, in this way, answers are retrieved in order of relevance and can then be returned as soon as they are built. The approach does not require the materialization of ad-hoc data structures and avoids the execution of unnecessary queries. A comprehensive evaluation demonstrates that our solution strongly reduces the complexity of the process and guarantees, at the same time, a high level of accuracy.
Although the sector of renewable energies has gained a significant role, companies still encounter considerable barriers to scaling up their business. This is partly due to the way data and information are (wrongly) managed. Often, data is partially available, noisy, inconsistent, scattered across heterogeneous sources, unstructured, or represented through non-standard and proprietary formats. As a result, energy planning tasks are semi-automatic or, in the worst cases, even manual, and the process that uses such data is exceedingly complex, error-prone, and ineffective. OpenWatt aims at establishing an ideal scenario in the renewable energy sector where different categories of data are fully integrated and can synergistically complement each other. In particular, OpenWatt overcomes the existing drawbacks by introducing the paradigm of Linked Open Data to represent renewable energy data on the (Semantic) Web. With OpenWatt, data increases in quality, tools become interoperable with each other, and the process gains in usability, productivity, and efficiency. Moreover, OpenWatt enables and favours the development of new applications and services.
Data interoperability is well recognized as a basic step in developing integrated services that support inter-organization communication. The issue of ensuring data interoperability has been tackled by many different communities in order to address various problems. In particular, national and supranational institutes of statistics are deeply concerned with issuing official, widely recognized classifications (i.e., taxonomies, schemes, code lists) to be used in their jurisdiction of reference. From a different perspective, there has been much work by the Web data management community on publishing data on the Web in an interoperable way. These efforts have converged on a series of standards and practices gathered under the Semantic Web stack. Clearly, the two scenarios are complementary, as each can benefit from the other. To this end, the Italian Institute of Statistics (Istat) and the Agency for Digital Italy (AgID) have launched an initiative aimed at producing official classifications in the form of Linked Open Data, to be published in the Web of Data using standard ontologies. The paper describes and motivates this initiative.
Keyword-based search over (semi-)structured data is today considered an essential feature of modern information management systems and has become a hot topic in database research and development. Most recent approaches to this problem refer to a general scenario where: (i) the data source is represented as a graph, (ii) answers to queries are sub-graphs of the source containing keywords from the queries, and (iii) solutions are ranked according to some relevance criteria. In this paper, we illustrate a novel approach to keyword search over semantic data that combines a solution-building algorithm with a ranking technique so as to place the best results among the first answers generated. We show that our approach is monotonic and has linear computational complexity, greatly reducing the complexity of the overall process. Finally, experiments demonstrate that our approach exhibits very good efficiency and effectiveness, especially with respect to competing approaches.
Graph Database Management Systems provide an effective and efficient solution to data storage in current scenarios where data are more and more connected, graph models are widely used, and systems need to scale to large data sets. In this framework, converting the persistent layer of an application from a relational to a graph data store can be convenient, but it is usually a hard task for database administrators. In this paper, we propose a methodology to convert a relational database to a graph database by exploiting the schema and the constraints of the source. The approach supports the translation of conjunctive SQL queries over the source into graph traversal operations over the target. We provide experimental results that show the feasibility of our solution and the efficiency of query answering over the target database.
The Social Semantic Web is the data space on the Web where human-produced information is enriched and modeled using Semantic Web standards. This vision is enabled by the convergence of Web 2.0 and Web 3.0, but it is still far from being put into practice. This gap causes drawbacks such as data redundancy and overhead in the development of social applications. In this paper, we discuss how, in pursuing the Social Semantic Web vision, a different approach to opening social data to the Semantic Web should be adopted. Furthermore, without reinventing the wheel, we use existing technologies to define a framework for integrating the Social Web and the Semantic Web. This would help to reengineer or extend Social Network infrastructures in order to fully benefit from a Social Semantic Web.
Randomness is a hot topic in computer science due to its important applications, such as cryptography, gambling, hashing algorithms, and so on. Due to the implicit determinism of computer systems, randomness can only be simulated. In order to generate reliable random sequences, IT systems have to rely on hardware random number generators. Unfortunately, these devices are not always affordable or suitable in all circumstances (e.g., personal use, data-intensive systems, mobile devices, etc.). Human-computer interaction (HCI) has recently become bidirectional: computers help human beings carry out their tasks, and human beings support computers in hard tasks. Following this trend, we introduce RandomDB, a database system that is able to generate reliable randomness from social phenomena. RandomDB extracts data from social networks to answer random queries in a flexible way. We prototyped RandomDB and conducted experiments to show the effectiveness and the advantages of the system.
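A minimal sketch of the underlying idea (illustrative only; this is not RandomDB's actual extraction pipeline, and the event fields are invented): hash hard-to-predict attributes of social events and use the digest as a bit source.

```python
# Toy randomness harvesting from a social stream: hash unpredictable
# event attributes (author, text, timestamp) into a bit string.
import hashlib

def bits_from_events(events, n_bits=32):
    """Derive n_bits of output from a list of (author, text, ts) events."""
    # Sorting makes the digest independent of arrival order; the entropy
    # comes from the event contents themselves.
    digest = hashlib.sha256(repr(sorted(events)).encode()).hexdigest()
    return bin(int(digest, 16))[2:].zfill(256)[:n_bits]

events = [("@alice", "stuck in traffic", 1700000001),
          ("@bob", "goal!!!", 1700000002)]
bits = bits_from_events(events)
```

A real extractor would also need to debias and rate-limit the harvested bits; the sketch only shows how chaotic crowd activity can be turned into a uniform-looking bit string via a cryptographic hash.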
Approximate query answering relies on a similarity measure that evaluates the relevance, for a given query, of a set of data extracted from the underlying database. In the context of graph-modeled data, many methods (such as subgraph isomorphism, graph edit distance, and maximum common subgraph) have been proposed to address this problem. Unfortunately, they are usually hard to compute, and several drawbacks arise when they are used on RDF data. In this paper, we propose a measure to evaluate the similarity between a (small) graph representing a query and a portion of a (large) graph representing an RDF data set. We show that this measure: (i) can be evaluated in linear time with respect to the size of the given graphs and (ii) guarantees other interesting properties. In order to show the feasibility of our approach, we have used this similarity measure in a technique for approximate query answering. The technique has been implemented in a prototype system, and a number of experimental results obtained with this system confirm the effectiveness of the proposed measure.
Recently, governments have started sharing a large volume of public datasets on the Web. As a consequence, citizens, enterprises, and public administrations can now easily access them and have the opportunity to build crowdsourcing applications, advanced mash-ups, and services in general. However, in order to enable full reuse of the data and meet semantic interoperability requirements, it is crucial to provide them in a standard and machine-readable form, with possible interlinks directly exposed. This paper presents our experience in creating the first interlinked dataset within the context of the Italian Interoperability Framework SPC. The dataset is built from the National Public Administration Registry named IPA; we consider IPA the nucleus of the SPC’s open data, since it includes all the contact information of Italian PAs (e.g., tax codes, e-mail and postal addresses, e-payment references, etc.). The paper describes the Linked Open Data methodology we adopted in building the dataset, which can be applied in the general Public Administration context in order to guarantee semantic interoperability. The paper also introduces our web portal SPCData; the portal has been designed so as to: (i) publish the dataset and its ontology, (ii) provide a freely accessible environment for querying the data, and (iii) offer a set of demonstrative public services that exploit the IPA dataset and the created interlinks.
Most recent approaches to keyword search employ a graph-structured representation of data. Answers to queries are generally sub-structures of the graph containing one or more keywords. While finding the nodes matching keywords is relatively easy, determining the connections between such nodes is a complex problem requiring time-consuming on-the-fly graph exploration. Current techniques suffer either from poor worst-case performance or from indexing schemes that provide little support for the discovery of connections between nodes. In this paper, we present an indexing scheme for RDF that exposes the structural characteristics of the graph, its paths, and information on the reachability of nodes. This knowledge is exploited to expedite the retrieval of the sub-structures representing the query results. In addition, the index is organized to facilitate maintenance operations as the dataset evolves. Experimental results demonstrate the feasibility of our index, which significantly improves query execution performance.
In pursuing the development of Yanii, a novel keyword-based search system over graph structures, in this paper we present a study of the computational complexity of the approach, together with a comparison against current PTIME state-of-the-art solutions. The comparative study focuses on a theoretical analysis of the different frameworks in order to define the complexity ranges, within the polynomial time class, that they correspond to. We characterize such systems in terms of general measures, which describe the behavior of these frameworks according to different aspects that are more general and informative than mere benchmark tests on a few test cases. We show that Yanii outperforms the others, confirming itself as a promising approach deserving further practical investigation and improvement.
International Peer Reviewed Journals
- Roberto De Virgilio, Antonio Maccioni. Random query answering with the crowd. Journal on Data Semantics (JODS), 2015.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. Approximate Querying of RDF Graphs via Path Alignment. Distributed and Parallel Databases (DAPD), 2014.
Random data generators play an important role in computer science and engineering, since they aim at simulating reality in IT systems. Software random data generators cannot be reliable enough for critical applications due to their intrinsic determinism, while hardware random data generators are difficult to integrate within applications and are not affordable in all circumstances. We present an approach that makes use of entropic data sources to carry out the random data generation task. In particular, our approach exploits the chaotic phenomena happening in the crowd. We extract these phenomena from social networks, since they reflect the behavior of the crowd. We have implemented the approach in a database system, RandomDB, to show its efficiency and flexibility compared with competing approaches. We used RandomDB with data from Twitter, Facebook, and Flickr. The experiments show that these social networks are sources from which reliable randomness can be generated and that RandomDB is a system that can be used for this task. Hopefully, our experience will drive the development of a series of applications that reuse the same data in several different scenarios.
A query over RDF data is usually expressed in terms of a matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper, we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced, and we verify experimentally that our framework exhibits excellent behavior with respect to other approaches in terms of both efficiency and effectiveness.
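A toy position-wise path score (an invented measure for illustration, not the paper's alignment technique) shows why comparing paths instead of general subgraphs keeps the cost linear in path length:

```python
# Toy path comparison: two paths, given as label sequences, are compared
# position by position, so the cost is linear in the path length.

def path_similarity(query_path, data_path):
    """Fraction of aligned positions with identical labels."""
    n = max(len(query_path), len(data_path))
    if n == 0:
        return 1.0  # two empty paths are trivially identical
    matches = sum(a == b for a, b in zip(query_path, data_path))
    return matches / n

q = ["author", "wrote", "paper"]
d = ["author", "wrote", "article"]
score = path_similarity(q, d)  # 2 of 3 positions match
```

Subgraph isomorphism, in contrast, has no known polynomial-time algorithm in general, which is the complexity gap the path-based approach exploits.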
Book Chapters
- Giorgia Lodi, Antonio Maccioni, Francesco Tortorelli. SPCData: the Italian Public Administration Data Cloud. In Information and Communication Technologies in Public Administration: Innovations from Developed Countries, CRC Press, Taylor & Francis Group, 2014.
- Roberto De Virgilio, Paolo Cappellari, Antonio Maccioni, Riccardo Torlone. Path-Oriented Keyword Search Query over RDF. In Semantic Search over the Web, Springer-Verlag, 2012.
Peer-reviewed National Workshops/Conferences
- Alessio Conte, Roberto De Virgilio, Antonio Maccioni, Maurizio Patrignani, Riccardo Torlone. Community detection in social networks: Breaking the taboos. 24th Italian Symposium on Advanced Database Systems (SEBD 2016), Italy, 2016.
- Antonio Maccioni, Daniel J. Abadi. On compressing graph databases. New England Database Day 2015 (NEDB 2015), Boston, United States, 2015.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. Modeling Graph Databases (discussion paper). 22nd Italian Symposium on Advanced Database Systems (SEBD 2014), Sorrento Coast, Italy, 2014.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. Building Graphs from Tables. 21st Italian Symposium on Advanced Database Systems (SEBD 2013), Roccella Jonica, Italy, 2013.
- Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone. Using keywords to find the right path through relational data. 20th Italian Symposium on Advanced Database Systems (SEBD 2012), Venice, Italy, 2012.
Other (non peer-reviewed) publications
- Agency for Digital Italy. National Guidelines on the Valorization of the Public Sector Information, 2014. [in Italian] [author] [Edition 2013]
- Antonio Maccioni. Valorise Government Data. In Ecoscienza 3/2013, 2013. [in Italian]
- European Commission. 10 Rules for Persistent URIs, 2013. [reviewer]
- Agency for Digital Italy. Guidelines For Semantic Interoperability Through Linked Open Data, 2012. [in Italian] [author]
- Agency for Digital Italy. An Architecture for the Smart Communities, 2012. [in Italian] [contributor]