1.0.0.M4
Welcome to the Spring Data Graph Guide Book. Thank you for taking the time to get an in-depth look into the usage and workings of the Spring Data Graph library. Spring Data Graph is part of the Spring Data project, which aims to bring the convenient programming model of the Spring Framework to the wide field of modern datastores (mostly NoSQL). Spring Data Graph in particular currently provides integration for the Neo4j graph database.
It was written by developers for developers, and we hope we have created documentation that is well received by our peers.
If you have any feedback on the Spring Data Graph library or this book, please provide it via the SpringSource JIRA, the SpringSource NoSQL Forum, github comments or issues, or the Neo4j mailing list.
This book is presented as a duplex book, a term coined by Martin Fowler. A duplex book consists of at least two parts. The first part is an easily accessible narrative that gives the reader an overview of the topics contained in the book. It contains lots of examples and more general discussion topics. This should be the only part of the book that needs to be read cover-to-cover.
We chose a tutorial describing the creation of a web application (cineasts.net) that allows movie enthusiasts to find their favorite movies, rate them, connect with each other, and enjoy social features. The application runs on Neo4j using Spring Data Graph and the well-known Spring Web Stack.
The second part is the classic reference documentation containing detailed information about the library. It discusses the programming model, the underlying assumptions, the toolset used (such as AspectJ), as well as the APIs for the object-graph mapping and the template approach. The reference documentation should mainly be used to look up concrete bits of information or to dig deeper into certain topics.
In this first part of the book we run a tutorial that creates a complete web application built on top of Spring Data Graph. It uses a domain that should be familiar to most - movies. So for cineasts.net we decided to add a social touch to the whole movie rating business, allowing friends to share their ratings and get recommendations for new friends and movies.
The tutorial is presented as a colloquial description of the steps necessary to create the application. It provides the configuration and code examples that are needed to understand what's happening. The complete source code repository for running the app is available on github.
Once upon a time I wanted to build a social movie database myself. First things first - I had a name: "Cineasts" - the people crazy about movies. So I went ahead and got the domain, cineasts.net. So, the project was almost done.
I had some ideas as well. Of course there should be Actors who play Roles in Movies. I needed the Cineast, too, someone had to rate the movies after all. And while they were there, they could also make friends. Find someone to accompany them to the cinema or share movie preferences. Even better, the engine behind all that could recommend new friends and movies to them, derived from their interests and existing friends.

I looked for possible sources of data. IMDB was my first stop, but they charge 15k USD for data usage. Fortunately I found themoviedb.org, whose data has been built up independently by its users and is provided free for any use case. They also have liberal terms and conditions and a nice API for fetching the data.
There were many more ideas, but I wanted to get something done over the course of one day. So this was the scope I was going to tackle. And this is how it should look.

Being a Spring Developer, I would, of course, choose components of the Spring Framework to do most of the work. I'd already come up with the ideas - that should be enough.
What database would fit both the complex network of cineasts, movies, actors, roles, ratings and friends? And also be able to support the recommendation algorithms that I thought of? I had no idea.
But, wait, there was the new Spring Data project, started in 2010, bringing the convenience of the Spring programming model to NoSQL databases. That should fit my experience and help me get started. I looked at the list of projects supporting the different NoSQL databases. Only one mentioned the kind of social network I was thinking of - Spring Data Graph for Neo4j, a graph database. Neo4j's pitch of "value in relationships" and the accompanying docs looked like what I needed. I decided to give it a try.
To set up the project I created a public github account and began setting up the infrastructure for a Spring web project using Maven as the build system. I added the dependencies for the Spring Framework libraries and put the web.xml for the DispatcherServlet and the applicationContext.xml in the webapp directory.
TODO setup code?
With this setup I was ready for the first spike: creating a simple MovieController showing a static view. Check. Next was the setup for Spring Data Graph. I looked at the README on github and then checked it against the manual. Quite a lot of Maven setup for AspectJ, but otherwise not much to add. I added just a few lines to my Spring configuration.
TODO config code?
I spun up Jetty to see if there were any obvious issues with the config. Check.
The domain model was the next thing I planned to work on. I wanted to flesh it out before diving into library details. Going along with the ideas outlined before, I came up with this. I also peeked at the data model of my import data source, themoviedb.org, to confirm that it matched my expectations.

In Java code this looked like the following. Pretty straightforward.
class Movie {
    int id;
    String title;
    int year;
    Set<Role> cast;
}

class Actor {
    int id;
    String name;
    Set<Movie> filmography;
    Role playedIn(Movie movie, String role);
}

class Role {
    Movie movie;
    Actor actor;
    String role;
}

class User {
    String login;
    String name;
    String password;
    Set<Rating> ratings;
    Set<User> friends;
    Rating rate(Movie movie, int stars, String comment);
    void befriend(User user);
}

class Rating {
    User user;
    Movie movie;
    int stars;
    String comment;
}
I wrote some basic tests to assure that the basic plumbing worked. Check.
Then came the unknown - how to put these domain objects into the graph. First I read up on graph databases, especially Neo4j. Their data model consists of nodes and relationships, both of which can have properties. Relationships as first-class citizens - I liked that. Then there was the possibility to index both nodes and relationships by field/value pairs to quickly get hold of them as starting points for further processing. Other useful operations were manual traversal of relationships and a powerful traversal mechanism based on a query-like TraversalDescription. That all seemed pretty easy.
I also learned that Neo4j is transactional and provides the known ACID guarantees for my data. This was unusual for a NoSQL database, but easier for me to get my head around than non-transactional, eventually consistent persistence. It also meant that I had to manage transactions somehow. Keep that in mind.
enum RelationshipTypes implements RelationshipType { ACTS_IN }

GraphDatabaseService gds = new EmbeddedGraphDatabase("/path/to/store");

Node forest = gds.createNode();
forest.setProperty("title", "Forest Gump");
forest.setProperty("year", 1994);
gds.index().forNodes("movies").add(forest, "id", 1);

Node tom = gds.createNode();
tom.setProperty("name", "Tom Hanks");

Relationship role = tom.createRelationshipTo(forest, ACTS_IN);
role.setProperty("role", "Forest Gump");

Node movie = gds.index().forNodes("movies").get("id", 1).getSingle();
print(movie.getProperty("title"));
for (Relationship acting : movie.getRelationships(ACTS_IN, INCOMING)) {
    Node actor = acting.getOtherNode(movie);
    print(actor.getProperty("name") + " as " + acting.getProperty("role"));
}
But that was the pure graph database. Using it directly in my domain would pollute my classes with lots of graph database details. I didn't want that. Spring Data Graph promised to do the heavy lifting for me, so I checked that out next. Obviously it heavily depends on AspectJ magic, so there would be certain behaviour that was just observable without being visible in my code. But I was going to give it a try.
I looked at the documentation again, found a simple Hello-World example and tried to understand it. The entities were annotated with @NodeEntity; that was simple, so I added the annotation too. Relationships get their own annotation named @RelationshipEntity. Property fields should be taken care of automatically.
Ok, let's put this into a test. How to ensure that a field was persisted to the graph store? There seemed to be two possibilities. The first was to get a GraphDatabaseContext injected and use its getById() method. The other one was a finder approach, which I ignored for now. Let's keep things simple. How to persist an entity and how to get its id? No idea. Further study of the documentation revealed that there are a bunch of methods introduced to the entities by the aspects. That was not obvious. But I found the two that would help me here - entity.persist() and entity.getNodeId().
So my test looked like this.
@Autowired GraphDatabaseContext graphDatabaseContext;

@Test
public void persistedMovieShouldBeRetrievableFromGraphDb() {
    Movie forestGump = new Movie("Forest Gump", 1994).persist();
    Movie retrievedMovie = graphDatabaseContext.getById(forestGump.getNodeId());
    assertEquals("retrieved movie matches persisted one", forestGump, retrievedMovie);
    assertEquals("retrieved movie title matches", "Forest Gump", retrievedMovie.getTitle());
}
That worked, cool. But what about transactions? I hadn't declared the test to be transactional. After further reading I learned that persist() creates an implicit transaction - so it behaves like an EntityManager would. Fine with me. I also learned that for more complex operations on the entities I needed external transactions.
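Just to have it written down, such an externally transactional operation would look roughly like this sketch (the service class and method are purely illustrative, not part of the cineasts code):

@Service
public class MovieImportBatch {
    @Transactional
    public void importBatch(Movie movie, Actor actor) {
        // one external transaction around several persist() calls,
        // instead of one implicit transaction per call
        movie.persist();
        actor.persist();
    }
}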
Then there was an @Indexed annotation for fields. I wanted to try this too. That would guide the next test. I added @Indexed to the id field of the Movie class. This field is intended to represent the external id that will be used in URIs and will be stable across database imports and updates. This time I went with the finder to retrieve my indexed movie.
@NodeEntity
class Movie {
    @Indexed
    int id;
    String title;
    int year;
}

@Autowired FinderFactory finderFactory;

@Test
public void persistedMovieShouldBeRetrievableFromGraphDb() {
    int id = 1;
    Movie forestGump = new Movie(id, "Forest Gump", 1994).persist();
    NodeFinder<Movie> movieFinder = finderFactory.createNodeEntityFinder(Movie.class);
    Movie retrievedMovie = movieFinder.findByPropertyValue("id", id);
    assertEquals("retrieved movie matches persisted one", forestGump, retrievedMovie);
    assertEquals("retrieved movie title matches", "Forest Gump", retrievedMovie.getTitle());
}
TODO This failed with an exception about not being in a transaction. Oh, I forgot to add the @Transactional. So I added it to the test.
That was the first method to add to the repository. So I created a repository for my application, annotated it with @Repository and @Transactional.
@Repository @Transactional
public class CineastsRepository {
    FinderFactory finderFactory;
    NodeFinder<Movie> movieFinder;

    @Autowired
    public CineastsRepository(FinderFactory finderFactory) {
        this.finderFactory = finderFactory;
        this.movieFinder = finderFactory.createNodeEntityFinder(Movie.class);
    }

    public Movie getMovie(int id) {
        return movieFinder.findByPropertyValue("id", id);
    }
}
Next were relationships. Direct relationships didn't require any annotation. Unfortunately I had none of those. So I went for the Role relationship between Movie and Actor. It had to be annotated with @RelationshipEntity and the @StartNode and @EndNode had to be marked. So my Role looked like this:
@RelationshipEntity
class Role {
    @EndNode
    Movie movie;
    @StartNode
    Actor actor;
    String role;
}
When writing a test for that I tried to create the relationship entity with new, but got an exception saying that this was not allowed - some restriction about having only correctly constructed RelationshipEntities. So I remembered a relateTo method from the list of introduced methods on the NodeEntities. After quickly checking it, it turned out to be exactly what I needed. I added the method for connecting movies and actors to the Actor class - that seemed more natural.
public Role playedIn(Movie movie, String roleName) {
    Role role = relateTo(movie, Role.class, "ACTS_IN");
    role.setRole(roleName);
    return role;
}
What was left - accessing those relationships. I already had the appropriate fields in both classes. Time to annotate them correctly. For the fields providing access to the entities on the other side of the relationship this was straightforward: provide the target type again (thanks to Java's type erasure) and the relationship type (which I had learned in the Neo4j lesson before); only the direction was left. It defaults to OUTGOING, so I only had to specify it for the movie.
@NodeEntity
class Movie {
    @Indexed
    int id;
    String title;
    int year;
    @RelatedTo(elementClass = Actor.class, type = "ACTS_IN", direction = Direction.INCOMING)
    Set<Actor> cast;
}

@NodeEntity
class Actor {
    @Indexed
    int id;
    String name;
    @RelatedTo(elementClass = Movie.class, type = "ACTS_IN")
    Set<Movie> filmography;

    public Role playedIn(Movie movie, String roleName) {
        Role role = relateTo(movie, Role.class, "ACTS_IN");
        role.setRole(roleName);
        return role;
    }
}
While reading about those relationship sets, I learned that they are handled by managed collections of Spring Data Graph. So whenever I add something to the set or remove something from it, the change is automatically reflected in the underlying relationships. Neat. But this also meant I mustn't initialize the fields myself - something I will certainly forget at some point, so watch out for it.
I didn't forget to add tests for those, so I could assure myself that the collections worked as advertised (and I also ran into the initialization problem mentioned above).
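One of those tests looked roughly like this (the getters and the Actor constructor are assumptions about my domain code, not something prescribed by the library):

@Test @Transactional
public void addingAnActorToTheCastIsReflectedOnBothSides() {
    Movie movie = new Movie(1, "Forest Gump", 1994).persist();
    Actor actor = new Actor(1, "Tom Hanks").persist();   // assuming a matching constructor
    movie.getCast().add(actor);                          // managed set, creates the ACTS_IN relationship
    assertTrue(actor.getFilmography().contains(movie));  // visible from the other side as well
}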
But I still couldn't access the Role relationships. There was more to read about this. For accessing the relationships between the nodes there was a separate annotation, @RelatedToVia. And I had to declare the field as a read-only Iterable<Role>. That should make sure that I never tried to add Roles (which I couldn't create on my own anyway) to this field. Otherwise the annotation attributes were similar to those used for @RelatedTo. So off I went, creating my first real relationship (just kidding).
@NodeEntity
class Movie {
    @Indexed
    int id;
    String title;
    int year;
    @RelatedTo(elementClass = Actor.class, type = "ACTS_IN", direction = Direction.INCOMING)
    Set<Actor> cast;
    @RelatedToVia(elementClass = Role.class, type = "ACTS_IN", direction = Direction.INCOMING)
    Iterable<Role> roles;
}
After the tests proved that those relationship fields really mirrored the underlying relationships in the graph and instantly reflected additions and removals I was satisfied with my domain so far and went for some coffee and chocolate.
Time to put this on display. But I needed some test data first. So I wrote a small class for populating the database which could be called from my controller. To make it safe to call it several times I added index lookups to check for existing entries. A simple /populate endpoint for the controller that called it would be enough for now.
TODO code
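In essence it boiled down to something like this sketch (class and method names are illustrative only):

@Service
public class DatabasePopulator {
    @Autowired CineastsRepository repository;

    @Transactional
    public void populateDatabase() {
        if (repository.getMovie(1) != null) return;  // index lookup keeps repeated calls harmless
        Movie forestGump = new Movie(1, "Forest Gump", 1994).persist();
        Actor tomHanks = new Actor(1, "Tom Hanks").persist();  // assuming a matching constructor
        tomHanks.playedIn(forestGump, "Forest Gump");
    }
}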
After filling the database I wanted to see what the graph looked like. So I checked out two tools that are available for inspecting the graph. First was Neoclipse, an Eclipse RCP application (or plugin) that connects to existing graph stores and visualizes their content. After getting an exception about concurrent access, I learned that I had to use Neoclipse in read-only mode while my webapp had an active connection to the store. Good to know.
TODO neoclipse image
Besides my movies and actors connected by ACTS_IN relationships there were some other nodes. The reference node which is kind of a root node in Neo4j and can be used to anchor subgraphs for easier access. And Spring Data Graph also represented the type hierarchy of my entities in the graph. Obviously for some internal housekeeping and type checking.
For us console junkies there is also a shell that can reach into a running Neo4j store (if it was started with the enableRemoteShell option) or provide read-only access to a graph store directory.
neo4j-shell -readonly -path /path/to/my/graphdb
It uses some shell metaphors like cd and ls to navigate the graph. There are also more advanced commands for using indexes and traversals. I tried to play around with them in this shell session.
TODO shell session
After I had the means to put some data into the graph database, I also wanted to show it. Adding the controller method to show a single movie with its attributes and cast in a JSP was straightforward: just use the repository to look the movie up, add it to the model, forward to the /movies/show view, and voilà.
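The controller method itself was hardly worth mentioning; roughly this (the request mapping is an assumption about my URL scheme, repository is the CineastsRepository from above injected into the controller):

@RequestMapping(value = "/movies/{id}", method = RequestMethod.GET)
public String showMovie(@PathVariable("id") int id, Model model) {
    model.addAttribute("movie", repository.getMovie(id));
    return "/movies/show";
}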
TODO screenshot of movie display
The next thing was to allow users to search for movies, so I needed some fulltext-search capabilities. As the index provider implementation of Neo4j is built on Lucene, I was delighted to see that fulltext indexes are supported out of the box.
So I happily annotated the title field of my Movie class with @Indexed(fulltext = true), and was told by an exception that I had to specify a separate index name for that. So it became @Indexed(fulltext = true, indexName = "search"). The corresponding finder method is called findAllByQuery. So there was my second repository method, for searching movies. To restrict the size of the returned set I simply added a limit for now that cuts the result off after that many entries.
public List<Movie> searchForMovie(String query, int count) {
    List<Movie> movies = new ArrayList<Movie>(count);
    for (Movie movie : movieFinder.findAllByQuery("title", query)) {
        movies.add(movie);
        if (--count == 0) break;
    }
    return movies;
}
To use the user in the webapp I had to put it in the session and add login and registration pages. Of course the pages that only worked with a valid user account had to be secured as well.
I used Spring Security to do that, writing a simple UserDetailsService that used my repository for looking up the users and validating their credentials. The configuration is located in a separate applicationContext-security.xml.
<security:global-method-security secured-annotations="enabled">
</security:global-method-security>
<security:http auto-config="true" access-denied-page="/auth/denied"> <!-- use-expressions="true" -->
<security:intercept-url pattern="/admin/*" access="ROLE_ADMIN"/>
<security:intercept-url pattern="/import/*" access="ROLE_ADMIN"/>
<security:intercept-url pattern="/user/*" access="ROLE_USER"/>
<security:intercept-url pattern="/auth/login" access="IS_AUTHENTICATED_ANONYMOUSLY"/>
<security:intercept-url pattern="/auth/register" access="IS_AUTHENTICATED_ANONYMOUSLY"/>
<security:intercept-url pattern="/**" access="IS_AUTHENTICATED_ANONYMOUSLY"/>
<security:form-login login-page="/auth/login" authentication-failure-url="/auth/login?login_error=true"
default-target-url="/user"/>
<security:logout logout-url="/auth/logout" logout-success-url="/" invalidate-session="true"/>
</security:http>
<security:authentication-manager>
<security:authentication-provider user-service-ref="userDetailsService">
<security:password-encoder hash="md5">
<security:salt-source system-wide="cewuiqwzie"/>
</security:password-encoder>
</security:authentication-provider>
</security:authentication-manager>
<bean id="userDetailsService" class="org.neo4j.movies.service.CineastsUserDetailsService"/>
@Service
public class CineastsUserDetailsService implements UserDetailsService, InitializingBean {
@Autowired private FinderFactory finderFactory;
private NodeFinder<User> userFinder;
@Override
public void afterPropertiesSet() throws Exception {
userFinder = finderFactory.createNodeEntityFinder(User.class);
}
@Override
public UserDetails loadUserByUsername(String login) throws UsernameNotFoundException, DataAccessException {
final User user = findUser(login);
if (user==null) throw new UsernameNotFoundException("Username not found",login);
return new CineastsUserDetails(user);
}
public User findUser(String login) {
return userFinder.findByPropertyValue("users","login",login);
}
}
public class CineastsUserDetails implements UserDetails {
private final User user;
public CineastsUserDetails(User user) {
this.user = user;
}
@Override
public Collection<GrantedAuthority> getAuthorities() {
User.Roles[] roles = user.getRoles();
if (roles ==null) return Collections.emptyList();
return Arrays.<GrantedAuthority>asList(roles);
}
@Override
public String getPassword() {
return user.getPassword();
}
@Override
public String getUsername() {
return user.getLogin();
}
....
public User getUser() {
return user;
}
}
After that, a logged-in user was available in the session and could thus be used for all the social interactions. Most of the work done next was adding controller methods and JSPs for the views.
But this was just a plain old movie database (POMD). My idea of socializing this business was not realized.
So I took the User class that I had already coded up and made it a full-fledged Spring Data Graph member.
@NodeEntity
class User {
    @Indexed
    String login;
    String name;
    String password;

    @RelatedToVia(elementClass = Rating.class, type = "RATED")
    Iterable<Rating> ratings;

    @RelatedTo(elementClass = User.class, type = "FRIEND")
    Set<User> friends;

    public Rating rate(Movie movie, int stars, String comment) {
        return relateTo(movie, Rating.class, "RATED").rate(stars, comment);
    }

    public void befriend(User user) {
        this.friends.add(user);
    }
}

@RelationshipEntity
class Rating {
    @StartNode User user;
    @EndNode Movie movie;
    int stars;
    String comment;

    public Rating rate(int stars, String comment) {
        this.stars = stars; this.comment = comment;
        return this;
    }
}
I also put a ratings field into the Movie class to be able to show its ratings, and added a method to average the stars it got.
class Movie {
    @RelatedToVia(elementClass = Rating.class, type = "RATED", direction = Direction.INCOMING)
    Iterable<Rating> ratings;

    public int getStars() {
        int stars = 0, count = 0;
        for (Rating rating : ratings) {
            stars += rating.getStars(); count++;
        }
        return count == 0 ? 0 : stars / count;
    }
}
Fortunately my tests showed me the division-by-zero error when calculating the stars for a movie without ratings. I also added a few users and ratings to the database population code, and three methods to the repository: to rate movies, to look up users, and to add friends.
TODO code
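Stripped down, those three methods were little more than thin wrappers around the entity methods (signatures are illustrative; the userFinder is created just like the movieFinder):

// userFinder created via finderFactory.createNodeEntityFinder(User.class)
public Rating rateMovie(User user, Movie movie, int stars, String comment) {
    return user.rate(movie, stars, comment);
}

public User getUser(String login) {
    return userFinder.findByPropertyValue("login", login);
}

public void addFriend(User user, User friend) {
    user.befriend(friend);
}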
Now it was time to pull the data from themoviedb.org. Registering there and getting an API key was simple, using the API on the commandline with curl too. Looking at the JSON returned for movies and people I decided to pimp my domain model and add some more fields so that the representation in the UI was worth the effort.
For the import process I created a separate importer that used HttpClient and JSON to fetch and parse the data and then some transactional methods to actually insert it as movies, roles and actors. User data was not available so I created an anonymous user called 'Cineast' that I attributed all the ratings and comments to. I also created a version of the importer that read the json files from local disk, so that I didn't have to strain the remote API that much and that often.
JSON sample for movie & actor
Code for importer & mapper
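The transactional heart of the importer, with the HTTP and JSON plumbing stripped away, was a sketch along these lines (the field names of the parsed data are assumptions about the themoviedb.org format; movieFinder is created via the FinderFactory as before):

@Transactional
public Movie importMovie(Map<String, Object> movieData) {
    int id = (Integer) movieData.get("id");
    Movie movie = movieFinder.findByPropertyValue("id", id);  // skip movies that were already imported
    if (movie == null) {
        movie = new Movie(id, (String) movieData.get("name"), (Integer) movieData.get("year")).persist();
    }
    return movie;
}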
In the last part of this exercise I wanted to add some recommendation algorithms to my app. One was the recommendation of movies that my friends liked very much (and their friends in descending importance). The second was recommendations for new friends that also liked the movies that I liked most.
Implementing this kind of ranking algorithm is the real fun with graph databases. They are applied to the graph by traversing it in a certain order, collecting information on the way, and deciding which paths to follow and what to include in the results.
Let's say I'm only interested in the top 10 recommendations each.
// TODO work in progress - score contribution: 1/path.length() * stars
TraversalDescription recommendations = Traversal.description()
    .breadthFirst()
    .relationships(FRIEND, OUTGOING)
    .relationships(RATED, OUTGOING)
    .evaluator(new Evaluator() {
        public Evaluation evaluate(Path path) {
            if (path.length() > 5) return Evaluation.EXCLUDE_AND_PRUNE;
            Relationship rating = path.lastRelationship();
            if (rating != null && rating.isType(RATED)) {
                // rating.getProperty("stars") would feed into the score here
                return Evaluation.INCLUDE_AND_PRUNE;
            }
            return Evaluation.INCLUDE_AND_CONTINUE;
        }
    });
This is the reference part of the book. It has information about the programming model, APIs, concepts, and annotations of Spring Data Graph.
The Spring Data Graph project applies core Spring concepts to the development of solutions using a graph-style data store. The basic approach is to mark simple POJO entities with Spring Data Graph annotations. This enables the AspectJ aspects contained in the framework to adapt instantiation and field access so that entities are stored in and retrieved from the graph store. Entities are mapped to nodes of the graph; references to other entities are represented by relationships. There are also special relationship entities that provide access to the properties of graph relationships.
For the developer of a Spring Data Graph backed application only the public annotations are relevant; basic knowledge of graph stores is only needed to access advanced functionality like traversals. Traversal results can also be mapped to fields of entities.
Neo4j is a graph database. It is a fully transactional database that stores data structured as graphs. A graph consists of nodes, connected by relationships. It is a flexible data structure that allows for high query performance on complex data, while being intuitive for the developer.
Neo4j has been in commercial development for 10 years and in production for over 7 years. It is a mature and robust graph database.
In addition, Neo4j includes the usual database features: ACID transactions, durable persistence, concurrency control, transaction recovery, high availability and everything else you’d expect from an enterprise database. Neo4j is released under a dual free software/commercial license model.
A graph database is a storage engine that is specialized in storing and retrieving vast networks of data. It efficiently stores nodes and relationships and allows high-performance traversal of those structures. With property graphs it is possible to add an arbitrary number of properties to nodes and relationships.
The interface org.neo4j.graphdb.GraphDatabaseService provides access to the storage engine. Its features include creating and retrieving Nodes and Relationships, managing indexes via an IndexManager, database lifecycle callbacks, transaction management, and more.
The EmbeddedGraphDatabase is an implementation of GraphDatabaseService that is used to embed Neo4j in a Java application. This implementation provides the highest and tightest integration. There are other, remote implementations that provide access to Neo4j stores via REST.
Using the API of GraphDatabaseService it is easy to create nodes and relate them to each other. Relationships are named. Both nodes and relationships can have properties. Property values can be primitive Java types and Strings, byte arrays for binary data, or arrays of other Java primitives or Strings. Node creation and modification has to happen within a transaction, while reading from the graph store can be done with or without a transaction.
GraphDatabaseService graphDb = new EmbeddedGraphDatabase("helloworld");
Transaction tx = graphDb.beginTx();
try {
    Node firstNode = graphDb.createNode();
    Node secondNode = graphDb.createNode();
    firstNode.setProperty("message", "Hello, ");
    secondNode.setProperty("message", "world!");
    Relationship relationship = firstNode.createRelationshipTo(secondNode,
            DynamicRelationshipType.withName("KNOWS"));
    relationship.setProperty("message", "brave Neo4j ");
    tx.success();
} finally {
    tx.finish();
}
Getting a single node or relationship and examining it is not the main use case of a graph database. Fast graph traversal and application of graph algorithms are. Neo4j provides means via a concise DSL to define TraversalDescriptions that can then be applied to a start node and will produce a stream of nodes and/or relationships as a lazy result using an Iterable.
TraversalDescription traversalDescription = Traversal.description()
.depthFirst()
.relationships( KNOWS )
.relationships( LIKES, Direction.INCOMING )
.prune( Traversal.pruneAfterDepth( 5 ) );
for ( Path position : traversalDescription.traverse( myStartNode )) {
System.out.println( "Path from start node to current position is " + position );
}
The best way for retrieving start nodes for traversals is using Neo4j's index facilities. The GraphDatabaseService provides access to the IndexManager which in turn retrieves named indexes for nodes and relationships. Both can be indexed with property names and values. Retrieval is done by query methods on Index to return an IndexHits iterator.
IndexManager indexManager = graphDb.index();
Index<Node> nodeIndex = indexManager.forNodes("a-node-index");
nodeIndex.add(node, "property", "value");
for (Node foundNode : nodeIndex.get("property", "value")) {
    assert foundNode.getProperty("property").equals("value");
}
Note: Spring Data Graph provides auto-indexing via the @Indexed annotation, while this still is a manual process when using the Neo4j API.
This chapter covers the fundamentals of the programming model behind Spring Data Graph. It discusses the AspectJ features used and the annotations provided by Spring Data Graph and how to use them. Examples for this section are taken from the imdb project of Spring Data Graph examples.
Behind the scenes Spring Data Graph leverages AspectJ aspects to modify the behavior of simple POJO entities to be able to be backed by a graph store. Each entity is backed by a node that holds its properties and relationships to other entities. AspectJ is used to intercept field access and to reroute it to the backing state (either its properties or relationships). For relationship entities the fields are similarly mapped to properties. There are two specially annotated fields for the start and the end node of the relationship.
The aspect introduces some internal fields and some public methods to the entities, for accessing the backing
state via getPersistentState(), creating relationships with relateTo,
and retrieving relationship entities via getRelationshipTo. It also introduces finder methods like
find(Class<? extends NodeEntity>, TraversalDescription),
as well as equals and hashCode delegation.
Spring Data Graph internally uses an abstraction called EntityState that the field access and instantiation advices of the aspect delegate to, keeping the aspect code small and focused on the pointcuts and delegation. The EntityState then uses a number of FieldAccessor factories to create a FieldAccessor instance per field that does the specific handling needed for the concrete field.
Entities are declared using the @NodeEntity annotation. Relationship entities use the @RelationshipEntity annotation.
The @NodeEntity annotation is used to declare a POJO entity to be backed by a node in the
graph store. Simple fields on the entity are mapped by default to properties of the node. Object
references to other NodeEntities (whether single or Collection) are mapped via relationships. If
the annotation parameter useShortNames is set to false, the properties and relationship
names used will be prepended with the class name of the entity. If the parameter fullIndex
is set to true, all fields of the entity will be indexed. If the partial
parameter is set to true, this entity takes part in a cross-store setting where only
the parts of the entity not handled by JPA will be mapped to the graph store.
Entity fields can be annotated with @GraphProperty, @RelatedTo, @RelatedToVia, @Indexed and @GraphId.
@NodeEntity
public class Movie {
String title;
}
To access the rich data model of graph relationships, POJOs can also be annotated with
@RelationshipEntity. Relationship entities can't be instantiated directly but are rather accessed via
node entities, either by @RelatedToVia fields or by the relateTo or
getRelationshipTo methods.
Relationship entities may contain fields that are mapped to properties and two special fields that are
annotated with @StartNode and @EndNode which point to the start and end node entities respectively. These
fields are treated as read only fields.
@RelationshipEntity
public class Role {
@StartNode
private Actor actor;
@EndNode
private Movie movie;
}
It is not necessary to annotate fields, as they are persisted by default; all fields that contain primitive values are persisted directly to the graph. All fields convertible to String using the Spring conversion services are stored as a string. Transient fields are not persisted. The @GraphProperty annotation is therefore mainly used for cross-store persistence.
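A small sketch of what this means in practice (the field names are illustrative):

@NodeEntity
public class Movie {
    String title;                // persisted automatically as a node property
    transient int cachedRating;  // transient fields are not persisted
    @GraphProperty
    String tagline;              // explicit declaration, mainly needed for cross-store (partial) entities
}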
Relationships to other NodeEntities are mapped to graph relationships. Those can either be single
relationships (1:1) or multiple relationships (1:n). In most cases single relationships to other
node entities don't have to be annotated as Spring Data Graph can extract all necessary information
from the field using reflection. In the case of multiple relationships, the elementClass
parameter of @RelatedTo must be specified because of type erasure. The direction
(default OUTGOING) and type (inferred from field name) parameters of the annotation are
optional.
Relationships to single node entities are created when setting the field and deleted when setting it to null. For multi-relationships the field provides a managed collection (Set) that handles addition and removal of node entities and reflects those in the graph relationships.
@NodeEntity
public class Movie {
private Actor topActor;
}
@NodeEntity
public class Person {
@RelatedTo(type = "topActor", direction = Direction.INCOMING)
private Movie wasTopActorIn;
}
@NodeEntity
public class Actor {
@RelatedTo(type = "ACTS_IN", elementClass = Movie.class)
private Set<Movie> movies;
}
To provide easy programmatic access to the richer relationship entities of the data model a different annotation @RelatedToVia can be declared on fields of Iterables of the relationship entity type. These Iterables then provide read only access to instances of the entity that backs the relationship of this relationship type. Those instances are initialized with the properties of the relationship and the start and end node.
@NodeEntity
public class Actor {
@RelatedToVia(type = "ACTS_IN", elementClass = Role.class)
private Iterable<Role> roles;
}
The @Indexed annotation can be declared on fields that are intended to be indexed by the Neo4j IndexManager, triggered by value modification. The resulting index can be used to later retrieve nodes or relationships that contain a certain property value (for example a name). Often an index is used to establish the start node for a traversal. Indexes are accessed by a Finder for a particular NodeEntity or RelationshipEntity, created via a FinderFactory.
GraphDatabaseContext exposes the indexes for nodes and relationships. Indexes can be named, for instance to keep separate domain concepts in separate indexes. That's why it is possible to specify an index name with the @Indexed annotation. It can also be specified at the entity level; this name is then the default index name for all fields of the entity. If no index name is specified, it defaults to the one configured with Neo4j ("node" and "relationship").
The @GraphTraversal annotation leverages the delegation infrastructure used by the Spring Data Graph
aspects. It provides dynamic fields which, when accessed, return an Iterable of NodeEntities that are
the result of a traversal starting at the current NodeEntity. The TraversalDescription used for this
is created by a TraversalDescriptionBuilder whose class is referred to by the traversalBuilder
attribute of the annotation. The class of the expected NodeEntities is provided with the
elementClass attribute.
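A sketch of how such a field could be declared is shown below. The builder interface name and its exact signature are assumptions here and should be checked against the API of the release in use.

@NodeEntity
public class Group {
    @GraphTraversal(traversalBuilder = PeopleTraversalBuilder.class, elementClass = Person.class)
    Iterable<Person> people;

    private static class PeopleTraversalBuilder implements FieldTraversalDescriptionBuilder {
        public TraversalDescription build(NodeBacked start, Field field) {
            // traverse the "people" relationships and return everything but the group node itself
            return Traversal.description()
                    .relationships(DynamicRelationshipType.withName("people"))
                    .filter(Traversal.returnAllButStartNode());
        }
    }
}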
The Neo4j graph database can use different index providers for exact lookups and fulltext searches. Lucene is used as the index provider implementation. There is support for distinct indexes for nodes and relationships, which can be configured to be of fulltext or exact type.
Using the standard Neo4j API, Nodes and Relationships and their indexed field-value combinations
have to be added manually to the appropriate index. When using Spring Data Graph, this task is simplified by
applying an @Indexed annotation on entity fields. This will result in updates to the
index on every change. Numerical fields are indexed numerically so that they are available for range queries.
All other fields are indexed with their string representation. The @Indexed annotation can also set the
index-name to be used. If @Indexed annotates the entity class, the index-name for the whole entity is preset
to that value. Not providing index names defaults them to "node" and "relationship" respectively.
Query access to the index happens with the Node- and RelationshipFinders that are created via an instance of
org.springframework.data.graph.neo4j.finder.FinderFactory. The methods
findByPropertyValue and findAllByPropertyValue work on the exact indexes and
return the first or all matches. To do range queries, use findAllByRange (please note that
currently both values are inclusive).
@NodeEntity
class Person {
@Indexed(indexName = "people")
String name;
// automatically indexed numerically
@Indexed
int age;
}
@NodeEntity
@Indexed(indexName="groups")
class Group {
@Indexed
String name;
@RelatedTo(elementClass = Person.class, type = "people" )
Set<Person> people;
}
NodeFinder<Person> finder = finderFactory.createNodeEntityFinder(Person.class);
// exact finder
Person mark = finder.findByPropertyValue("people", "name", "mark");
// numeric range queries
for (Person middleAgedDeveloper : finder.findAllByRange(null, "age", 20, 40)) {
    Developer developer = middleAgedDeveloper.projectTo(Developer.class);
}
Neo4jTemplate also offers index support, providing auto-indexing for fields at creation time of nodes and
relationships. There is an autoIndex method that can also add indexes for a set of fields in one
go.
For querying the index, the template offers query-methods that take either the exact match parameters or a query
object / query expression and push the results wrapped uniformly as Paths to the supplied
PathMapper to be converted or collected.
Spring Data Graph also comes with a type bound Repository-like Finder implementation that provides methods for locating nodes and relationships:
using direct access findById(id),
iterating over all nodes of a node entity type (findAll),
counting the instances of a node entity type (count),
iterating over all indexed instances with a certain property value (findAllByPropertyValue),
getting a single instance with a certain property value (findByPropertyValue),
iterating over all indexed instances within a certain numerical range (inclusive) (findAllByRange),
iterating over a traversal result (findAllByTraversal).
The Finder instances are created via a FinderFactory to be bound to a concrete node or relationship entity class. The FinderFactory is created in the Spring context and can be injected.
NodeFinder<Person> finder = finderFactory.createNodeEntityFinder(Person.class);
Person dave = finder.findById(123);
int people = finder.count();
Person mark = finder.findByPropertyValue("name", "mark");
Iterable<Person> devs = finder.findAllByPropertyValue("occupation", "developer");
Iterable<Person> davesFriends = finder.findAllByTraversal(dave,
        Traversal.description().pruneAfterDepth(1)
                .relationships(KNOWS).filter(returnAllButStartNode()));
Neo4j is a transactional datastore which only allows modifications within transaction boundaries and fulfills the ACID properties. Reading from the store is also possible outside of transactions.
Spring Data Graph integrates with transaction managers configured using Spring. The simplest scenario of
just running the graph database uses a SpringTransactionManager provided by the Neo4j kernel to be used
with Spring's JtaTransactionManager.
Note: The explicit XML configuration given below is encoded in the Neo4jConfiguration
configuration bean that uses Spring's @Configuration functionality. This simplifies the configuration.
An example is shown further below.
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager">
<bean class="org.neo4j.kernel.impl.transaction.SpringTransactionManager">
<constructor-arg ref="graphDatabaseService"/>
</bean>
</property>
<property name="userTransaction">
<bean class="org.neo4j.kernel.impl.transaction.UserTransactionImpl">
<constructor-arg ref="graphDatabaseService"/>
</bean>
</property>
</bean>
<tx:annotation-driven mode="aspectj" transaction-manager="transactionManager"/>
For scenarios running multiple transactional resources there are two options. The first is to have Neo4j participate in the externally configured transaction manager, using the new Spring provider, by enabling the respective configuration parameter for your graph database - either via the Spring config or via the Neo4j configuration file (neo4j.properties).
<context:annotation-config />
<context:spring-configured/>
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager">
<bean id="jotm" class="org.springframework.data.graph.neo4j.transaction.JotmFactoryBean"/>
</property>
</bean>
<bean class="org.neo4j.kernel.EmbeddedGraphDatabase" destroy-method="shutdown">
<constructor-arg value="target/test-db"/>
<constructor-arg>
<map>
<entry key="tx_manager_impl" value="spring-jta"/>
</map>
</constructor-arg>
</bean>
<tx:annotation-driven mode="aspectj" transaction-manager="transactionManager"/>
You can configure a stock XA transaction manager to be used with Neo4j and the other resources (e.g. Atomikos,
JOTM, App-Server-TM). For a less safe but faster, best-effort one-phase commit, use the implementation that comes
with Spring Data Graph (ChainedTransactionManager). It takes a list of transaction managers as constructor
parameters and handles them in order for transaction start, and in reverse order for commit (or rollback).
<bean id="transactionManager"
class="org.springframework.data.graph.neo4j.transaction.ChainedTransactionManager" >
<constructor-arg>
<list>
<bean class="org.springframework.orm.jpa.JpaTransactionManager" id="jpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
<bean
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager">
<bean class="org.neo4j.kernel.impl.transaction.SpringTransactionManager">
<constructor-arg ref="graphDatabaseService" />
</bean>
</property>
<property name="userTransaction">
<bean class="org.neo4j.kernel.impl.transaction.UserTransactionImpl">
<constructor-arg ref="graphDatabaseService" />
</bean>
</property>
</bean>
</list>
</constructor-arg>
</bean>
By default newly created node entities are in a detached state. When persist() is called on the
entity it is attached to the graph store and its properties and relationships are persisted as well. Changing
an attached entity inside a transaction will write through the changes to the datastore. Whenever an entity
is changed outside of a transaction it will be considered detached. The changed data is stored in the entity
itself and not written back to the datastore.
All entities that are returned by library functions are initially in an attached state. Changing them outside
of a transaction detaches them. For writing the changes back it is necessary to persist() them
again.
Persisting an entity not only persists that single entity, but traverses its existing and new relationships and persists the cluster of detached entities that it is part of. The borders of this cluster are formed by attached entities. The persist operation creates its own implicit transaction; when it is called within an external transaction it participates in that transaction, otherwise it is an atomic operation.
Please keep in mind that the session handling behaviour is still under heavy development. The defaults and other aspects of the behaviour are likely to change in subsequent releases. At the moment there is no support for creating relationships outside of transactions, and more complex operations like creating whole subgraphs outside of transactions are not supported either.
@NodeEntity
class Person {
String name;
}
Person p = new Person().persist();
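The following sketch illustrates the attach/detach rules described above (field access assumes code in the same package or suitable accessors):

Person p = new Person();   // detached, lives only in memory
p.name = "Michael";
p.persist();               // attached, written to the graph in an implicit transaction

p.name = "Emil";           // changed outside a transaction: the entity is detached again,
                           // the change is only held in the POJO
p.persist();               // writes the change back and re-attaches the entity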
There are several ways to represent the Java type hierarchy of the data model in the graph. In general for all
node and relationship entities type information is needed to perform certain repository operations. That's
why the hierarchy up to java.lang.Object of all these classes will be persisted in the graph.
Implementations of NodeTypeStrategy take care of persisting this information on entity instance
creation. They also provide the repository methods that use this type information to perform their operations
like findAll, count etc.
The current implementation uses nodes to represent the Java type hierarchy which are connected via SUBCLASS_OF relationships to their superclass nodes and via INSTANCE_OF relationships to the concrete node entity instance node.
An alternative approach could use indexing operations to perform the same functionality. Or one could skip the NodeTypeStrategy altogether if no strict checks on type conformity are needed, which would allow for a much more flexible data model.
The node and relationship aspects introduce (via ITD - inter-type declaration) several methods to the entities that make common tasks easier. Unfortunately these methods are not generified yet, so the results have to be cast to the correct return type (a short usage example follows the list below).
nodeEntity.getNodeId() and relationshipEntity.getRelationshipId()
entity.getPersistentState()
entity.equals() and entity.hashCode()
nodeEntity.relateTo(targetEntity, relationshipClass, relationshipType)
nodeEntity.getRelationshipTo(targetEntity, relationshipClass, relationshipType)
nodeEntity.removeRelationshipTo(targetEntity, relationshipType)
entity.remove()
entity.projectTo(targetClass)
nodeEntity.findAllByTraversal(targetType, traversalDescription)
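A short example of some of these methods, reusing the movie domain from the tutorial (variable names are illustrative):

Node node = tomHanks.getPersistentState();
Role role = (Role) tomHanks.relateTo(forestGump, Role.class, "ACTS_IN");
Role sameRole = (Role) tomHanks.getRelationshipTo(forestGump, Role.class, "ACTS_IN");
tomHanks.removeRelationshipTo(forestGump, "ACTS_IN");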
As the underlying data model of a graph database doesn't impose and enforce strict type constraints like a relational model does, it offers much more flexibility in how to model your domain classes and which of those to use in different contexts.
For instance an order can be used in these contexts: customer, procurement, logistics, billing, fulfillment, and many more. Each of those contexts requires its own distinct set of attributes and operations. As Java doesn't support mixins, one would have to put the sum of all of those into the entity class, thereby making it very big, brittle, and hard to understand. Being able to take a basic order and project it to a different order type (not related in the inheritance hierarchy, not even an interface) that is valid in the current context and only offers the attributes and methods needed there would be very beneficial.
Spring Data Graph offers initial support for projecting node and relationship entities to different target types. All instances of this projected entity share the same backing node or relationship, so data changes are reflected immediately.
This could for instance also be used to handle nodes of a traversal with a unified (simpler) type (e.g. for reporting or auditing) and only project them to a concrete, more functional target type when the business logic requires it.
// not related to Person at all
@NodeEntity
class Trainee {
    String name;
    @RelatedTo(elementClass = Training.class)
    Set<Training> trainings;
}

for (Person person : finder.findAllByPropertyValue("occupation", "developer")) {
    Developer developer = person.projectTo(Developer.class);
    if (developer.isJavaDeveloper()) {
        trainInSpringData(developer.projectTo(Trainee.class));
    }
}
The Neo4jTemplate offers the convenient API of Spring templates for the Neo4j graph database.
There are methods for creating nodes and relationships that automatically set provided properties and optionally
index certain fields. Other methods (index, autoindex) will index them.
For the querying operations Neo4jTemplate unifies the results with the Path abstraction that
comes from Neo4j. Much like a result set, a path contains nodes() and relationships(),
starting at a startNode() and ending with an endNode(); the
lastRelationship() is also available separately. The Path abstraction also wraps
results that contain just nodes or relationships. Using implementations of PathMapper<T>
and PathMapper.WithoutResult (comparable with RowMapper and
RowCallbackHandler) the paths can be converted to Java objects.
Query methods either take a field / value combination to look for exact matches in the index or a lucene query object or string to handle more complex queries.
Traversal methods are the bread and butter of graph operations. As such, they are fully supported in the
Neo4jTemplate. The traverseNext method traverses to the direct neighbours of the
start node filtering the relationships according to its parameters.
The traverse method covers the full fledged traversal operation that takes a powerful
TraversalDescription (most probably built from the Traversal.description()
DSL) and runs it from the start node. Each path that is returned via the traversal is passed to the
PathMapper to be processed accordingly.
The Neo4jTemplate provides configurable implicit transactions for all its methods. By default
it creates a transaction for each call (which is a no-op if there is already a transaction running). If
you call the constructor with the useExplicitTransactions parameter set to true, it won't
create any transactions so you have to provide them using @Transactional or the TransactionTemplate.
Neo4jOperations neo = new Neo4jTemplate(graphDatabase);
Node michael = neo.createNode(_("name", "Michael"), "name");
Node mark = neo.createNode(_("name", "Mark"));
Node thomas = neo.createNode(_("name", "Thomas"));
neo.createRelationship(mark, thomas, WORKS_WITH, _("project", "spring-data"));
neo.index("devs", thomas, "name", "Thomas");
neo.autoIndex("devs", mark, "name");
assert "Mark".equals(neo.query("devs", "name", "Mark", new NodeNamePathMapper()));
Spring Data Graph supports property-based validation. Whenever a property is changed, it is checked against the annotated constraints (e.g. @Min, @Max, @Size). Validation errors throw a ValidationException. The constraints are evaluated using the validation support that comes with Spring. To use it, a validator has to be registered with the GraphDatabaseContext (any registered Validator or (Local)ValidatorFactoryBean will be used); if there is none, no validation is performed.
@NodeEntity
class Person {
@Size(min = 3, max = 20)
String name;
@Min(0)
@Max(100)
int age;
}
To use Spring Data Graph in your application, some setup is required. For building the application the necessary Maven dependencies must be included and for the AspectJ weaving some extensions of the compile goal are necessary. This chapter also discusses the Spring configuration needed to set up Spring Data Graph. Examples for this setup can be found in the Spring Data Graph examples.
As stated in the requirements chapter, Spring Data Graph projects are easiest to build with Apache Maven. The main dependencies are Spring Data Graph itself, Spring Data Commons, some parts of the Spring Framework and of course the Neo4j graph database.
The milestone releases of Spring Data Graph are available from the dedicated milestone repository. Neo4j releases and milestones are available from Maven Central.
<repository>
    <id>spring-maven-milestone</id>
    <name>Springframework Maven Repository</name>
    <url>http://maven.springframework.org/milestone</url>
</repository>
The dependency on spring-data-neo4j
should transitively pull in Spring Framework (core, context, aop,
aspects, tx), AspectJ, Neo4j and Spring Data Commons. If you already use these (or different versions of
these) in your project, then include those dependencies on your own.
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-neo4j</artifactId>
    <version>1.0.0.M4</version>
</dependency>
<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>1.6.11.M2</version>
</dependency>
As Spring Data Graph uses AspectJ for build-time aspect weaving of your entities, it is necessary to add the aspectj-maven-plugin to the build phases. The plugin has its own dependencies. You also need to explicitly specify the libraries containing aspects (spring-aspects and spring-data-neo4j).
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.0</version>
<dependencies>
<!-- NB: You must use Maven 2.0.9 or above or these are ignored (see MNG-2972) -->
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjrt</artifactId>
<version>1.6.11.M2</version>
</dependency>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjtools</artifactId>
<version>1.6.11.M2</version>
</dependency>
</dependencies>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>test-compile</goal>
</goals>
</execution>
</executions>
<configuration>
<outxml>true</outxml>
<aspectLibraries>
<aspectLibrary>
<groupId>org.springframework</groupId>
<artifactId>spring-aspects</artifactId>
</aspectLibrary>
<aspectLibrary>
<groupId>org.springframework.data</groupId>
<artifactId>spring-datastore-neo4j</artifactId>
</aspectLibrary>
</aspectLibraries>
<source>1.6</source>
<target>1.6</target>
</configuration>
</plugin>
The concrete configuration for Spring Data Graph is quite verbose as there is no autowiring involved. It sets up the following parts.
GraphDatabaseService, IndexManager for the embedded Neo4j storage engine
Spring transaction manager, Neo4j transaction manager
aspects and instantiators for node and relationship entities
EntityState and FieldAccessFactories needed for the different field handling
Conversion services
Finder factory
an appropriate NodeTypeStrategy
To simplify the configuration we provide an XML namespace datagraph that allows configuring any
Spring Data Graph project with a single line of XML. There are three possible parameters. You can use either storeDirectory
or a reference to a graphDatabaseService. For cross-store configuration just refer
to an entityManagerFactory.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:datagraph="http://www.springframework.org/schema/data/graph"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/data/graph
http://www.springframework.org/schema/data/graph/datagraph-1.0.xsd
">
<context:annotation-config/>
<datagraph:config storeDirectory="target/config-test"/>
</beans>
<context:annotation-config/>
<bean id="graphDatabaseService" class="org.neo4j.kernel.EmbeddedGraphDatabase"
destroy-method="shutdown">
<constructor-arg index="0" value="target/config-test" />
</bean>
<datagraph:config graphDatabaseService="graphDatabaseService"/>
<context:annotation-config/>
<datagraph:config storeDirectory="target/config-test"
entityManagerFactory="entityManagerFactory"/>
<bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
id="entityManagerFactory">
<property name="dataSource" ref="dataSource"/>
<property name="persistenceXmlLocation" value="classpath:META-INF/persistence.xml"/>
</bean>
You can also configure Spring Data Graph using Java based bean metadata.
For those not familiar with configuring the Spring container using Java based bean metadata instead of XML based metadata, see the high-level introduction and the detailed documentation in the Spring reference documentation.
To help configure Spring Data Graph using Java based bean metadata, the class Neo4jConfiguration is registered with the context, either explicitly in the XML config or via classpath scanning for classes that have the @Configuration annotation. The only thing that must be provided in addition is the GraphDatabaseService configured with a datastore directory. The example below shows using XML to register the Neo4jConfiguration @Configuration class as well as Spring's ConfigurationClassPostProcessor that transforms the @Configuration class to bean definitions.
<beans>
...
<tx:annotation-driven mode="aspectj" transaction-manager="transactionManager"/>
<bean class="org.springframework.data.graph.neo4j.config.Neo4jConfiguration"/>
<bean class="org.springframework.context.annotation.ConfigurationClassPostProcessor"/>
<bean id="graphDatabaseService" class="org.neo4j.kernel.EmbeddedGraphDatabase"
destroy-method="shutdown" scope="singleton">
<constructor-arg index="0" value="target/config-test"/>
</bean>
...
</beans>
The Spring Data Graph project supports cross-store persistence, which allows parts of the data model to be stored in a traditional JPA datastore (RDBMS) and other parts of the data model (even partial entities, that is some properties or relationships) in a graph store.
This allows existing JPA-based applications to embrace NoSQL data stores to evolve certain parts of their model. Possible use cases are adding social network or geospatial information to existing applications.
Partial graph persistence is achieved by restricting the Spring Data Graph aspects to explicitly annotated parts of the entity. Those fields have to be made transient so that JPA ignores them and won't try to persist those attributes.
A backing node in the graph store is created when the entity has been assigned a JPA id. Only then will the connection between the two stores be kept. Until the entity has been persisted, its state is just kept inside the POJO and flushed to the backing graph store afterwards.
The connection between the two entities is kept via a FOREIGN_ID field in the node that contains the JPA id (currently only single value ids are supported). The entity class can be resolved via the NodeTypeStrategy that preserves the Java type hierarchy within the graph. With the id and class, you can then retrieve the appropriate JPA entity for a given node.
The other direction is handled by indexing the Node with the FOREIGN_ID index which contains a concatenation of the fully qualified class name of the JPA entity and the id. So it is possible on instantiation of a JPA id via the entity manager (or some other means like creating the POJO and setting its id manually) to find the matching node using the index facilities and reconnect them.
Using those mechanisms and the Spring Data Graph aspects a single POJO can contain fields that are handled by JPA and other fields (which might be relationships as well) that are handled by Spring Data Graph.
When annotating an entity with partial = true, Spring Data Graph assumes that this is a cross-store entity, and is then only responsible for the fields annotated with Spring Data Graph annotations. JPA should not take care of these fields (they should be annotated with @Transient). In this mode of operation Spring Data Graph also handles the cross-store connection via the content of the JPA id field.
For common fields containing primitive or convertible values that wouldn't have to be annotated in exclusive Spring Data Graph operations this explicit declaration is necessary to be sure that they are intended to be stored in the graph. These fields should then be made transient so that JPA doesn't try to take care of them as well.
The following example is taken from the Spring Data Graph examples, it is contained in the myrestaurant-social project.
@Entity
@Table(name = "user_account")
@NodeEntity(partial = true)
public class UserAccount {
private String userName;
private String firstName;
private String lastName;
@GraphProperty
@Transient
String nickname;
@RelatedTo(type = "friends", elementClass = UserAccount.class)
@Transient
Set<UserAccount> friends;
@RelatedToVia(type = "recommends", elementClass = Recommendation.class)
@Transient
Iterable<Recommendation> recommendations;
@Temporal(TemporalType.TIMESTAMP)
@DateTimeFormat(style = "S-")
private Date birthDate;
@ManyToMany(cascade = CascadeType.ALL)
private Set<Restaurant> favorites;
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "id")
private Long id;
@Transactional
public void knows(UserAccount friend) {
relateTo(friend, DynamicRelationshipType.withName("friends"));
}
@Transactional
public Recommendation rate(Restaurant restaurant, int stars, String comment) {
Recommendation recommendation = (Recommendation) relateTo(restaurant,
Recommendation.class, "recommends");
recommendation.rate(stars, comment);
return recommendation;
}
public Iterable<Recommendation> getRecommendations() {
return recommendations;
}
}
Configuring cross-store persistence is done similarly to the default Spring Data Graph operations. As soon as you refer
to an entityManagerFactory in the xml-namespace it is set up for cross-store persistence.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:datagraph="http://www.springframework.org/schema/data/graph"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/data/graph
http://www.springframework.org/schema/data/graph/datagraph-1.0.xsd
">
<context:annotation-config/>
<datagraph:config storeDirectory="target/config-test"
entityManagerFactory="entityManagerFactory"/>
<bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
id="entityManagerFactory">
<property name="dataSource" ref="dataSource"/>
<property name="persistenceXmlLocation" value="classpath:META-INF/persistence.xml"/>
</bean>
</beans>
Spring Data Graph comes with a number of samples. The source code of the samples is found on GitHub. The different sample projects are introduced below.
The Hello Worlds sample application is a simple console application with unit tests. It creates some Worlds (entities / nodes) and Rocket Routes (relationships) in a Galaxy (the graph) and then reads them back and prints them out.
The unit tests demonstrate some other features of Spring Data Graph. The sample comes with a minimal configuration for Maven and Spring to get up and running quickly.
Executing the application creates the following graph in the Graph Database:

A web application that imports datasets from the Internet Movie Database (IMDB) into the graph database. It allows listings of movies with their actors and actors with their roles in different movies. It also uses graph traversal operations to calculate the Kevin Bacon number (distance to an actor that has acted with Kevin Bacon). This sample application shows the basic usage of Spring Data Graph in a more complex setting with several annotated entities and relationships as well as usage of indices and graph traversal.
See the readme file for instructions on how to compile and run the application.
An excerpt of the data stored in the Graph Database after executing the application:
