Relay.js - Relationship between GraphQL and database when using connection and pagination?
It's easy to set up pagination with Relay, but there's a small detail that's unclear to me.
Both of the relevant parts are marked with comments in the code; the rest of the code is there for additional context.
```js
const PostType = new GraphQLObjectType({
  name: 'Post',
  fields: () => ({
    id: globalIdField('Post'),
    title: { type: GraphQLString },
  }),
  interfaces: [nodeInterface],
})

const UserType = new GraphQLObjectType({
  name: 'User',
  fields: () => ({
    id: globalIdField('User'),
    email: { type: GraphQLString },
    posts: {
      type: PostConnection,
      args: connectionArgs,
      resolve: async (user, args) => {
        // getUserPosts() in the next code block -> gets the data from the DB
        // pass the args (e.g. "first", "after", etc.) and the user id (to get that user's posts)
        const posts = await getUserPosts(args, user._id)
        return connectionFromArray(posts, args)
      }
    },
  }),
  interfaces: [nodeInterface],
})

const {connectionType: PostConnection} = connectionDefinitions({name: 'Post', nodeType: PostType})
```
```js
exports.getUserPosts = async (args, userId) => {
  try {
    // using MongoDB and Mongoose, but the question is relevant for every DB
    // .limit() -> how many posts to return
    const posts = await Post.find({author: userId}).limit(args.first).exec()
    return posts
  } catch (err) {
    return err
  }
}
```
Cause of confusion:
- If I pass the `first` argument and use it in the DB query to limit the returned results, `hasNextPage` is always `false`. This is efficient but it breaks `hasNextPage` (and `hasPreviousPage` if I use `last`).
- If I don't pass the `first` argument and don't use it in the DB query to limit the returned results, `hasNextPage` works as expected, but all the queried items are returned (which could be thousands). Even if the database is on the same machine (which isn't the case for bigger apps), this seems very, very inefficient and awful. Please prove me wrong!
- As far as I know, GraphQL doesn't have server-side caching, so there would be no point in returning all the results anyway (and even if it did, users don't browse 100% of the content).
What's the logic here?
One solution that comes to mind is to add +1 to the `first` value in `getUserPosts`, retrieve one excess item, and `hasNextPage` will work. It feels like a hack, though, and there's an excess item returned on every request - this could add up if there are many connections and requests.
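Roughly what I have in mind - just a sketch, ignoring the `after` cursor:

```js
// "+1" workaround: fetch one extra row so connectionFromArray can tell
// that there is something beyond the requested page.
exports.getUserPosts = async (args, userId) => {
  try {
    const query = Post.find({ author: userId })
    if (typeof args.first === 'number') {
      query.limit(args.first + 1) // one excess item, only used to detect a next page
    }
    return await query.exec()
  } catch (err) {
    return err
  }
}

// In the resolver, connectionFromArray(posts, args) slices the array back down
// to `first` items and now sets hasNextPage correctly, because the array it
// received is one item longer than the requested page.
```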
Are we expected to hack it like that? Or are we expected to return all the results?
Or did I misunderstand the whole relationship between the database and GraphQL / Relay?
What if I used FB's DataLoader and Redis? Would that change the logic?
Cause of confusion
The utility function `connectionFromArray` of the graphql-relay-js library is not a solution for all kinds of pagination needs. You need to adapt your approach based on your preferred pagination model.

The `connectionFromArray` function derives the values of `hasNextPage` and `hasPreviousPage` from the given array alone. So what you observed and mentioned in "Cause of confusion" is the expected behavior.
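A minimal illustration of that, with plain arrays standing in for database results:

```js
const { connectionFromArray } = require('graphql-relay')

// connectionFromArray only sees the array it is given, so if the DB already
// limited the results to `first` items it cannot know that more exist.
const limitedByDb = ['post1', 'post2']                    // DB applied .limit(2)
const fullResult = ['post1', 'post2', 'post3', 'post4']   // DB returned everything

console.log(connectionFromArray(limitedByDb, { first: 2 }).pageInfo.hasNextPage) // false
console.log(connectionFromArray(fullResult, { first: 2 }).pageInfo.hasNextPage)  // true
```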
As for the confusion about whether to load all the data or not, it depends on the problem at hand. Loading all the items may make sense in several situations, such as:
- The number of items is small and you can afford the memory required to store those items.
- The items are requested frequently and you need to cache them for faster access.
Two common pagination models are numbered pages and infinite scrolling. The GraphQL connection specification is not opinionated about the pagination model and allows both of them.
For numbered pages, you can use an additional field `totalPost` in the GraphQL type, which can be used to display links to the numbered pages in the UI. On the back-end, you can use a feature like `skip` to fetch only the needed items. The field `totalPost` together with the current page number eliminates the dependency on `hasNextPage` or `hasPreviousPage`.
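A rough sketch of what that could look like with the Mongoose model from the question (the `PostPage` type and the `page` / `pageSize` arguments are my own naming, not part of the Relay spec):

```js
const { GraphQLObjectType, GraphQLInt, GraphQLList } = require('graphql')

// A hypothetical "page of posts" type for numbered pagination; totalPost plus
// the current page number is enough for the UI to render the page links.
const PostPageType = new GraphQLObjectType({
  name: 'PostPage',
  fields: () => ({
    totalPost: { type: GraphQLInt },
    posts: { type: new GraphQLList(PostType) },
  }),
})

const postPageField = {
  type: PostPageType,
  args: {
    page: { type: GraphQLInt },     // 1-based page number
    pageSize: { type: GraphQLInt },
  },
  resolve: async (user, { page = 1, pageSize = 10 }) => {
    const [totalPost, posts] = await Promise.all([
      Post.countDocuments({ author: user._id }).exec(),
      Post.find({ author: user._id })
        .skip((page - 1) * pageSize) // skip the previous pages
        .limit(pageSize)
        .exec(),
    ])
    return { totalPost, posts }
  },
}
```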
For infinite scrolling, you can use the `cursor` field, which can be used as the value of `after` in the query. On the back-end, you can use the value of `cursor` to retrieve the next items (as many as given by `first`). See an example of using a cursor in the Relay documentation on GraphQL connections. See this answer about GraphQL connections and cursors. See this and this blog post to better understand the idea of a cursor.
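A rough sketch of that idea with Mongo, using the document `_id` as the basis for an opaque cursor (the base64 encoding and the `first + 1` look-ahead are assumptions of this sketch, not something the spec prescribes):

```js
// Encode/decode an opaque cursor from the Mongo _id of a post.
const toCursor = (post) => Buffer.from(`post:${post._id}`).toString('base64')
const fromCursor = (cursor) => Buffer.from(cursor, 'base64').toString('utf8').split(':')[1]

const resolvePosts = async (user, { first = 10, after }) => {
  const filter = { author: user._id }
  if (after) {
    // everything "after" the cursor, i.e. with a larger _id
    filter._id = { $gt: fromCursor(after) }
  }

  // fetch one extra item to know whether another page exists
  const posts = await Post.find(filter).sort({ _id: 1 }).limit(first + 1).exec()
  const page = posts.slice(0, first)

  return {
    edges: page.map((post) => ({ cursor: toCursor(post), node: post })),
    pageInfo: {
      hasNextPage: posts.length > first,
      hasPreviousPage: false, // backwards paging not tracked in this sketch
      startCursor: page.length ? toCursor(page[0]) : null,
      endCursor: page.length ? toCursor(page[page.length - 1]) : null,
    },
  }
}
```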
What's the logic here?
Are we expected to hack it like that?

No, ideally we're not expected to hack it and forget about it. That would leave technical debt in the project and cause more problems in the long term. You may consider implementing your own function to return the connection object. You can get ideas on how to do that from the implementation of array-connection in graphql-relay-js.
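A small sketch of such a custom function, assuming the resolver fetches `first + 1` items from the DB (the helper name `connectionFromLimitedArray` is made up for this example):

```js
// Builds a Relay-style connection object from items that were already limited
// in the DB query to `first + 1` rows.
const connectionFromLimitedArray = (items, { first }) => {
  const page = typeof first === 'number' ? items.slice(0, first) : items
  const edges = page.map((node) => ({
    node,
    cursor: Buffer.from(String(node._id)).toString('base64'), // opaque cursor from the Mongo _id
  }))
  return {
    edges,
    pageInfo: {
      startCursor: edges.length ? edges[0].cursor : null,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null,
      hasPreviousPage: false, // backwards paging (`last` / `before`) not handled in this sketch
      hasNextPage: typeof first === 'number' && items.length > first,
    },
  }
}

// In the User.posts resolver:
//   const posts = await getUserPosts({ ...args, first: args.first + 1 }, user._id)
//   return connectionFromLimitedArray(posts, args)
```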
Are we expected to return all the results?

Again, it depends on the problem.
What if I used FB's DataLoader and Redis? Would that change the logic?

You can use Facebook's DataLoader library to cache and batch-process your queries. Redis is another option for caching the results. If you (1) load all the items using DataLoader or store all the items in Redis, and (2) the items are lightweight, you can create an array of all the items (following the KISS principle). If the items are heavy-weight, creating the array may be an expensive operation.
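A minimal sketch of the DataLoader part (the grouping of posts by author inside the batch function is this example's own choice; DataLoader itself only batches and caches by key):

```js
const DataLoader = require('dataloader')

// Batches every "posts for user X" lookup made in one tick into a single query
// and caches the result per loader instance (typically one per request).
const postsByUserLoader = new DataLoader(async (userIds) => {
  const posts = await Post.find({ author: { $in: userIds } }).exec()
  // DataLoader expects the results in the same order as the given keys
  return userIds.map((id) => posts.filter((p) => String(p.author) === String(id)))
})

// In the User.posts resolver the full (cached) array comes back, so
// connectionFromArray can paginate it and hasNextPage works as expected:
//   const posts = await postsByUserLoader.load(String(user._id))
//   return connectionFromArray(posts, args)
```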