Querying next pages using page numbers? #5086
I can quickly create a branch for you with this feature to show how you would extend a transformer to do this (see PR #5098). There is an alternative: create a new template that includes the "from" argument in the Elasticsearch query, and add this as a new resolver to your schema. This also means you need to create a custom resource definition, for example:
```json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "An auto-generated nested stack.",
"Metadata": {},
"Parameters": {
"AppSyncApiId": {
"Type": "String",
"Description": "The id of the AppSync API associated with this project."
},
"AppSyncApiName": {
"Type": "String",
"Description": "The name of the AppSync API",
"Default": "AppSyncSimpleTransform"
},
"env": {
"Type": "String",
"Description": "The environment name. e.g. Dev, Test, or Production",
"Default": "NONE"
},
"S3DeploymentBucket": {
"Type": "String",
"Description": "The S3 bucket containing all deployment assets for the project."
},
"S3DeploymentRootKey": {
"Type": "String",
"Description": "An S3 key relative to the S3DeploymentBucket that points to the root\nof the deployment directory."
}
},
"Resources": {
"$RESOURCE_NAME_HERE": {
"Type": "AWS::AppSync::Resolver",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": "ElasticSearchDomain",
"TypeName": "Query",
"FieldName": "$FIELD_NAME_HERE",
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.$TEMPLATE_NAME_HERE.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.$TEMPLATE_NAME_HERE.res.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
}
}
}
},
"Conditions": {
"HasEnvironmentParameter": {
"Fn::Not": [
{
"Fn::Equals": [
{
"Ref": "env"
},
"NONE"
]
}
]
},
"AlwaysFalse": {
"Fn::Equals": [
"true",
"false"
]
}
}
}
```
And add the corresponding field to your schema:

```graphql
type Query {
  MY_FIELD_NAME(from: Int): MY_RESPONSE_TYPE
}
```
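For reference, here is a minimal sketch of what a `Query.$TEMPLATE_NAME_HERE.req.vtl` request mapping template with a "from" argument could look like. The index path, argument names, and defaults are assumptions for illustration, not the exact template from the PR:

```
{
    "version": "2017-02-28",
    "operation": "GET",
    ## Index path is an assumption; match it to your searchable model
    "path": "/post/doc/_search",
    "params": {
        "body": {
            ## Pass the "from" offset through to Elasticsearch, defaulting to 0
            "from": $util.defaultIfNull($ctx.args.from, 0),
            "size": $util.defaultIfNull($ctx.args.limit, 10),
            "query": { "match_all": {} }
        }
    }
}
```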
Thanks! I'll just wait for the PR to be merged and released.
Would love to see this PR merged asap. Thanks for the work @RossWilliams
The fix for this has been released in CLI v4.32.0.
This issue has been automatically locked since there hasn't been any recent activity after it was closed. Please open a new issue for related bugs. Looking for a help forum? We recommend joining the Amplify Community Discord server.
Is your feature request related to a problem? Please describe.
Currently, the search query returns a total (the total number of matching rows), but pagination uses nextToken, which requires knowing the "id"/"value" of the starting position of the next page. This only allows paginating in a straight-line flow. What if there are 100 pages and the user wants to jump to page 70? This is quite a common requirement in most applications, and leaving this functionality out puts us devs in the awkward position of defending why we use Amplify if it can't do simple pagination like that (I mean no offense).
Describe the solution you'd like
Allow us to provide a page rather than a nextToken, something like the query sketched below.
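(The following is an illustrative sketch; `searchPosts` and the response shape are assumptions based on the surrounding description, not the exact snippet from the original request.)

```graphql
query {
  searchPosts(limit: 10, page: 1) {
    items {
      id
    }
    total
  }
}
```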
Here the `offset` (the starting index of the page) is derived from `limit * page - limit`, so, in the above example, if the page is 1, the `offset` would be `10 * 1 - 10 = 0`, which means start from index 0 and grab 10 items.

Describe alternatives you've considered
I haven't found one.
Additional context
N/A