Dataset API - Getting full dataset list


I'm currently using requests in Python to pull a list of all datasets, but I'm only receiving the maximum of 50 even though there are >200 total sets. Additionally, about 95% of the returned results were created by "DomoBIB", which isn't one of the datasets I created.


import requests

dataset_endpoint = ""
dataset_response = requests.get(dataset_endpoint, headers={'Authorization': token})


Is there a way to increase the return size so that it actually returns a list of all datasets? Editing the limit parameter gives an error that the number has to be positive and at most 50. Additionally, is it possible to filter out all the "DomoBIB" sets?


  • guitarhero23 (Contributor)

     Have you tried messing with offset or sort?


    sort - Sort the list of DataSets by property:
    name, description, …


    offset - Paginate the list of DataSets

    **Make sure to like any user's posts that helped you and accept the ones that solved your issue.**
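The offset parameter is the pagination cursor: since limit is capped at 50, you can still retrieve everything by requesting successive pages, advancing offset by the page size until a short (or empty) page comes back. A minimal sketch, assuming the standard `GET /v1/datasets` list endpoint with `limit`/`offset` query parameters and a Bearer token; the `owner` field used to drop "DomoBIB" sets is an assumption, so inspect one record to confirm the actual key:

```python
API_URL = "https://api.domo.com/v1/datasets"  # assumed list endpoint

def fetch_page(token, offset, limit=50):
    """Fetch one page of datasets (the API caps limit at 50)."""
    import requests  # imported here so the pagination logic below has no dependency on it
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        params={"limit": limit, "offset": offset},
    )
    resp.raise_for_status()
    return resp.json()

def fetch_all_datasets(get_page, page_size=50):
    """Walk pages by advancing offset until a short page signals the end.

    get_page(offset, limit) -> list of dataset dicts; passed in as a
    callable so it can be swapped for a stub when testing.
    """
    datasets, offset = [], 0
    while True:
        page = get_page(offset, page_size)
        datasets.extend(page)
        if len(page) < page_size:
            return datasets
        offset += page_size

# Usage (hypothetical owner key -- verify against your response shape):
# all_sets = fetch_all_datasets(lambda off, lim: fetch_page(token, off, lim))
# mine = [d for d in all_sets if d.get("owner", {}).get("name") != "DomoBIB"]
```

Separating the HTTP call from the pagination loop also makes the loop easy to unit test without hitting the API.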
  • user04731

    I've messed around with both parameters; only sort gives me any different output, but it's still not the full list of all datasets. There are datasets from multiple users and I want to grab all of them, except for the auto-generated ones from "DomoBIB".


    I'm not really sure what offset does, but there's no difference in the response with different offset parameters.

  • user07464



    I have started working with the DataSet API as well. In our case we are using the endpoint to list data sets, looking for data sets with a specific prefix in their name. The current endpoint is inefficient for this, as it needs to be called many times to iterate through all of the data sets (sort, limit, and offset are of little help). It would be useful to be able to filter the list of data sets by a prefix of the data set name (similar to the AWS S3 API).


    It would also be helpful if sorting worked on the "dataCurrentAt" or "updatedAt" timestamps. Currently I get an error when trying to sort by anything other than "name".
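Until the API supports a server-side prefix filter or sorting on timestamps, both can be done client-side after paginating through the full list. A small sketch, assuming each dataset dict carries `name` and an ISO-8601 `updatedAt` string (ISO timestamps sort correctly as plain strings):

```python
def filter_and_sort(datasets, prefix, sort_key="updatedAt"):
    """Client-side stand-in for the missing server-side features:
    keep only datasets whose name starts with prefix, newest first.

    Assumes sort_key holds ISO-8601 strings, which order correctly
    under plain lexicographic comparison.
    """
    matched = [d for d in datasets if d.get("name", "").startswith(prefix)]
    return sorted(matched, key=lambda d: d.get(sort_key, ""), reverse=True)

# Usage (hypothetical records):
# recent_sales = filter_and_sort(all_sets, "sales_")
```

This trades extra API calls for simple local logic, which is workable as long as the total dataset count stays in the hundreds.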




This discussion has been closed.