Airtable makes it very easy to get started.
It's very easy to integrate: populate data, fetch data.
Both the app UI and the data API are excellent, powerful, and very easy to use.

But there are some weird, janky bits…


Pagination

As soon as a table has more than 100 records, you'll need to make multiple requests, one per page.
For example, if your table has 501 records, the results are spread across 6 pages, so 6 HTTP requests are needed to capture all of that table's records. Your first request might look something like this, with your base's app id appxxx and the table name:
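A sketch of how that first request could be built with fetch; the base id appxxx, the table name "Posts", and the API key env var are placeholders:

```javascript
// Build the list-records URL for the Airtable REST API.
// "appxxx" and "Posts" are placeholder ids, not real values.
function listUrl(baseId, tableName, offset) {
  const url = new URL(`https://api.airtable.com/v0/${baseId}/${encodeURIComponent(tableName)}`);
  if (offset) url.searchParams.set('offset', offset); // omitted on the first request
  return url.toString();
}

// First request: no offset yet.
// fetch(listUrl('appxxx', 'Posts'), {
//   headers: { Authorization: `Bearer ${process.env.AIRTABLE_API_KEY}` },
// }).then(r => r.json()).then(({ records, offset }) => { /* ... */ });
```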
You will get the first page of records (under the "records" key), but if there is another page, you'll also get an "offset" that would look something like this:
"offset": "itrSJ1l4jlPyxEG6c/recxxx"
To fetch the next page (in other words, the next 100 records), you'll include that offset as a query parameter. For example:
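Put together, fetching everything means looping until the response no longer contains an offset. A sketch with a pluggable page fetcher (so it works against the real API or a stub); the ids are placeholders:

```javascript
// Follow the "offset" cursor until the API stops returning one.
// fetchPage(offset) is assumed to resolve with { records: [...], offset? }.
async function fetchAllPages(fetchPage) {
  const records = [];
  let offset;
  do {
    const page = await fetchPage(offset);
    records.push(...page.records);
    offset = page.offset; // undefined on the last page
  } while (offset);
  return records;
}

// Real usage (placeholder base id "appxxx" and table "Posts"):
// const fetchPage = (offset) => fetch(
//   `https://api.airtable.com/v0/appxxx/Posts${offset ? `?offset=${encodeURIComponent(offset)}` : ''}`,
//   { headers: { Authorization: `Bearer ${process.env.AIRTABLE_API_KEY}` } }
// ).then(r => r.json());
// fetchAllPages(fetchPage).then(all => console.log(all.length));
```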
To get all records from a table, the JS SDK makes things much simpler: it still issues multiple network requests under the hood, but the code is far shorter.
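A sketch of the fetch-everything pattern with the SDK's eachPage helper, which follows the offsets for you. The table name "Posts" is a placeholder, and the helper takes a base instance so it can be exercised without a live connection:

```javascript
// Fetch every record from a table via the Airtable JS SDK's eachPage,
// which transparently issues one request per 100-record page.
function fetchAllRecords(base, tableName) {
  return new Promise((resolve, reject) => {
    const all = [];
    base(tableName)
      .select({ pageSize: 100 })
      .eachPage(
        (records, fetchNextPage) => {
          all.push(...records); // one page at a time
          fetchNextPage();      // triggers the next HTTP request, if any
        },
        (err) => (err ? reject(err) : resolve(all))
      );
  });
}

// Real usage (requires the `airtable` npm package; "appxxx" is a placeholder):
// const Airtable = require('airtable');
// const base = new Airtable({ apiKey: process.env.AIRTABLE_API_KEY }).base('appxxx');
// fetchAllRecords(base, 'Posts').then(rs => console.log(rs.length));
```

The SDK also offers select().all(), which does the same loop in a single call.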
But implementing pagination for real, like letting a user jump straight to page 2 (rows 200-299), is weird. The JS SDK actually makes this more awkward and difficult, since its helpers only walk pages sequentially.
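To see why, here is a sketch of what "show me page N" forces you into, assuming the only tool is the offset cursor (no page or skip parameter exists): you walk the chain from the start and throw away the pages you don't want. fetchPage is the same hypothetical page fetcher as above.

```javascript
// Reach an arbitrary page by walking the offset chain from the beginning,
// discarding every earlier page. fetchPage(offset) resolves with
// { records: [...], offset? }.
async function fetchNthPage(fetchPage, pageIndex) {
  let offset;
  for (let i = 0; ; i++) {
    const page = await fetchPage(offset);
    if (i === pageIndex) return page.records; // finally, the page we wanted
    offset = page.offset;
    if (!offset) return []; // table ran out before reaching pageIndex
  }
}
```

So showing page 2 costs three HTTP round trips, every time, and the cost grows linearly with the page number.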
It seems the data structure they use in the backend is a linked list of some sort, not a traditional relational database. It feels like a NoSQL-style store: a dump of JSON documents with some indexing added.

Querying by “slug”

The same applies to querying by any field other than the system-generated “id”.
As with pagination, it's possible, but weird and inefficient.
It seems they always fetch every record from the entire table, then walk the list and filter it down to the one (or several) you asked for, perhaps with some kind of linked-list traversal applying a filter or find as the data is accessed. Either way, it's inefficient for looking up a single record.
To do it, you pass a filter formula (as a URL-encoded string) in the filterByFormula query parameter.
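A sketch of a lookup by slug over the REST API; the field name "slug", the ids, and the value are placeholders:

```javascript
// Build a list-records URL that filters on a non-id field using
// filterByFormula. "appxxx", "Posts", and the slug value are placeholders.
function bySlugUrl(baseId, tableName, slug) {
  // Formula syntax references fields with {braces}; the value is quoted.
  // Naive interpolation: assumes the slug contains no quote characters.
  const formula = `{slug} = '${slug}'`;
  const url = new URL(`https://api.airtable.com/v0/${baseId}/${encodeURIComponent(tableName)}`);
  url.searchParams.set('filterByFormula', formula);
  url.searchParams.set('maxRecords', '1'); // we only want the one match
  return url.toString();
}

// With the JS SDK, the same filter goes in select():
// base('Posts')
//   .select({ filterByFormula: "{slug} = 'hello-world'", maxRecords: 1 })
//   .firstPage()
//   .then(([record]) => console.log(record && record.id));
```

Note that even with maxRecords, the filtering happens server-side over the whole table, so this doesn't make the lookup any cheaper for Airtable; it just trims what comes back over the wire.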