I would love to see some Hubski mobile apps start development, but that won't happen until the API rolls out. So what do we know about it thus far? Any ETA?
We don't have a good ETA. We all have day jobs, unfortunately. But I can tell you where I am. Right now, Hubski is a monolithic Arc app storing data as flat files. My current list looks something like this:

1. Write an app reading SQL and serving JSON over REST, as an internal API. This will serve things like /post/id rather than /user/posts, which is not only unwieldy as a public API but also serves private data like mail.

2. Write a script to convert the flat files to SQL.

3. Convert the Arc code to read from the internal API.

4. Write an external API, with endpoints like /user/posts, /posts/global, /tag/, etc. Hard, because it needs authentication to serve private user data. Easy, because we'll already have an internal API app to modify, and the data will be in SQL.

The internal API is necessary before an external API: it will fix scalability issues we're having, it enables and simplifies a lot of other features we want to add, and most of the work transfers to the external API anyway.

I'm about 60% done with 1 and 2, bearing in mind the 80–20 rule. But again, I have limited spare time. I'm also on vacation next week, so I'm losing the next two weekends. I'm conservatively hoping to have the internal API deployed for most data in a couple of months, and to have an external API by the new year. I know that's a long time; believe me, it frustrates me more than anyone. If this were my full-time job, I could have it done in two weeks, tops.

Incidentally, if anyone is super-anxious to see the new API and conversion code I'm working on, I could be persuaded to put it on GitHub sooner rather than later. It's all Racket Lisp. Going through the Arc code to release it is also on my list, but writing new Racket is more fun than poring through old Arc looking for security concerns.

You don't need an API. You can always scrape.
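To make step 1 of the list above concrete, here is a minimal sketch of an internal API serving JSON over REST, keyed by raw ids (/post/id). The real service is Racket reading SQL; this stand-in uses Python's stdlib http.server, and the route, fields, and data are invented for illustration.

```python
# Hedged sketch of an internal API: /post/<id> endpoints serving JSON.
# The dict below stands in for the SQL database; all names are invented.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

POSTS = {"42": {"id": "42", "title": "hello", "private_mail": None}}

class InternalAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "post" and parts[1] in POSTS:
            body = json.dumps(POSTS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence request logging for the demo
        pass

if __name__ == "__main__":
    # port 0 lets the OS pick a free port
    server = HTTPServer(("127.0.0.1", 0), InternalAPI)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/post/42") as resp:
        print(json.loads(resp.read())["title"])  # hello
    server.shutdown()
```

Note how an endpoint like this is fine internally but unsuitable as a public API: it happily returns private fields, which is exactly why the external API in step 4 needs authentication.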
Yeah, I should say that I wrote a function to scrape user comments a while ago. Using lxml's xpath function, it's fairly easy to scrape data based on its surrounding HTML content. If any Hubski users are looking to build their own clients, this isn't a bad approach to get started. Most data is public anyway, and there's not a whole lot of complexity in terms of the different data sources available.
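A rough sketch of that approach: the comment above used lxml's xpath(); this dependency-free stand-in uses the stdlib's ElementTree, whose limited XPath subset is enough for attribute-based selection. The HTML below is invented, so real Hubski markup will differ (and real pages may need lxml's more forgiving HTML parser).

```python
# Select comment text by its surrounding HTML structure.
# SAMPLE_PAGE is a made-up stand-in for a real profile page.
import xml.etree.ElementTree as ET

SAMPLE_PAGE = """
<html><body>
  <div class="comment"><div class="commentbody">first comment</div></div>
  <div class="comment"><div class="commentbody">second comment</div></div>
</body></html>
"""

def scrape_comments(page):
    root = ET.fromstring(page)
    # grab the text of every comment body, identified by its class attribute
    return ["".join(node.itertext()).strip()
            for node in root.findall('.//div[@class="commentbody"]')]

print(scrape_comments(SAMPLE_PAGE))  # ['first comment', 'second comment']
```

With lxml the same query would be `tree.xpath('//div[@class="commentbody"]')`, which also tolerates the non-XML HTML found in the wild.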
I would love an API for Hubski; I bet there's some really interesting info on here about who is following whom and what friend groups have cropped up. I'd love to see how user activity changes over time, and ideally how friend groups have changed over time, but I imagine that data wouldn't have been recorded, so it'll have to be a script to take snapshots and put it all together.
"I imagine that data wouldn't have been recorded": It isn't, and we have no plans to.

"so it'll have to be a script to take snapshots and put it all together": Yep. Though I'm not sure it's as interesting as you think. I suspect most people gain followers and following, and lose very little. You could make a record of who got pissed off at whom and when, for whatever that's worth. But I doubt there's much more than that.
Not sure if this would be a bug or a feature request, but could you all add exact timestamps as mouse-overs to the "X minutes / hours ago" labels? Then the information is available if someone wants to parse or filter the data on their end, and it adds a slight bit of information for users who care about that sort of thing. (Posting old archives, though, would probably be annoying to users who edit or delete their comments.)
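The mouse-over request above is essentially HTML's title attribute: keep the fuzzy label as the visible text and put the exact timestamp where hover (and scrapers) can see it. A hypothetical sketch of what a server might emit; the markup and helper names are invented:

```python
# Pair a fuzzy "X hours ago" label with an exact ISO-8601 timestamp
# in the title attribute, so the precise time is parseable on the client.
from datetime import datetime, timezone

def ago_label(posted, now):
    hours = int((now - posted).total_seconds() // 3600)
    return f"{hours} hours ago" if hours else "just now"

def timestamp_span(posted, now):
    # the exact time rides along in title=..., shown on mouse-over
    return f'<span title="{posted.isoformat()}">{ago_label(posted, now)}</span>'

now = datetime(2014, 9, 12, 21, 0, tzinfo=timezone.utc)
posted = datetime(2014, 9, 12, 18, 0, tzinfo=timezone.utc)
print(timestamp_span(posted, now))
# <span title="2014-09-12T18:00:00+00:00">3 hours ago</span>
```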