
syncany-team team mailing list archive

Re: Roadmap?


On Tue, Dec 3, 2013 at 10:57 AM, Fabrice Rossi <Fabrice.Rossi@xxxxxxxxxxx> wrote:

> Hi,
> On 03/12/2013 00:34, Philipp Heckel wrote:
> > *Roadmap:*
> > The roadmap Fabrice pointed out was basically what I had in mind all
> > along -- except that I think the database stuff is essential for a
> > cleanup-operation. When 'cleanup' is implemented, Syncany can be
> > considered "feature-complete" for the first iteration. Without 'cleanup'
> > and an SQL-based local database, it's not able to run very long without
> > creating a memory issue (every file/chunk/multichunk in all versions is
> > kept in memory!)
> Indeed, but what about releasing something early for alpha testing by
> volunteers? Also, the cleanup is going to be _very_ difficult to get
> right because of the lack of proper locking primitives (I'll come back to
> this in another email; it might be long).
> > *Wiki vs. Google Drive:*
> > Github has a Wiki, let's use it :-) I'll have time tomorrow (Tuesday)
> > night. If you want, you can start drafting before that.
> I'll leave that to you ;-)
> > *Gradle:*
> > 2. I still don't like that the wrapper jar is in the repo. Is this the
> > only way to do it? It's like putting an exe-file in there -- some people
> > might not trust what it's doing. Is it possible to download it when it's
> > first run?!
> Absolutely, this is touchy.

Unfortunately, the Gradle wrapper was designed around a very small jar file
that has to be checked into the repository.

Two options:
- get rid of the wrapper, as with Ant or Maven (developers will then have to
install Gradle properly themselves)
- keep wrapper.jar checked into the repository

sorry ....
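For what it's worth, the wrapper files can be regenerated from a `wrapper` task in the build script, so the checked-in jar is at least reproducible. A minimal sketch (the version number is just an example, not what the project necessarily targets):

```groovy
// build.gradle -- regenerate the wrapper files by running: gradle wrapper
task wrapper(type: Wrapper) {
    gradleVersion = '1.9'   // example version only
}
```

Running `gradle wrapper` then rewrites `gradlew`, `gradlew.bat` and `gradle/wrapper/gradle-wrapper.jar`.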

> > 3. We should not base the gradle stuff on a new repo (like in
> > "syncany-gradle"). Instead, once we're sure that syncany-gradle is good
> > and ready, we should use the normal "syncany" repo so we keep the
> > history. We should also make sure all the branches immediately follow
> > this change, because otherwise this is going to be a mess.
> Indeed.

indeed indeed.

> > *Issues with 'real-time file watching'*:
> > Honestly, I don't see this as a big deal anymore. I used to, because I
> > wanted to perfectly follow all files and record/process these changes.
> > This "RecursiveWatcher" was a nightmare back in the old Syncany. I was
> > doing too much. The current implementation is much simpler: events are
> > only registered and trigger a 'sync' (after a settlement of 3 seconds).
> > Look at the WatchOperation for details. However, I agree that this might
> > get very complex and hard to test very quickly, so I agree with taking
> > it slow here.
> Well, you need at least one thread pushing things while the
> other one records new modifications. You also have to release unneeded
> watches. You have to cope with the very limited signaling system of Java
> (compared to Linux inotify), etc. Really, this is tricky, with races
> everywhere. To have fun, watch your Eclipse workspace before starting
> Eclipse, and be amazed by the number of files this bloody beast
> modifies at startup ;-) Granted, most of the difficulty lies in fine-
> grained watching, which is not what you do. But if you want acceptable
> performance, you must do that in the long run, because if you don't,
> Syncany will never reach the 1 million files I'm looking for ;-)
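The settlement idea Philipp describes (register raw events, trigger one sync after a quiet period of 3 seconds) can be sketched roughly like this. All names here are made up for illustration; this is not Syncany's actual WatchOperation:

```java
import java.util.Timer;
import java.util.TimerTask;

// Sketch of the "settlement" debounce: every raw file event restarts a
// timer, and the sync action runs only once the file system has been
// quiet for the whole settle delay, so bursts collapse into one sync.
class SettleTimer {
    private final long settleMillis;
    private final Runnable action;
    private final Timer timer = new Timer(true); // daemon timer thread
    private TimerTask pendingTask;

    SettleTimer(long settleMillis, Runnable action) {
        this.settleMillis = settleMillis;
        this.action = action;
    }

    // Called for every raw file event; coalesces bursts into one action.
    synchronized void fileEventOccurred() {
        if (pendingTask != null) {
            pendingTask.cancel(); // restart the quiet-period clock
        }
        pendingTask = new TimerTask() {
            @Override public void run() { action.run(); }
        };
        timer.schedule(pendingTask, settleMillis);
    }
}
```

This sidesteps the per-file bookkeeping Fabrice mentions, at the cost of re-scanning on each trigger.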
> > With regard to the unreliable/unpredictable storage: This should be
> > assumed anyway -- that's why I created the "UnreliableLocalPlugin".
> > There are not many tests yet, but it's really easy to simulate any
> > storage failure with this plugin.
> Does it simulate _eventual consistency_? Because S3 is really
> unreliable: it can reorder your writes, which is super annoying. As far
> as I know, the only way to implement some kind of reliable locking
> on top of S3 is to use the Simple Queue Service (SQS).
> Best regards,
> Fabrice
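To test the eventual-consistency point specifically, a local test double can hold writes back until an explicit propagation step, so reads right after a write see stale data, just as on S3. A minimal sketch; the class and method names are invented here and are not part of the UnreliableLocalPlugin:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Test double mimicking S3-style eventual consistency: put() buffers the
// write, get() only sees data that has already been propagated, and
// propagate() simulates replication catching up.
class EventuallyConsistentStore {
    private final Map<String, String> visible = new HashMap<>();
    private final Queue<String[]> pending = new ArrayDeque<>();

    void put(String key, String value) {
        pending.add(new String[] { key, value }); // not yet readable
    }

    String get(String key) {
        return visible.get(key); // may return stale (or no) data
    }

    void propagate() { // make buffered writes visible, in order
        while (!pending.isEmpty()) {
            String[] e = pending.poll();
            visible.put(e[0], e[1]);
        }
    }
}
```

A test that acquires a "lock" file and reads it back before calling propagate() will observe the stale view, which is exactly the failure mode that breaks naive locking on S3.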

Vincent Wiencek
