acmeattic-devel team mailing list archive: Message #00112
Re: [Blueprint client-side-architecture] Client side architecture
On Mon, Aug 9, 2010 at 1:11 PM, krishnan
<krishnan.parthasarathi@xxxxxxxxx> wrote:
> On Monday 09 August 2010 09:42 PM, Karthik Swaminathan Nagaraj wrote:
>
> I want to step back for a moment to discuss the need for a "separate"
> daemon.
> I agree that the daemon has well-contained functionality and it's a good
> idea to keep it separate from the UI. However, what is the need to have it
> as a different process? Both (I assume) are going to be written in Python
> and are part of the same AcmeAttic binary.
> In the command-line context, it makes some sense to have a separate
> process, as the UI console is usually short-lived and driven by user
> action. However, in a GUI app, these can be built into one: the daemon can
> run as a separate thread(?) and the app can just be pushed to the
> background (similar to common apps such as BitTorrent or Amarok).
>
> If I understand correctly, you agree to the daemon and the client CLI being
> two different processes running on the user's local machine. The GUI app
> can 'define' the look and feel of the interface and use the CLI to get
> things done. The daemon can continue to be a different process. I don't see
> why we should change the architecture with the introduction of a GUI app.
>
>
>
> - In order to address concerns about IPC, I always advocate using the
> network. The daemon could listen on some port and we can make a local
> network connection. Look at this discussion on Stack Overflow:
> http://stackoverflow.com/questions/656933/communicating-with-a-running-python-daemon
> Pyro (http://www.xs4all.nl/~irmen/pyro3/) is a possible solution for
> RMI-style communication. Btw, Remote Method Invocation (RMI) handles all
> the "hard/unclean" network communication and exports a simpler object-based
> interface for OOP fans. It's slower than raw sockets, but performance is
> not an issue for small messages. (Pyro is platform independent, and so are
> bare sockets.) Deluge is a popular BitTorrent client which is daemon-based;
> its clients talk to the daemon over the network.
>
> Pyro looks like a good option. We should then abstract all the commands
> that the CLI offers as methods of a remote service object, 'served' by the
> daemon. This is very similar to DBus.
>
>
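To make the remote-service-object idea concrete, here is a minimal sketch of the pattern using the stdlib xmlrpc modules as a stand-in for Pyro (the AcmeAtticService class and its method are hypothetical; Pyro would give the same call-a-method-on-a-proxy feel):

```python
# Sketch: the daemon serves CLI commands as methods of a remote service
# object, and the client talks to it through a proxy. xmlrpc is a stdlib
# stand-in for Pyro; the service class and method names are hypothetical.
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

class AcmeAtticService:
    """Hypothetical service object 'served' by the daemon."""
    def force_sync(self, filename):
        # A real implementation would push/pull changes here.
        return "synced %s" % filename

# Daemon side: listen on a local port (port 0 = pick any free port).
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_instance(AcmeAtticService())
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side (CLI or GUI): method calls on the proxy look like plain
# Python calls, which is the abstraction Pyro/RMI provides.
proxy = ServerProxy("http://127.0.0.1:%d" % port)
print(proxy.force_sync("notes.txt"))  # -> synced notes.txt
```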
>
> - IMHO, writing protocols that can perform simple atomic operations is
> much simpler than trying to think of all possible interleavings of
> messages and the associated race conditions. I've done quite a bit of
> distributed-systems debugging, and the problem is always harder when more
> than two parties talk to each other - message interleaving is primarily why
> distributed systems are hard!
>
> I don't understand what you mean by interleaved messages. The common
> library module which implements the communication between the AcmeAttic
> server software and the client machine (a composite of the daemon and the
> client app) should ensure that only one thread/process executes the methods
> it implements. This can be done via 'file locking' as described by Aditya
> in the blueprint. The client app calls a method defined in the module;
> there is no message being passed around. To give an example of how we can
> implement the lock-based solution:
>
> @get_lock  # decorator implementing some form of mutual exclusion
> def push_changes_to_server(args):
>     communicate_with_server(args)
>     # the decorator releases the lock when the call returns
> ...
>
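A runnable version of that decorator sketch might look like this (the names are mine; note that threading.Lock only serializes threads inside one process, which is exactly the limitation at issue between separate daemon and CLI processes):

```python
# Runnable sketch of the @getLock idea: a decorator that serializes calls.
# Caveat (relevant to this thread): threading.Lock only excludes threads
# within a single process; the daemon and the CLI, being separate
# processes, would need a cross-process mechanism instead.
import functools
import threading

_lock = threading.Lock()

def get_lock(func):
    """Decorator implementing mutual exclusion around a method."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with _lock:  # acquired on entry, released when the call returns
            return func(*args, **kwargs)
    return wrapper

@get_lock
def push_changes_to_server(args):
    return "pushed %r" % (args,)

print(push_changes_to_server(["a.txt"]))  # -> pushed ['a.txt']
```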
Essentially, a process (daemon or UI) needs to request a lock from this
"module" and then go ahead and communicate with the server. But implicitly,
since they are separate processes, the lock module must be available to both
of them and hence would have to be a separate process of its own. (I am not
able to think of another way to implement such a lock.) Bharath's document
on handling the race condition suggests the use of a data structure, but how
can that data structure be accessible to both processes? I would not depend
on file locks, as their behaviour varies between platforms, and projects
usually try to minimize their use.
Also, the UI needs to be in constant communication with the daemon to obtain
the current state of the Attic. This would necessitate an API from the
daemon anyway.
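For what it's worth, one portable way to get a cross-process lock without file locks or a dedicated lock process is to let the lock holder exclusively bind a fixed localhost port; the OS releases it automatically if the holder dies. A sketch (the port number is arbitrary/hypothetical):

```python
# Sketch: a cross-process mutex via an exclusively bound localhost port.
# The first process to bind the port holds the lock; the OS frees it
# automatically when that process exits or closes the socket.
import socket

LOCK_PORT = 48372  # hypothetical well-known port reserved for the lock

def try_acquire_lock():
    """Return the bound socket if the lock was acquired, else None."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", LOCK_PORT))
        return s  # keep it open for as long as the lock is held
    except OSError:
        s.close()
        return None  # some other process on this machine holds the lock

def release_lock(s):
    s.close()

holder = try_acquire_lock()      # first acquire succeeds
contender = try_acquire_lock()   # second acquire fails while held
print(holder is not None, contender is None)  # -> True True
```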
>
> All the methods that need to be protected against races will have a
> similar structure. I don't see why this should be difficult.
>
>
> - Related to the previous idea, the protocol design with just a single
> client talking to the server is *easier*. Also, only a single
> connection is established per client to the server, keeping connections
> easier to manage.
> - From my impression and the initial ideas in my mind, it looks like the
> UI app merely communicates control information to the daemon. Trivially,
> it can just send the commands to the daemon and quit on an ack (a dumb UI
> implementation). Even otherwise, file transfers are not required, as both
> run on the same machine. Since the daemon would anyway have methods/modules
> to handle each user operation, this step merely exposes that interface to
> the UI app.
>
> In case we are going to use an RMI-based IPC solution, we should not need
> to worry about the protocol and message format.
>
>
> - I am planning to write an interface over Twisted Python so that a
> message and its related handler can be registered with this interface. The
> message content is simply the arguments to this handler. This would mean
> that it's pretty easy to encode control messages, and the interface would
> handle all the ugly network communication, failure handling,
> (de)serialization, etc.
>
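The register-a-message-with-its-handler interface described above could be sketched, independently of Twisted, as a simple dispatch table (all names here are hypothetical; a real version would layer the network transport, failure handling, and (de)serialization from Twisted underneath):

```python
# Sketch of the message/handler registration interface described above.
# All names are hypothetical; a real version would sit on top of Twisted
# and handle the network transport, failures, and (de)serialization.
HANDLERS = {}

def handles(message_name):
    """Decorator: register a handler for a named control message."""
    def wrap(func):
        HANDLERS[message_name] = func
        return func
    return wrap

@handles("ForceSync")
def force_sync(filename):
    return "forcing sync of %s" % filename

def dispatch(message_name, *args):
    """The message content is simply the arguments to the handler."""
    return HANDLERS[message_name](*args)

print(dispatch("ForceSync", "notes.txt"))  # -> forcing sync of notes.txt
```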
> Are you planning to use Twisted instead of Pyro? IMO, Pyro abstracts our
> IPC problem better.
> To summarise, I am fine with going ahead with an IPC-based solution for
> 'communication' between the daemon and the client app processes, as long
> as it is well abstracted and portable. I am still not convinced by your
> reasons for locking being too hard to handle, esp. for our problem.
>
What do others think? If required, we could have a discussion on IPC vs.
locking.
>
> cheers,
> Krishnan
>
>
>
> Eg:
> ClientUI: send(ForceSync(filename))  # implicitly sent to the daemon;
>           # ForceSync is a message with filename as its argument
> Daemon:   def ForceSync(filename):
>               ...
> (More details in a separate blueprint)
>
>
> On Wed, Aug 4, 2010 at 3:13 PM, krishnan_p <
> krishnan.parthasarathi@xxxxxxxxx> wrote:
>
>> You are now subscribed to the blueprint client-side-architecture -
>> Client side architecture.
>>
>> --
>> https://blueprints.launchpad.net/acmeattic/+spec/client-side-architecture
>>
>> _______________________________________________
>> Mailing list: https://launchpad.net/~acmeattic-devel
>> Post to     : acmeattic-devel@xxxxxxxxxxxxxxxxxxx
>> Unsubscribe : https://launchpad.net/~acmeattic-devel
>> More help   : https://help.launchpad.net/ListHelp
>>
>
>
>
> --
> Karthik
>
>
>
--
Karthik