
p2psp team mailing list archive

Re: CIS of rules (GSoC)

 

Hello!

the graphics make sense. Good work!
>

Thanks! =)

---

One remarkable result here is the fact that having a second trusted peer
> does not have a real impact on malicious peer expulsion. I realize now that
> this is due to how the Str technique is designed. Not for GSoC, but maybe
> we should think of an evolution of the protocol that reflects the
> aggregation of trusted peers. Cristóbal, that might be interesting if it has
> not been researched yet.
>

But adding a second trusted peer does accelerate the expulsion of attackers;
you can see it in the last graphs (-m 50 -t 1 and -m 50 -t 2): in the case of
one trusted peer, not all attackers were expelled from the team in the given
time, while in the second case all attackers were expelled by round ~100.

In light of this, can I ask you to run another couple of experiments with
> more trusted peers? Say 4 and 8, and see what happens... I just want to
> confirm that adding those peers does not accelerate malicious peer
> expulsion.
>

Ok, I will run them asap =)
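
To make the -t sweep reproducible, the runs could be scripted. A minimal sketch, assuming the test_strpe.py CLI shown later in this thread (-n regular peers, -t trusted, -m malicious, -w wait seconds) and keeping the total team size constant:

```python
# Sketch: sweep the number of trusted peers while keeping the total team
# size constant. Assumes the test_strpe.py CLI from this thread
# (-n regular, -t trusted, -m malicious, -w wait seconds).
import subprocess

def build_cmd(team_size, trusted, malicious, wait):
    """Build the test_strpe.py command line for one experiment."""
    regular = team_size - trusted - malicious  # n excludes trusted/malicious
    return ["./test_strpe.py", "-n", str(regular), "-t", str(trusted),
            "-m", str(malicious), "-w", str(wait)]

def run_sweep(trusted_counts=(1, 2, 4, 8)):
    """One run per trusted-peer count (team of 100, half malicious)."""
    for t in trusted_counts:
        subprocess.run(build_cmd(100, t, 50, 200), check=True)
```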

Also, could you run an experiment in which malicious peers kept entering
> the team? I mean, in a real environment an adversary would not give up so
> easily since when computational resources are cheap (by cheap I mean that
> you can launch a lot from a single machine, or you can create a botnet by
> infecting other people's machines and use that botnet to attack the team).
> In that situation the best possible result would be to keep expulsing
> enough adversaries per iteration so that the quality of the stream is
> enough to be played back.
>

Ok.
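
For the "attackers keep coming" scenario, the injection could be a small loop on top of the existing script. A sketch, where spawn_malicious_peer is a hypothetical hook, not part of the current test_strpe.py:

```python
# Sketch of a sustained attack: inject new malicious peers at a fixed
# rate instead of a single initial burst. spawn_malicious_peer is a
# hypothetical callback into the experiment script.
import threading
import time

def sustained_attack(spawn_malicious_peer, peers_per_round, round_seconds, stop):
    """Spawn `peers_per_round` attackers each round until `stop` is set."""
    while not stop.is_set():
        for _ in range(peers_per_round):
            spawn_malicious_peer()
        time.sleep(round_seconds)
```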

---

why are the n values different?


In the first script n is the number of regular peers in the team, while in
the second script n is the team size. You're right, it's not obvious, so I
will fix it: in both cases n will be the team size of the experiment.

What is the waiting time? Was it the same in all the experiments?
>

At first, the script initializes the peers (a bunch of regular and trusted
peers), and then initializes a bunch of malicious peers. Then the script
waits for *w* seconds. Yes, param w was the same in all experiments. Also,
param w is not related to the number of rounds.

This is just a curiosity, for instance in the first case (m=10), 10 rounds
> are needed to have 10 malicious and 100 well-intended peers.
> Are they included randomly? The concern is whether the initialization will
> affect the results.
>

No, the team size is the sum of (trusted + regular + malicious), so there
are 90 well-intended and 10 malicious peers. Also, it took only 5 rounds to
connect the 10 malicious peers.
All the peers are included one by one. Trusted peers are included at random
positions in this sequence. So the first script creates a team of regular
and trusted peers, and then adds a bunch of malicious peers.

When do malicious peers start to attack?


Malicious peers attack as soon as they connect.

What type of attack is measured?
> How is buffer_correctness calculated? (an average? best and worst case
> would also be good to know).
>

It is a simple type of attack: malicious peers send zero chunks, so my
malicious peers send chunks that look like "0\x00...". Buffer correctness
for one peer is BC = correct_chunks / (correct_chunks + bad_chunks). Buffer
correctness in the graphs is the average over the whole team.
Ok, I will log min and max values of buffer_correctness in the next
experiments.
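
The metric described above could be computed like this (a sketch; the function names are mine, not from the actual scripts):

```python
# Sketch of the buffer-correctness metric described above:
# per-peer BC = correct_chunks / (correct_chunks + bad_chunks),
# reported as the team-wide average plus the min/max to be logged.
def buffer_correctness(correct_chunks, bad_chunks):
    total = correct_chunks + bad_chunks
    return correct_chunks / total if total else 1.0  # empty buffer: nothing wrong yet

def team_stats(per_peer_counts):
    """per_peer_counts: list of (correct, bad) pairs, one per peer."""
    values = [buffer_correctness(c, b) for c, b in per_peer_counts]
    return sum(values) / len(values), min(values), max(values)
```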

Why do well-intended peers leave the team?


Well-intended peers don't leave the team. That was a misunderstanding, sorry.
As I said above, the team size is the sum of (trusted + regular + malicious).

How is buffer_correctness affected by the number of good and malicious
> peers?
> Can we guess it mathematically?
>

I think we can. I will think about it.

What is the number of rounds to expel a given number of malicious peers?
> Does it depend on the number of good/trusted peers?
>

I think we can also predict it mathematically. Yes, it depends on the
number of trusted peers.

Taking into account the number of trusted peers:
> Comparing -n 100 -m 10 -t 1 with -n 100 -m 10 -t 2, malicious are expelled
> at 75 and 30 rounds resp.
> This seems logical: more trusted peers, earlier expulsion.
> Curiosity: why does one malicious peer remain in the first case but not in
> the second?
>

Because of the waiting time. In the first case, the one trusted peer couldn't
discover all the malicious peers in time.

In order to compare similar things, please, can you run the experiments
> until no malicious peers are in the team? Can we guess the final number of
> good peers?
>

Ok, the simplest solution is to allow more waiting time. Also, I'll try to
stop the experiment dynamically, based on the team size.
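
The dynamic stop could be a simple poll of the splitter's logged malicious count instead of a fixed -w. A sketch, where get_malicious_count is a hypothetical probe of the splitter's log:

```python
# Sketch: end the experiment once no malicious peers remain, with a
# timeout as a safety net. get_malicious_count is a hypothetical probe
# of the splitter's log.
import time

def wait_until_expelled(get_malicious_count, poll_seconds=1.0, timeout=600.0):
    """Return True when the malicious count reaches 0, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_malicious_count() == 0:
            return True
        time.sleep(poll_seconds)
    return False
```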

Additionally, due to the non-deterministic behavior, 5 runs should be done,
> removing the best and worst in terms of the number of rounds to expel all
> malicious peers, and providing the average of the other three.
>

Ok, so there will be 3 graphs with trends of the number of malicious peers
and buffer correctness values, plus the average number of rounds to expel
all malicious peers. Am I right?
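
The aggregation rule over runs would then look like this (a minimal sketch of the rule described above):

```python
# Sketch of the aggregation rule: 5 runs, drop the best and the worst by
# rounds-to-expel, report the average of the remaining three.
def trimmed_average(rounds_to_expel):
    """rounds_to_expel: the 5 per-run round counts."""
    assert len(rounds_to_expel) == 5, "rule is defined for exactly 5 runs"
    middle = sorted(rounds_to_expel)[1:-1]  # drop min and max
    return sum(middle) / len(middle)
```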


Thanks for the feedback!

2015-08-04 15:25 GMT+05:00 L.G.Casado <leo@xxxxxx>:

> Dear Ilshat,
>
> Very nice piece of work.
> Now we can move on to study the details of the experimentation. In order
> to do so, and to facilitate the evaluation, you should include a brief
> description of the results and the details of how you got them. Excuse
> me, but I am involved in many other things and I need this level of
> description in order to help with the improvement.
>
> I have many questions. I know I can get them from the code, but I would
> like you to answer them in order to be aware of what is being measured.
>
> In your example:
>
> ./test_strpe.py -n 48 -t 2 -m 50 -w 200
> ./parse_test_strpe.py -n 100 -m 50
>
>
> why are the n values different?
> What is the waiting time? Was it the same in all the experiments?
>
>
> As far as I understood from the Excel sheet, n is always 100.
> This is just a curiosity, for instance in the first case (m=10), 10 rounds
> are needed to have 10 malicious and 100 well-intended peers.
> Are they included randomly? The concern is whether the initialization will
> affect the results.
> When do malicious peers start to attack?
> What type of attack is measured?
> How is buffer_correctness calculated? (an average? best and worst case
> would also be good to know).
> Why do well-intended peers leave the team?
>
> The interesting information will be the summary from the Excel sheet, once
> the type of attack is fixed. For instance:
> How is buffer_correctness affected by the number of good and malicious
> peers?
> Can we guess it mathematically?
>
> What is the number of rounds to expel a given number of malicious peers?
> Does it depend on the number of good/trusted peers?
>
> I think there is a direct correlation between/among:
> i)buffer_correctness and number of malicious
> ii)number of rounds to expel malicious- number of good peers -number of
> trusted peers.
>
> Taking into account the number of trusted peers:
> Comparing -n 100 -m 10 -t 1 with -n 100 -m 10 -t 2, malicious are expelled
> at 75 and 30 rounds resp.
> This seems logical: more trusted peers, earlier expulsion.
> Curiosity: why does one malicious peer remain in the first case but not in
> the second?
>
> Comparing -n 100 -m 25 -t 1 with -n 100 -m 25 -t 2, malicious are
> expelled at 91 and 84 rounds, resp., which is not much of a difference. You
> save only 7 rounds by adding one trusted peer. This is not logical.
>
> For the last case, comparing -n 100 -m 25 -t 1 with -n 100 -m 25 -t 2,
> malicious are expelled at 139 (2 malicious remain, why?) and 102 rounds.
> They are not comparable.
>
> In order to compare similar things, please, can you run the experiments
> until no malicious peers are in the team? Can we guess the final number of
> good peers?
> I think the attack should start when all peers are in the team, to hide
> the initialization phase. And those rounds should not count in determining
> how many rounds were needed to expel them all.
> Additionally, due to the non-deterministic behavior, 5 runs should be done,
> removing the best and worst in terms of the number of rounds to expel all
> malicious peers, and providing the average of the other three.
>
> Regarding
>
> To be honest, I don't see how the splitter can determine malicious peers
> if there are more than 50% of malicious peers in the team. Could you please
> advise some direction?
>
> We have some ideas, but it is better to know how and why the system works
> first.
>
>
>
> I do not know if this can be seen as included in GSoC, but the answers to
> the previous questions are what we want to know, because from them we can
> determine which code will finally be included.
>
> Best regards,
>
> Leo
>
>
>
>
> On Mon, 03-08-2015 at 12:46 +0500, Ilshat Shakirov wrote:
>
> Hello!
>
> I've written a couple of scripts for testing only the STrPe mechanism (for
> now). Here they are:
> https://github.com/ishakirov/p2psp/blob/master/tools/test_strpe.py
> https://github.com/ishakirov/p2psp/blob/master/tools/parse_test_strpe.py
> To run it on non-Mac systems, you have to change the "runStream" function
> (since it uses a Mac-specific path for vlc).
> Also this commit contains changes to the splitter and peer for logging some
> data (buffer correctness, current round, team size).
> The first script runs the experiment; it has 4 params:
> n - team size without malicious and trusted peers
> t - number of trusted peers
> m - number of malicious peers
> w - wait time in seconds
>
> And the second script parses the result files and produces output which
> can be copy-pasted to Excel or Google Sheets.
> It runs with params:
> n - team size including malicious and trusted peers
> m - number of malicious peers.
>
> So these scripts can be used like:
>
> ./test_strpe.py -n 48 -t 2 -m 50 -w 200
> ./parse_test_strpe.py -n 100 -m 50
>
>
> And here are some results:
>
> https://docs.google.com/spreadsheets/d/1BEHWLtKTVnZoqjEXNs-uLeH0sRU7SCXmZHSMuWc_-lo/edit#gid=985076992
>
> Now I am doing the same for the STrPe-DS mechanism. Could you please give
> me some feedback on these graphs? =)
> I will write a blog post later, when STrPe-DS is ready.
>
> Your reputation system, as many others, may produce false positives and
> false negatives.
>
> False positive: a bunch of malicious peers complain about a well-intended
> one.
> False negative: a bunch of malicious peers does not complain about a
> malicious peer.
>
> About time on the team: a malicious peer can perform correctly for a long
> period and suddenly make an attack.
>
> It is not an easy task.
>
> To be honest, I don't see how the splitter can determine malicious peers
> if there are more than 50% of malicious peers in the team. Could you please
> advise some direction?
>
> Thanks!
>
>
> 2015-07-28 12:36 GMT+05:00 L.G.Casado <leo@xxxxxx>:
>
> Dear Ilshat,
>
> Your reputation system, as many others, may produce false positives and
> false negatives.
>
> False positive: a bunch of malicious peers complain about a well-intended
> one.
> False negative: a bunch of malicious peers does not complain about a
> malicious peer.
>
> About time on the team: a malicious peer can perform correctly for a long
> period and suddenly make an attack.
>
> It is not an easy task.
>
> Best,
>
> Leo
>
>
>
> On Tue, 28-07-2015 at 12:00 +0500, Ilshat Shakirov wrote:
>
> As I said, we can assign some reputation parameter to each peer. Let's say
> that we exclude peer P from the team if Q > X, where X is some threshold
> value and
>
> where I_i = 1 if the i-th peer marked peer P as malicious and 0 otherwise.
> w_i can be assigned by the time the peer has existed in the team. So if a
> peer has been in the team from the start of streaming, and it has no
> complaints from the other peers, it has the maximum reputation value (the
> w_i parameter).
> So the question is how to choose w_i optimally (and the threshold value
> X respectively).
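
The aggregation formula itself was in an attachment that did not survive in the archive. A sketch assuming the natural reading Q = Σ w_i·I_i / Σ w_i; this normalized form is an assumption, not necessarily the exact formula from the attachment:

```python
# Hedged sketch of the reputation-weighted complaint score described
# above. ASSUMPTION: Q = sum(w_i * I_i) / sum(w_i); the original formula
# was in an image attachment and may differ (e.g. unnormalized).
def complaint_score(weights, complained):
    """weights: w_i per peer (e.g. time in team); complained: I_i in {0, 1}."""
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * i for w, i in zip(weights, complained)) / total

def should_expel(weights, complained, threshold):
    """Expel peer P when its weighted complaint score Q exceeds X."""
    return complaint_score(weights, complained) > threshold
```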
>
> 2015-07-28 1:08 GMT+05:00 Juan Álvaro Muñoz Naranjo <
> juanalvaro83@xxxxxxxxx>:
>
> Hi again,
>
> Also, I wanted to develop a heuristic for excluding malicious peers from
> the team based on the whole team (not only trusted peers). Do you have any
> ideas? I'm thinking about smth like: 'exclude a peer if more than x% of
> the team marked it as malicious'. Also, we can assign a 'reputation' to
> each peer, so some peers will have more influence on the decision to
> exclude a peer. What do you think?
>
>
>
>
> Yeah, we've been considering it. The problem with the x% solution is that
> it can easily turn against us. Imagine the attacker controls a high
> percentage of the nodes in the network (that would be easy: just run a huge
> number of peers > x% on a small number of machines and you've got it) and
> starts complaining against valid peers. The valid peers would be expelled
> by the splitter. That would be an easy DoS.
>
> So, to reduce the impact of attackers, let's say that we set x to 50%: the
> attacker would need to control more than half of the team in order to
> expel someone. But let's say the attacker controls 45% of the peers. Not
> enough to expel anyone, but now he can act inversely: he uses that 45% of
> malicious peers to send corrupted chunks to a set of peers smaller than
> 50%. Those affected legitimate peers will not be able to play 45% of the
> packets, so they will probably abandon the team due to playback quality
> problems. Again a DoS. And the attackers will not be expelled since the
> splitter did not receive at least 50% of complaints!
>
> Any idea on this?
>
> Juan
>
>
>
> Thanks! =)
>
>
> 2015-07-23 2:01 GMT+05:00 Juan Álvaro Muñoz Naranjo <
> juanalvaro83@xxxxxxxxx>:
>
> Hi Ilshat,
>
> first of all thanks for your update, it was very interesting. Just one
> thing: when the DS technique is completed we'll send the public key under a
> X.509 certificate format. Ideally this certificate should be signed by a
> trusted certificate authority and contain information about the
> organization managing the splitter to offer some degree of trust. The
> certificate might even be distributed with the software, or be given by the
> web page if we were in a web player with WebRTC. Otherwise the attacker
> might send its own public key to the peers impersonating the splitter. But
> for now it is ok like that.
>
> Now, let's get to the point. How to run the experiments. Vicente already
> suggested the use of tools/create_a_team.sh in a previous message (thank
> you Vicente!). Also, Cristóbal suggests this:
> https://github.com/cristobalmedinalopez/p2psp-chunk-scheduling/blob/master/tools/run_experiment.sh
> These solutions are for experiments in one machine of course, which is
> enough for us. If you need more peers you should be able to combine several
> machines by running one script per machine. Of course, we're interested in
> seeing how peers' buffers are filled with chunks and not in video playback:
> as you can see, both scripts send the video signal to /dev/null.
>
> Which experiment to run? We propose the following: we're interested in
> average expulsion times for an attacker, and whether all of them are
> expelled after a given time. Also, the average percentage of gaps in the
> peers'
> buffers (so we can see if playback is possible in presence of attackers and
> after how long). I think you should measure time in terms of sending rounds
> (you know, one round would be the splitter sending one chunk to every
> member of the team).
>
> So, let's say that you have a team of 100 peers. From that team, a
> percentage of peers will be malicious: 1%, 10%, 25%, 50%. I imagine a plot
> in which the X axis is time (number of rounds) and in which we depict:
> number of remaining malicious peers in the team (because some of them will
> be expelled) and average filling of peers' buffers. Ideally, as the number
> of remaining malicious peers decreases the filling of buffers should
> increase.
>
> Showing the number of complaints from peers in the first technique would
> also be interesting.
>
> Another thing to measure would be the percentage of bandwidth used for
> real multimedia data (this is, how many bytes from the total are really
> used for transmitting the video). You can compare the baseline (no security
> measures, just plain video without malicious attackers) against both
> techniques.
>
> So, for running these experiments you'll need to decide which information
> you want to store from each peer (buffer filling percentage at each
> iteration, how many malicious peers at each iteration, how many bytes were
> sent and how many of them were used for video, how many complaints arrived
> at the splitter in every iteration). Am I forgetting anything?
>
> My suggestion is to run the experiment for the first technique and see how it
> goes. Make sure to run the experiment more than once, say 5 times, and then
> get the average of them all.
>
> Good work,
>
> Juan
>
> 2015-07-21 20:06 GMT+02:00 Vicente Gonzalez <
> vicente.gonzalez.ruiz@xxxxxxxxx>:
>
> Hi Ilshat,
>
> did you try tools/create_a_team.sh?
>
> (I tested running up to 100 peers on my 8GB Mac machine)
>
> Regards,
> Vi.
>
>
> On Sun, Jul 19, 2015 at 8:36 PM Ilshat Shakirov <im.shakirov@xxxxxxxxx>
> wrote:
>
> Hello!,
>
> Sorry for the long delay.
>
> Here is status update about CIS of rules project:
> http://shakirov-dev.blogspot.ru/2015/07/5-6-7-week.html
>
> Also, I need some help with testing big (i.e., 20-peer) p2psp teams. I
> want a solution that allows testing experiments to be reproduced easily, so
> commenting out lines (to remove the need to run vlc) is not suitable for
> this.
> I've written a simple script which runs several peers (on one machine) and
> here is the result
> <https://www.evernote.com/shard/s427/sh/0b070670-8de9-4a61-acec-562035cfc3ef/7403917d3ca736eea6d60da8ba23543b>.
> I think it's quite hard to understand anything in this (or reproduce it).
> So, what is the best solution for testing p2psp teams and gathering some
> stats?
>
> Thanks!
>
> 2015-06-25 16:13 GMT+05:00 Vicente Gonzalez <
> vicente.gonzalez.ruiz@xxxxxxxxx>:
>
>
>
> On Wed, Jun 24, 2015 at 5:48 PM L.G.Casado <leo@xxxxxx> wrote:
>
> Hi all,
>
> On Wed, 24-06-2015 at 16:44 +0500, Ilshat Shakirov wrote:
>
> Ok; is there any option to run a peer without running a player? I'm going
> to run all the peers on one local machine, is that right?
>
>
>
>
> At this moment, the easiest way to test a lot of peers on one machine is
> to connect a NetCat client [http://netcat.sourceforge.net/] to each peer.
> It is not the most efficient solution, but you should be able to run
> hundreds of peers on an 8GB machine. However, it is quite simple to avoid
> sending the stream in each peer: just comment out (temporarily) the code
> that feeds the player.
>
> Regards,
> Vi.
> --
> --
> Vicente González Ruiz
> Depto de Informática
> Escuela Técnica Superior de Ingeniería
> Universidad de Almería
>
> Carretera Sacramento S/N
> 04120, La Cañada de San Urbano
> Almería, España
>
> e-mail: vruiz@xxxxxx
> http://www.ual.es/~vruiz
> tel: +34 950 015711
> fax: +34 950 015486
>
>
>
> --
> --
> Vicente González Ruiz
> Depto de Informática
> Escuela Técnica Superior de Ingeniería
> Universidad de Almería
>
> Carretera Sacramento S/N
> 04120, La Cañada de San Urbano
> Almería, España
>
> e-mail: vruiz@xxxxxx
> http://www.ual.es/~vruiz
> tel: +34 950 015711
> fax: +34 950 015486
>
>
>
>
>
>
>
>
>
>
>
>
> --
> Mailing list: https://launchpad.net/~p2psp
> Post to     : p2psp@xxxxxxxxxxxxxxxxxxx
> Unsubscribe : https://launchpad.net/~p2psp
> More help   : https://help.launchpad.net/ListHelp
>
>
>
>


