AUD 145,000 is available across three grant categories, and AUD 10,000 for one award, through an open and competitive process. ISIF Asia focuses on supporting organisations, people and ideas that are contributing to the development of the Internet and highlight its social and economic impact in the Asia Pacific. Applications are open from today until 30 August 2017.
For 2017, we have three grant categories and one award to allocate. As in previous years, we are very excited about all the new ideas that people will share with us at the ISIF Asia secretariat.
1) Internet Operations Research Grants: These grants support the development of an independent Internet research community whose work can improve the availability, reliability, and security of the Internet in the Asia Pacific region, and widen its coverage, applications and benefits to the community. The grants are open to research focused on Internet operations, infrastructure and related protocols, such as network measurement and analysis, IPv6, BGP routing, network security, as well as peering and interconnection.
2) Cybersecurity Grants: These grants support projects focusing on practical solutions around network resiliency and security. The grants are open to all relevant topics of interest around network resiliency and security, focused on one or more of the following areas: naming, routing, measurement, traffic management, confidential communications, data security and integrity, security of IoT devices and applications, security of critical infrastructure such as energy grids, end-user device security, and building security resilience in your local community.
3) Internet for Development Grant: This grant supports the scaling-up of an innovative Internet-based solution to development issues. Innovation and a development focus must be an integral part of the project proposal. Applications areas such as women and girls in IT, diversity and inclusion, access provision, utility services, devices, IoT, IPv6, privacy, democracy enhancement, open data, economic empowerment, poverty alleviation, and health and education will be considered by the Selection Committee.
2017 Internet for Development Award
The 2017 award winner will receive an AUD 5,000 cash prize plus a travel grant to attend the Internet Governance Forum 2017, in Geneva, Switzerland. The 2017 theme of ‘Internet for development’ is intended to cast a wide net to capture truly innovative approaches to development issues. Applications areas such as women and girls in IT, diversity and inclusion, access provision, utility services, devices, IoT, IPv6, privacy, democracy enhancement, open data, economic empowerment, poverty alleviation, and health and education will be considered by the Selection Committee.
A significant number of islands are too remote to make submarine cable Internet connections economical. We’re looking mostly at the Pacific, but islands like these exist elsewhere, too. There are also remote places on some continents, and of course there are cruise ships, which can’t connect to cables while they’re underway. In such cases, the only way to provision Internet commercially at this point are satellites.
Satellite Internet is expensive and sells by the Mbps per month (megabits per second of capacity per month). So places with few people – up to several tens of thousands, perhaps – usually find themselves with connections clearly this side of a Gbps (gigabit per second). The time it takes for the bits to make their way to the satellite and back adds latency, and so we have all the ingredients for trouble: a bottleneck prone to congestion, and long round-trip times (RTT) which give TCP senders an outdated picture of the actual level of congestion that their packets are going to encounter.
I have discussed these effects at length in two previous articles published on the APNIC Blog, so I won’t repeat them here. Suffice to say: It’s a problem worth investigating in depth, and we’re doing so with help from ISIF Asia grants in 2014 and 2016 and from Internet NZ. This blog post describes how we’ve built our simulator, the challenges we’ve come up against, and where we are, some way into our journey.
What are the questions we want to answer?
The list is quite long, but our initial goals are:
We’d like to know under which circumstances (link type – GEO or MEO, bandwidth, load, input queue size) various adverse effects such as TCP queue oscillation or standing queues occur.
We’d like to know what input queue size represents the best compromise in different circumstances.
We’d like to know how much improvement we can expect from devices peripheral to the link, such as performance-enhancing proxies and network coders.
We’d like to know how best to parameterise such devices in the scenarios in which one might deploy them.
We’d like to know how devices with optimised parameters behave when loads and/or flow size distributions change.
That’s just a start, of course, and before we can answer any of these, the biggest question to solve is: How do you actually build, configure, and operate such a simulator?
Why simulate, and why build a hybrid software/hardware simulator?
We get this question a lot. There are things we simply can’t try on real satellite networks without causing serious inconvenience or cost. Some of the solutions we are looking at require significant changes in network topology. Any island out there keen to go without Internet for a few days while we try something out? So we’d better make sure it can be done in a controlled environment first.
Our first idea was to try to simulate things in software. There is a generation of engineers and networking people who have been brought up on the likes of ns-2, ns-3, mininet etc., and they swear by it. We’re part of that generation, but the moment we tried to simulate a satellite link in ns-2 with more than a couple of dozen Mbps and more than a handful of simultaneous users generating a realistic flow size distribution, we knew that we were in trouble. The experiment was meant to simulate just a few minutes’ worth of traffic, for just one link configuration, and we were looking at days of simulation. No way. This wasn’t scalable.
Also, with a software simulator, you rely on software simulating a complex system with timing, concurrent processes, etc., in an entirely sequential way. How do you know that it gets the chaos of congestion right?
So we opted for farming out as much as we could to hardware. Here, we’re dealing with actual network components, real packets, and real network stacks.
There’s been some debate as to whether we shouldn’t be calling the thing an emulator rather than a simulator. Point taken. It’s really a bit of both. We take a leaf here from airline flight simulators, which also leverage a lot of hardware.
The tangible assets
Our island-based clients at present are 84 Raspberry Pis, complemented by 10 Intel NUCs. Three Supermicro servers simulate the satellite link and terminal equipment (such as PEPs or network coding encoders and decoders), and another 14 Supermicros of varying vintage act as the servers of the world that provide the data which the clients on the island want.
The whole thing is tied together by a number of switches, and all servers have external Internet access, so we can remotely access them to control experiments without having to load the actual experimental channel. The image in Figure 1 below shows the topology – the “island” is on the left, the satellite “link” in the middle, and the “world” servers on the right.
Simulating traffic: The need for realistic traffic data
High latency is a core difference between satellite networks and, say, LANs or MANs. As I’ve explained in a previous blog, this divides TCP flows (the packets of a TCP connection going in one direction) into two distinct categories: Flows which are long enough to become subject to TCP congestion control, and those that are so short that their last data packet has left the sender by the time the first ACK for data arrives.
In networks where the RTT is no more than a millisecond or two, most flows fall into the former category. In a satellite network, most flows never experience congestion control – but they also contribute very little data. Most of the data on satellite networks travels in flows whose congestion window changes in response to the ACKs received.
So we were lucky to have a bit of netflow data courtesy of a cooperating Pacific ISP. From this, we’ve been able to extract a flow size distribution to assist us in traffic generation. To give you a bit of an idea as to how long the tail of the distribution is: We’re looking at a median flow size of under 500 bytes, a mean flow size of around 50 kB, and a maximum flow size of around 1 GB.
A quick reminder for those who don’t like statistics: The median is what you get if you sort all flows by size and take the flow size half-way down the list. The mean is what you get by adding all flow sizes and dividing by the number of flows. A distribution with a long tail has a mean that’s miles from the median. Put simply: Most flows are small but most of the bytes sit in large flows.
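To see just how far a long tail drags the mean away from the median, here is a toy example with made-up flow sizes (not our actual netflow data):

```python
# Synthetic long-tailed "flow size" sample: lots of tiny flows, a few big
# ones, and one giant flow. Numbers are illustrative only.
from statistics import mean, median

flows = [400] * 9_000 + [50_000] * 900 + [5_000_000] * 99 + [1_000_000_000]

print(median(flows))        # → 400.0  (half of all flows are tiny)
print(round(mean(flows)))   # → 154360 (the giant flows dominate the mean)
```

The single 1 GB flow alone contributes more bytes than the 9,000 smallest flows combined, which is exactly the "most flows are small, most bytes sit in large flows" effect described above.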
Simulating traffic: Supplying a controllable load level
Another assumption we make is this: By and large, consumer Internet users are reasonably predictable creatures, especially if they come as a crowd. As a rule of thumb, if we increase the number of users by a factor of X, then we can reasonably expect the number of flows of a particular size to also increase by roughly a factor of X. So if the flows we sampled were created by, say, 500 users, we can approximate the behaviour of 1000 users simply by creating twice as many flows from the same distribution. This gives us a kind of “load control knob” for our simulator.
But how are we creating the traffic? This is where our own purpose-built software comes in. Because we have only 84 Pis and 10 NUCs, but want to be able to simulate thousands of parallel flows, each physical “island client” has to play the role of a number of real clients. Our client software does this by creating a configurable number of “channels”, say 10 or 30 on each physical client machine.
Each channel creates a client socket, randomly selects one of our “world” servers to connect to, opens a connection and receives a certain number of bytes, which the server determines by random pick from our flow size distribution. The server then disconnects, and the client channel creates a new socket, selects another server, etc. Selecting the number of physical machines and client channels to use thus gives us an incremental way of ramping up load on the “link” while still having realistic conditions.
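A minimal sketch of what one such client channel does – names and the flow size list are hypothetical, and a tiny local stand-in server makes the sketch runnable; our actual client software is considerably more elaborate:

```python
# Sketch of a single "client channel": connect to a randomly chosen server,
# read until the server disconnects, then start over with a new connection.
import random
import socket
import threading

FLOW_SIZES = [500, 1_500, 50_000]  # stand-in for the netflow-derived distribution

def serve_once(listener):
    """Stand-in 'world' server: pick a flow size, send it, disconnect."""
    conn, _ = listener.accept()
    conn.sendall(b"x" * random.choice(FLOW_SIZES))
    conn.close()

def channel(servers, n_flows):
    """One client channel: n_flows sequential connect/receive cycles."""
    received = 0
    for _ in range(n_flows):
        host, port = random.choice(servers)        # random "world" server
        with socket.create_connection((host, port)) as s:
            while chunk := s.recv(4096):           # read until server closes
                received += len(chunk)
    return received

# Wire a local stand-in server to one channel so the sketch runs end to end.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(8)
port = listener.getsockname()[1]
threading.Thread(target=lambda: [serve_once(listener) for _ in range(3)],
                 daemon=True).start()

total = channel([("127.0.0.1", port)], 3)
print(total, "bytes received")
```

Running tens of such channels per physical machine is what lets 94 physical clients stand in for thousands of real users.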
Simulating traffic: Methodology challenges
There are a couple of tricky spots to navigate, though: Firstly, netflow reports a significant number of flows that consist of only a single packet, with or without payload data. These could be rare ACKs flowing back from a slow connection in the opposite direction, or SYN packets probing, or…
However, our client channels create a minimum amount of traffic per flow through their connection handshake, and this amount exceeds the size of these tiny flows. So we approximate their existence by pro-rating them in the distribution, i.e., each client channel connection accounts for several of these small single-packet flows.
Secondly, the long tail of the distribution means that as we sample from it, our initial few samples are very likely to have an average size that is closer to the median than to the mean. In order to obtain a comparable mean, we need to run our experiments for long enough so that our large flows have a realistic chance to occur. This is a problem in particular with experiments using low bandwidths, high latencies (GEO sats), and a low number of client channels.
For example, a ten-minute experiment simulating a 16 Mbps GEO link with 20 client channels will typically generate a total of only about 14,000 flows. The main reason for this is the time it takes to establish a connection via a GEO RTT of over 500 ms. Our distribution contains well over 100,000 flows, with only a handful of really giant flows. So results at this end are naturally a bit noisy, depending on whether, and which, giant flows in the hundreds of MB get picked by our servers. This forces us to run rather lengthy experiments at this end of the scale.
Simulating the satellite link itself
For our purposes, simulating a satellite link mainly means simulating the bandwidth bottleneck and the latency associated with it. More complex scenarios may include packet losses from noise or fading on the link, or issues related to the link layer protocol. We’re dedicating an entire server to the simulation (server K in the centre of the topology diagram), so we have enough computing capacity to handle every case of interest. The rest is software, and here the choice is chiefly between a network simulator (such as ns-3) and something relatively simple like the Linux tc utility.
The latter lets us simulate bandwidth constraints, delay, sporadic packet loss and jitter: enough for the moment. That said, tc is a complex beast, which exists in multiple versions and – as we found out – is quite quirky and not extensively documented.
Following examples given by various online sources, we configured a tc netem qdisc to represent the delay, which we in turn chained to a token bucket filter. The online sources also suggested quality control: ping across the simulated link to ensure the delay is in place, then run iperf in UDP mode to check that the token bucket filter is working correctly. Sure enough, the copy-and-paste example passed these two tests with flying colours. It’s just that we then got rather strange results once we ran TCP across the link. So we decided to ping while iperf was running. Big surprise: Some of the ping RTTs were in the hundreds of seconds – far longer than any buffer involved could explain. Moreover, no matter which configuration parameter we tweaked, the effect wouldn’t go away. A bug, it seems. We finally found a workaround involving ingress redirection to an intermediate function block device, which passes all tests and produces sensible results for TCP. Just goes to show how important quality control is!
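For illustration, here is roughly what the netem-plus-token-bucket chain looks like when expressed as the command strings a setup script might issue. The device name and parameter values are illustrative, not our actual configuration:

```python
# Sketch: build the tc command strings for a netem qdisc (delay + jitter)
# chained to a token bucket filter (bandwidth + queue limit). A real setup
# script would pass these to the shell on the link-simulation server.
def satlink_tc_commands(dev, delay_ms, jitter_ms, rate_mbit, limit_bytes):
    return [
        # netem contributes the one-way satellite delay (with some jitter)...
        f"tc qdisc add dev {dev} root handle 1: "
        f"netem delay {delay_ms}ms {jitter_ms}ms",
        # ...chained to a tbf that enforces the bandwidth bottleneck; the
        # limit parameter is the input queue capacity in bytes.
        f"tc qdisc add dev {dev} parent 1: handle 2: "
        f"tbf rate {rate_mbit}mbit burst 32kbit limit {limit_bytes}",
    ]

cmds = satlink_tc_commands("eth1", 250, 5, 16, 100_000)
for cmd in cmds:
    print(cmd)
```

The ingress-redirection workaround mentioned above adds further steps on top of this basic chain, which we omit here.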
Simulating world latency
We also use a similar technique to add a variety of fixed ingress and egress delays to the “world” servers. This models the fact that TCP connections in real life don’t end at the off-island sat gate, but at a server that’s potentially a continent or two down the road and therefore another few dozen or even hundreds of milliseconds away.
Link periphery and data collection
We already know that we’ll want to try PEPs, network coders etc., so we have another server each on both the “island” (server L) and the “world” (server J) side of the server (K) that takes care of the “satellite link” itself. Where applicable, these servers host the PEPs and / or network coding encoders / decoders. Otherwise, these servers simply act as routers. In all cases, these two servers also function as our observation points.
At each of the two observation points, we run tcpdump on eth0 to capture all packets entering and leaving the link at either end. These get logged into pcap capture files on L and J.
An alternative to data capture here would be to capture and log on the clients and / or “world” servers. However, capture files are large and we expect lots of them, and the SD cards on the Raspberry Pis really aren’t a suitable storage medium for this sort of thing. Besides that, we’d like to let the Pis and servers get on with the job of generating and sinking traffic rather than writing large log files. Plus, we’d have to orchestrate the retrieval of logs from 108 machines with separate clocks, meaning we’d have trouble detecting effects such as link underutilisation.
So servers L and J are really without a lot of serious competition as observation points. After each experiment, we use tshark to translate the pcap files into text files, which we then copy to our storage server (bottom).
For some experiments, we also use other tools such as iperf (so we can monitor the performance of a well-defined individual download) or ping (to get a handle on RTT and queue sojourn times). We run these between the NUCs and some of the more powerful “world” servers.
A basic experiment sequence
Each experiment basically follows the same sequence, which we execute via our core script:
Configure the “sat link” with the right bandwidth, latency, queue capacity etc.
Configure and start any network coded tunnel or PEP link we wish to use between servers L and J.
Start the tcpdump capture at the island end (server L) of the link.
Start the tcpdump capture at the world end (server J) of the link with a little delay. This ensures that we capture every packet heading from the world to the island side.
Start the iperf server on one of the NUCs. Note that in iperf, the client sends data to the server rather than downloading it.
Start the world servers.
Ping the special purpose client from the special purpose server. This functions as a kind of “referee’s start whistle” for the experiment as it creates a unique packet record in both tcpdump captures, allowing us to synchronise them later.
Start the island clients as simultaneously as possible.
Start the iperf client.
Start pinging – typically, we ping 10 times per second.
Wait for the core experiment duration to expire. The clients terminate themselves.
Ping the special purpose client from the special purpose server again (“stop whistle”).
Terminate pinging (usually, we ping only for part of the experiment period, though)
Terminate the iperf client.
Terminate the iperf server.
Terminate the world servers.
Convert the pcap files on J and L into text log files with tshark.
Retrieve text log files, iperf log and ping log to the storage server.
Start the analysis on the storage server.
Between most steps, there is a wait period to allow the previous step to complete. For a low-load 8 Mbps GEO link, the core experiment time needs to be 10 minutes to yield a halfway representative sample from the flow size distribution. The upside is that the pcap log files are small, so they need less time for conversion and transfer to storage. For higher bandwidths and more client channels, we can get away with shorter core experiment durations. However, as these produce larger pcap files, conversion and transfer take longer. Altogether, we budget around 20 minutes for a basic experiment run.
Tying it all together
We now have more than 100 machines in the simulator. Even in our basic experiment sequence, we tend to use most if not all of them. This means we need to be able to issue commands to individual machines or groups of machines in an efficient manner, and we need to be able to script this.
Enter the pssh utility. This useful little program lets our scripts establish a number of SSH connections to multiple machines simultaneously, e.g., to start our servers or clients, or to distribute configuration information. It’s not without its pitfalls though: For one, the present version has a hardwired limit of 32 simultaneous connections that isn’t properly documented in the man page. If one requests more than 32 connections, pssh quietly runs the first 32 immediately and then delays the next 32 by 60 seconds, the next 32 by 120 seconds, etc.
We wouldn’t have noticed this had we not added a feature to our analysis script that checks whether all clients and servers involved in the experiment are seen throughout the whole core experiment period. Originally, we’d intended this feature to pick up machines that had crashed or had failed to start. Instead, it alerted us to the fact that quite a few of our machines were late starters, always by exactly a minute or two.
We now have a script that we pass the number of client channels required. It computes how to distribute the load across the Pi and NUC clients, creates subsets of up to 32 machines to pass to pssh, and invokes the right number of pssh instances with the right client parameters. This lets us start up all client machines within typically less than a second. The whole episode condemned a week’s worth of data to /dev/null, but shows again just how important quality assurance is.
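The batching logic boils down to something like the following sketch (the 32-connection limit is what we observed in the pssh version we ran; host names are illustrative):

```python
# Split a host list into batches small enough for pssh's undocumented
# 32-connection limit, so each batch can go to its own pssh instance.
PSSH_LIMIT = 32

def pssh_batches(hosts, limit=PSSH_LIMIT):
    """Return consecutive slices of at most `limit` hosts each."""
    return [hosts[i:i + limit] for i in range(0, len(hosts), limit)]

hosts = [f"pi{n:02d}" for n in range(1, 85)]  # 84 Raspberry Pi clients
batches = pssh_batches(hosts)
print([len(b) for b in batches])  # → [32, 32, 20]
```

One pssh invocation per batch starts all client machines near-simultaneously instead of in staggered 60-second waves.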
Automating the complex processes is vital, so we keep adding scripts to the simulator as we go to assist us in various aspects of analysis and quality assurance.
Observations – and how we use them
Our basic experiment collects four pieces of information:
A log file with information on the packets that enter the link from the “world” side at J (or the PEP or network encoder as the case may be). This file includes a time stamp for each packet, the source and destination addresses and ports, and the sizes of IP packets, the TCP packets they carry, and the size of the payload they contain, plus sequence and ACK numbers as well as the status of the TCP flags in the packet.
A similar log file with information on the packets that emerge at the other end of the link from L and head to the “island” clients.
An iperf log, showing average data rates achieved for the iperf transfer.
A ping log, showing the sequence numbers and RTT values for the ping packets sent.
The first two files allow us to determine the total number of packets, bytes and TCP payload bytes that arrived at and left the link. This gives us throughput, goodput, and TCP byte loss, as well as a wealth of performance information for the clients and servers. For example, we can compute the number of flows achieved and the average number of parallel flows, or the throughput, goodput, and byte loss for each client.
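As an illustration of this post-processing step, here is a toy version of the binning behind the throughput and goodput figures – the field names and sample records are made up; the real input comes from the tshark text logs:

```python
# Bin packet records into 100 ms intervals and compute throughput (all IP
# bytes) and goodput (TCP payload bytes only) per bin, in Mbps.
from collections import defaultdict

BIN = 0.1  # bin width in seconds

def rates(records):
    """records: iterable of (timestamp_s, ip_bytes, payload_bytes) tuples."""
    thr, good = defaultdict(int), defaultdict(int)
    for ts, ip_bytes, payload in records:
        b = int(ts / BIN)
        thr[b] += ip_bytes
        good[b] += payload
    # convert per-bin byte counts into Mbps
    return {b: (thr[b] * 8 / BIN / 1e6, good[b] * 8 / BIN / 1e6) for b in thr}

sample = [(0.01, 1500, 1448), (0.05, 1500, 1448), (0.12, 60, 0)]
print(rates(sample))
```

The gap between the two per-bin numbers is the header overhead; the gap between throughput and link capacity is idle capacity, which is what the plots below visualise.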
Figure 2 above shows throughput (blue) and goodput (red) in relation to link capacity, taken at 100 ms intervals. The link capacity is the brown horizontal line – 16 Mbps in this case.
Any bit of blue that doesn’t reach the brown line represents idle link capacity – evidence of an empty queue at some time during the 100 ms in question. So you’d think there’d be no problem fitting a little bit of download in, right? Well, that’s exactly what we’re doing at the beginning of the experiment, and you can indeed see that there’s quite a bit less spare capacity – but still room for improvement.
Don’t believe me? Well, the iperf log gives us an idea as to how a single long download fares in terms of throughput. Remember that our clients and servers aim at creating a flow mix but don’t aim to complete a standardised long download. So iperf is the more appropriate tool here. In this example, our 40 MB download takes over 120 s with an average rate of 2.6 Mbps. If we run the same experiment with 10 client channels instead of 20, iperf might take only a third of the time (41 s) to complete the download. That is basically the time it takes if the download has the link to itself. So adding the extra 10 client channel load clearly has a significant impact.
At 50 client channels, iperf takes 186 seconds, although this figure can vary considerably depending on which other randomly selected flows run in parallel. At 100 client channels, the download sometimes won’t even complete – and if it does, it’s usually above the 400-second mark, and there’s very little spare capacity left (Figure 3).
You might ask why the iperf download is so visible in Figure 2 compared to the traffic contributed by our hundreds of client channels. The answer lies once again in the extreme nature of our flow size distribution and the fact that, at any time, a lot of the channels are in connection establishment mode: The 20 client channel experiment above averages only just under 18 parallel flows, and almost all of the 14,000 flows this experiment generates are less than 40 MB: In fact, 99.989% of the flows in our distribution are shorter than our 40 MB download. As we add more load, the iperf download gets more “competition” and also contributes at a lower goodput rate.
The ping log, finally, gives us a pretty good estimate of queue sojourn time. We know the residual RTT from our configuration but can also measure it by pinging after step 2 in the basic experiment sequence. Any additional RTT during the experiment reflects the extra time that the ICMP ping packets spend being queued behind larger data packets waiting for transmission.
One nice feature here is that our queue at server K practically never fills completely: To do so, the last byte of the last packet to be accepted into the queue would have to occupy the last byte of queue capacity. However, with data packets being around 1500 bytes, the more common scenario is that the queue starts rejecting data packets once it has less than 1500 bytes capacity left. There’s generally still enough capacity for the short ping packets to slip in like a mouse into a crowded bus, though. It’s one of the reasons why standard size pings aren’t a good way of detecting whether your link is suffering from packet loss, but for our purposes – measuring sojourn time – it comes in really handy.
Figure 4 shows the ping RTTs for the first 120 seconds of the 100 client channel experiment above. Notice how the maximum RTT tops out at just below 610 ms? That’s 50 ms above the residual RTT of 560 ms (500 ms satellite RTT plus 60 ms terrestrial, with the ±5% terrestrial jitter we’ve configured here). No surprises there: 50 ms is exactly the time it takes to transmit the 800 kbits of capacity that the queue provides. In other words: The pings at the top of the peaks in the plot hit a queue that was, for the purposes of data transfer, overflowing.
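The arithmetic behind that 50 ms figure is simply the queue’s drain time, i.e., queue capacity divided by link rate:

```python
# Time to drain a completely full queue at the link's transmission rate.
queue_bits = 800_000        # 800 kbit of queue capacity
link_bps = 16_000_000       # 16 Mbps link

drain_ms = queue_bits * 1000 / link_bps
print(drain_ms)  # → 50.0
```

A ping that arrives just as the queue fills therefore waits an extra 50 ms on top of the residual RTT, which is exactly where the peaks in the plot sit.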
The RTT here manages to hit its minimum quite frequently, and this shows in throughput of just under 14 Mbps, 2 Mbps below link capacity.
Note also that where the queue hits capacity, it generally drains again within a very short time frame. This is queue oscillation. Note also that we ping only once every 100 ms, so we may be missing shorter queue drain or overflow events here because they are too short in duration – and going by the throughput, we know that we have plenty of drain events.
This plot also illustrates one of the drawbacks of having a queue: between, e.g., 35 and 65 seconds, there are multiple occasions when the RTT doesn’t return to residual for a couple of seconds. This is called a “standing queue” – the phenomenon commonly associated with buffer bloat. At times, the standing queue doesn’t absorb any actual bursts for a couple of seconds but simply adds 20 ms or so of delay. This is undesirable, not just for real-time traffic using the same queue, but also for TCP trying to get a handle on the RTT. Here, it’s not dramatic, but if we add queue capacity, we can provoke an almost continuous standing queue: the more buffer we provide, the longer it will get under load.
Should we be losing packet loss altogether?
There’s one famous observable that’s often talked about but surprisingly difficult to measure here: packet loss. How come, you may ask, given that we have lists of packets from before and after the satellite link?
Essentially, the problem boils down to the question of what we count as a packet, segment or datagram at different stages of the path.
Here’s the gory detail: The maximum size of a TCP packet can in theory be anything that will fit inside a single IP packet. The size of the IP packet in turn has to fit into the Ethernet (or other physical layer) frame and has to be able to be processed along the path.
In our simulator, and in most real connected networks, we have two incompatible goals: Large frames and packets are desirable because they lower overhead. On the other hand, if noise or interference hits on the satellite link, large frames present a higher risk of data loss: Firstly, at a given bit error rate, large packets are more likely to cop bit errors than small ones. Secondly, we lose more data if we have to discard a large packet after a bit error than if we have to discard a small packet only.
Then again, most of our servers sit on Gbps Ethernet or similar, where the network interfaces have the option of using jumbo frames. The jumbo frame size of up to 9000 bytes represents a compromise deemed reasonable for this medium. However, these may not be ideal for a satellite link. For example, given a bit error probability of 0.0000001, we can expect to lose 7 in 1000 jumbo frames, or 0.7% of our packet data. If we use 1500 byte frames instead, we’ll only lose just over 1 in 1000 frames, or 0.12% of our packet data. Why is that important? Because packet loss really puts the brakes on TCP, and these numbers really make a difference.
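Those loss figures follow directly from the probability that at least one bit in a frame gets flipped:

```python
# Probability that a frame of the given size contains at least one bit
# error, assuming independent bit errors at the given bit error rate.
def frame_loss_prob(frame_bytes, ber):
    return 1 - (1 - ber) ** (frame_bytes * 8)

print(round(frame_loss_prob(9000, 1e-7), 4))  # jumbo frame:   ~0.0072
print(round(frame_loss_prob(1500, 1e-7), 4))  # 1500 B frame:  ~0.0012
```

So at this bit error rate, jumbo frames lose roughly six times as much data as 1500-byte frames, which is why the frame size choice matters on a noisy link.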
The number of bytes that a link may transfer in a single IP packet is generally known as the maximum transmission unit (MTU). There are several ways to deal with diversity in MTUs along the path: Either we restrict the size of our TCP segments right at the sender to fit into the smallest MTU along the path, or we rely on devices along the way to split IP packets with TCP segments into smaller IP packets for us. Modern network interfaces do this on the fly with TCP segmentation offload (TSO) and generic segmentation offload (GSO, see https://sandilands.info/sgordon/segmentation-offloading-with-wireshark-and-ethtool). Finally, the emergency option when an oversize IP datagram hits a link is to fragment the IP datagram.
In practice, TSO and GSO are so widespread that TCP senders on a Gbps network will generally transmit jumbo frames and have devices further down the path worry about it. This leaves us with a choice in principle: Allow jumbo frames across the “satellite link”, or break them up?
Enter the token bucket filter: If we want to use jumbo frames, we need to make the token bucket large enough to accept them. This has an undesirable side effect: Whenever the bucket has had a chance to fill with tokens, any arriving packets that are ready to consume them get forwarded immediately, regardless of rate (which is why you see small amounts of excess throughput in the plots above). So we’d “buy” jumbo frame capability by considerably relaxing instantaneous rate control for smaller data packets. That’s not what we want, so it seems prudent to stick with the “usual” MTUs of around 1500 bytes and accept fragmentation of large packets.
There’s also the issue of tcpdump not necessarily seeing the true number of packets/fragments involved, because it captures before segmentation offload etc. (https://ask.wireshark.org/questions/3949/tcpdump-vs-wireshark-differences-packets-merged).
The gist of it all: The packets we see going into the link aren’t necessarily the packets that we see coming out at the other end. Unfortunately that happens in a frighteningly large number of cases.
In principle, we could check from TCP sequence numbers & IP fragments whether all parts of each packet going in are represented in the packets going out. However, with 94 clients all connecting to 14 servers with up to 40-or-so parallel channels, doing the sequence number accounting is quite a computing-intensive task. But is it worth it? For example, if I count a small data packet with 200 bytes as lost when it doesn’t come out the far end, then what happens when I have a jumbo frame packet with 8000 bytes that gets fragmented into 7 smaller packets and one of these fragments gets dropped? Do I count the latter as one packet loss, or 1/7th of a packet loss, or what?
The good news: For our purposes, packet loss doesn’t actually help explain much unless we take it as an estimate of byte loss. But byte loss is an observable we can compute very easily here: We simply compare the number of observed TCP payload bytes on either side of the link. Any missing byte must clearly have been in a packet that got lost.
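The computation is then as simple as it sounds; the numbers below are made up for illustration:

```python
# Byte loss across the link: total TCP payload bytes seen entering at one
# observation point minus those seen leaving at the other.
def byte_loss(payload_in, payload_out):
    """Return (bytes lost, fraction of input bytes lost)."""
    lost = payload_in - payload_out
    return lost, (lost / payload_in if payload_in else 0.0)

lost, frac = byte_loss(1_200_000_000, 1_188_000_000)
print(lost, f"{frac:.1%}")  # → 12000000 1.0%
```

Unlike packet loss, this measure is indifferent to how packets were segmented or fragmented along the way, which is exactly what we need here.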
There is a saying in my native Germany: “Wer misst, misst Mist”. Roughly translated, it’s a warning that those who measure blindly tend to produce rubbish results. We’ve already seen a couple of examples of how an “out-of-left-field” effect caused us problems. I’ll spare you the others, but suffice it to say there were quite a few!
So what are we doing to ensure we’re producing solid data? Essentially, we rely on four pillars:
Configuration verification and testing. This includes checking that link setups have the bandwidth configured, that servers and clients are capable of handling the load, and that all machines are up and running at the beginning of an experiment.
Automated log file analysis. When we compare the log files from either side of the link, we also compute statistics about when each client and server was first and last seen, and how much traffic went to/from the respective machine. Whenever a machine deviates from the average by more than a small tolerance or a machine doesn’t show up at all, we issue a warning.
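A simplified version of such a check (threshold and machine names are illustrative) might look like this:

```python
def check_machines(expected, traffic_bytes, tolerance=0.2):
    """Warn about machines missing from the logs, or whose traffic deviates
    from the average of the machines seen by more than `tolerance`."""
    warnings = []
    for machine in expected:
        if machine not in traffic_bytes:
            warnings.append(f"{machine}: not seen in logs at all")
    seen = [traffic_bytes[m] for m in expected if m in traffic_bytes]
    if seen:
        mean = sum(seen) / len(seen)
        for machine in expected:
            b = traffic_bytes.get(machine)
            if b is not None and abs(b - mean) > tolerance * mean:
                warnings.append(f"{machine}: {b} bytes deviates "
                                f">{tolerance:.0%} from mean {mean:.0f}")
    return warnings

# Nine clients behave, one underperforms, one is missing entirely:
traffic = {f"client{i}": 100_000 for i in range(1, 10)}
traffic["client10"] = 40_000
alerts = check_machines([f"client{i}" for i in range(1, 12)], traffic)
```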
Human inspection of results: Are the results feasible? E.g., are throughput and goodput within capacity limits? Do observables change in the expected direction when we change parameters such as load or queue capacity? Plots such as those discussed above also assist us in assessing quality. Do they show what we’d expect, or do they show artefacts? This also includes discussion of our results so there are at least four eyes looking at data.
Scripting: Configuring an experiment requires setting no fewer than seven parameters for the link simulation, fourteen different RTT latencies for the servers, load and timeout configurations for 94 client machines, and an iperf download size, plus the orchestrated execution of everything with the right timing – see above. Configuring all of this manually would be a recipe for disaster, so we script as much as we can – this takes care of a lot of typos!
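As an illustration of why scripting pays off, a single generator function can emit a whole experiment configuration from a handful of top-level knobs (the names and structure here are invented for the sketch, not our actual scripts):

```python
def make_experiment_config(link_params, server_rtts_ms, n_clients,
                           channels_per_client, iperf_mb):
    """Build one experiment's configuration dict from a few inputs,
    instead of hand-editing well over a hundred individual settings."""
    assert len(server_rtts_ms) == 14, "one RTT per world server"
    return {
        "link": link_params,  # the seven link-simulation parameters live here
        "servers": [{"id": i, "rtt_ms": rtt}
                    for i, rtt in enumerate(server_rtts_ms)],
        "clients": [{"id": i, "channels": channels_per_client, "timeout_s": 30}
                    for i in range(n_clients)],
        "iperf_download_mb": iperf_mb,
    }

cfg = make_experiment_config(
    {"rate_mbps": 16, "queue_bytes": 100_000}, [500] * 14, 94, 20, 8)
```

One typo-prone manual step becomes one reviewed function call per run.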
Also, an underperforming satellite link could simply be a matter of bad link configuration rather than a fundamental problem with TCP congestion control. It would be all too easy to take a particular combination of link capacity and queue capacity to demonstrate an effect without asking what influence these parameters have on the effect. This is why we’re performing sweeps – when it comes to comparing the performance of different technologies, we want to ensure that we are putting our best foot forward.
So what’s the best queue capacity for a given link capacity? You may remember the old formula for sizing router queues: RTT * bandwidth. However, there’s also Guido Appenzeller’s PhD thesis from Stanford, in which he recommends dividing this figure by the square root of the number of long-lived parallel flows.
This presents us with a problem: We can have hundreds of parallel flows in the scenarios we’re looking at. However, how many of those will qualify as long-lived depends to no small extent on the queue capacity at the token bucket filter!
For example, take the 16 Mbps link with 20 client channels we’ve already looked at. At 16 Mbps (= 2 MB/s) and 500 ms RTT, the old formula suggests 1 MB of queue capacity. We fairly consistently see 17-18 parallel flows (not necessarily long-lived ones, though) regardless of queue capacity. Assuming, extremely naively, that all of these flows qualify as long-lived (we know they don’t), Guido’s results suggest dividing the 1 MB by a factor of around 4, which happens to be a little larger than the 100 kB queue we’ve deployed here. But how do we know whether this is the best queue capacity to choose?
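The two rules of thumb are easy to put side by side; the numbers below match the 16 Mbps example:

```python
from math import sqrt

def bdp_queue_bytes(rtt_s, rate_bytes_per_s):
    """Classic rule of thumb: queue capacity = RTT * bandwidth."""
    return rtt_s * rate_bytes_per_s

def appenzeller_queue_bytes(rtt_s, rate_bytes_per_s, n_long_lived_flows):
    """Appenzeller's refinement: divide the BDP figure by sqrt(N)."""
    return bdp_queue_bytes(rtt_s, rate_bytes_per_s) / sqrt(n_long_lived_flows)

# 16 Mbps (2 MB/s) link at 500 ms RTT:
bdp = bdp_queue_bytes(0.5, 2_000_000)            # 1 MB by the old formula
q = appenzeller_queue_bytes(0.5, 2_000_000, 17)  # ~243 kB if all 17 flows count
```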
A real Internet satellite link generally doesn’t just see a constant load. So how do we know which queue capacity works best under a range of loads?
The only way to get a proper answer is to try feasible combinations of load levels and queue capacities. Which poses the next question: What exactly do we mean by “works best”?
Looking at the iperf download, increasing the queue size at 20 client channels always improves the download time. This would suggest dumping Guido’s insights in favour of the traditional value. Not so fast: Remember those standing queues in Figure 3? At around 20 ms extra delay, they seemed tolerable. Just going to a 200kB queue bumps these up to 80 ms, though, and they’re a lot more common, too. Anyone keen to annoy VoIP users for the sake of a download that could be three times faster? Maybe, maybe not. We’re clearly getting into compromise territory here, but around 100kB-200kB seems to be in the right ballpark.
So how do we zero in on a feasible range? Well, in the case of the 16 Mbps link, we looked at (“swept”) eleven potential queue capacities between 30 kB and 800 kB. For each capacity, we swept up to nine load levels between 10 and 600 client channels. That’s many dozens of combinations, each of which takes around 20 minutes to simulate, plus whatever time we then take for subsequent manual inspection. Multiply this by the number of possible link bandwidths of interest in GEO and MEO configurations, plus repeats for experiments with quality control issues, and we’ve got our work cut out. Only then can we get to coding and PEPs.
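The bookkeeping behind that estimate is simple; the grid below mirrors the 16 Mbps sweep (eleven queue capacities, nine load levels — the intermediate values here are illustrative):

```python
import itertools

# One experiment per (queue capacity, load) combination, ~20 minutes each.
queue_caps_kb = [30, 50, 80, 100, 130, 170, 220, 300, 400, 600, 800]
client_channels = [10, 20, 40, 80, 120, 200, 300, 450, 600]

runs = list(itertools.product(queue_caps_kb, client_channels))
hours_per_link_config = len(runs) * 20 / 60  # simulation time, per link setup
```

That is 99 runs and 33 hours of simulation time for a single link bandwidth, before any manual inspection or repeats.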
There’s still a lot to do. If the question on your mind starts with “Have you thought of…” or “Have you considered…”, the answer is possibly yes. Here are a few of the challenges ahead:
Network coding (TCP/NC): We’ve already got the encoder and decoder ready, and once the sweeps are complete and we have identified the parameter combinations that represent the best compromises, we’ll collect performance data here. Again, this will probably take a few sweeps of possible generation sizes and overhead settings.
Performance-enhancing proxies (PEP): We’ve identified two “free” PEPs, PEPSal and TCPEP, which we want to use both in comparison and – eventually – in combination with network coding.
UDP and similar protocols without congestion control. In our netflow samples, UDP traffic accounts for around 12% of bytes. How will TCP behave in the presence of UDP in our various scenarios? How do we best simulate UDP traffic given that we know observed throughput, but can only control offered load? In principle, we could model UDP as a bandwidth constraint, but under heavy TCP load, we get to drop UDP packets as well, so it’s a constraint that’s a little flexible, too. What impact does this have on parameters such as queue capacities, generation sizes etc.?
Most real links are asymmetric, i.e., the inbound bandwidth is a lot larger than the outbound bandwidth. So far, we have neglected this, as our data suggests that the outbound channels tend to have a comparatively generous share of the total bandwidth.
Simulating world latencies. At this point, we’re using a crude set of delays on our 14 “world servers”. We haven’t even added jitter. What if we did? What if we replaced our current crude model of “satgate in Hawaii” with a “satgate in X” model, where the latencies from the satgate in X to the servers would be distributed differently?
The Seed Alliance members – FIRE Africa (AFRINIC), FRIDA Program (LACNIC) and ISIF Asia (APNIC) – will be present at the 2016 Internet Governance Forum, which will take place in Guadalajara, Mexico, on 6-9 December.
During the IGF, the Seed Alliance will organize two workshops, one on cybersecurity and one on innovation and entrepreneurship, hold the Seed Alliance Awards Ceremony, and offer an opportunity to interact with grantees and Award Winners at the Seed Alliance booth in Guadalajara’s Palace of Culture and Communication, home of the Internet Governance Forum.
On Tuesday 6 December, the Seed Alliance will hold its first workshop of the week, which will focus on cybersecurity initiatives developed in and by the Global South. The session will be moderated by Carlos Martínez, LACNIC CTO, and will include noted speakers, all of them cybersecurity experts, including ISOC’s Olaf Kolkman. This workshop will explore how developing economies are working to address cybersecurity issues, highlighting successful initiatives in their corresponding regions (https://www.intgovforum.org/cms/igf2016/index.php/proposal/view_public/26).
In this sense, it is worth noting that this year the Seed Alliance included a specific category, funded by the Internet Society, which provided financial support to initiatives seeking to improve Internet security in the region: Protecting the TOR Network against Malicious Traffic in Brazil, BGP Security by RENATA (Colombia’s National Advanced Technology Academic Network) and Developing Tonga National CERT.
Prepared by Campinas State University (Brazil), the project for Protecting the TOR Network against Malicious Traffic seeks to implement a solution to the growing malicious code traffic operating over this network.
BGP Security by RENATA (Colombia’s National Advanced Technology Academic Network) involves implementing origin validation for BGP routes in RENATA’s network backbone.
In the case of the Tonga CERT, the project, led by the Ministry of Meteorology, Energy, Environment, Climate Change, Information, Communication, Disaster Management (MEIDECC), will work on creating the first national CERT in the Pacific region.
Award Winners 2016
On Tuesday 6 December, the Seed Alliance members will also present the 2016 Awards, recognizing innovative initiatives and practices that have contributed to the region’s social and economic development. These include:
AgriNeTT by the University of the West Indies (Trinidad and Tobago);
Restoring Connectivity: Movable and Deployable Resource ICT Unit (MDRU) by CVISNET Foundation (The Philippines);
Towards A Fairer Electoral System: 1 Person, 1 Vote, 1 Value by Tindak (Malaysia);
All Girls Tech Camp by Give1ProjectGambia (The Gambia);
Kids Comp Camp (Kenya); and
Tobetsa and WiFi TV Extension Project (South Africa).
To conclude, on Friday 9 December, FIRE, FRIDA and ISIF Asia will hold a second workshop, on entrepreneurship and innovation in the Global South. This workshop will analyze the challenges innovators and entrepreneurs face in developing countries and attempt to identify opportunities for Internet innovation in the countries of the Global South. https://www.intgovforum.org/cms/igf2016/index.php/proposal/view_public/212
Finally, a Seed Alliance booth will be set up at the IGF Village, where FIRE, FRIDA and ISIF Asia Award winners and cybersecurity grant recipients will be available to share their experiences with Forum participants.
Tindak Malaysia is the winner of the ISIF Asia 2016 Technical Innovation Award and the Community Choice Award 2016.
TINDAK MALAYSIA: Towards A Fairer Electoral System –
1 Person, 1 Vote, 1 Value
A democracy is reflected in the sovereignty of the people. They are supposed to have the power to choose their leaders under Free and Fair Elections. Unfortunately, those in power will try to manipulate the electoral system to entrench their grip on power. Attempts to manipulate the system could be…
in tweaking the rules of elections in their favour,
in the control of the mainstream media,
through pollsters to manipulate public perception,
during the vote count,
by making election campaigns so expensive that only the rich or powerful could afford to run or win, or
through boundary delineation, either by gerrymandering or through unequal seat sizes.
The Nov 2016 US Presidential Election threw up all of the above in sharp contrast. There were two front runners, Donald Trump and Hillary Clinton.
Both candidates were disliked by more than half the electorate.
Both candidates generated such strong aversion that a dominant campaign theme was to vote for the lesser evil. The people were caught in the politics-of-no-choice.
Eventually, the winning candidate won with slightly fewer votes (by 0.3%) than the losing candidate, each winning only 27% of the electorate. Yet the winner took 306 electoral votes (57%) while the loser got 232 (43%), a huge difference!
The winning candidate won with barely a quarter of the total voting population. 43% of the voters did not vote. In other words, only 27% of the electorate decided on the President.
Consider Malaysia. We are located in South-east Asia. We have a population of 31 million with about 13.5 million registered voters. We practise a First-Past-The-Post System of elections, meaning the winner takes all, just like in the US.
In the 2013 General Elections, the Ruling Party obtained 47.4% of the votes and 60% of the seats. Meanwhile the opposition, with 52% of the votes, won only 40% of the seats – more votes, but much fewer seats.
We had all the problems listed above except that no opinion polls were allowed on polling day. But the most egregious problem of all was boundary delimitation, which is the subject of our project.
In 2013, the Ruling Party, with 47.4% of the popular vote, secured 60% of the seats. To hang on to power, they resorted to abuse and to changing the laws to suppress the Opposition and the people. Our concern was that continued oppression of the people in this manner could lead to violent protests. It was our hope to achieve peaceful change in a democratic manner through the Constitution.
From a Problem Tree Analysis, it was found that the problem was cyclic in nature: the root cause was a Fascist Government maintaining power through Fraudulent Elections (see the red box in the Problem Tree Analysis figure).
Malapportionment! The seats won by the Ruling Party in the chart below are the blue lines, with small numbers of voters in the rural seats. The red lines, with huge numbers, are the urban seats won by the Opposition. It was found that the Ruling Party could have won 50% of the seats with merely 20.22% of the votes. (Figure: Malapportionment in General Elections – GE13)
The above computation was based on popular vote. If based on total voting population, BN needed only 17.4% to secure a simple majority.
What is the solution we propose?
The solution was obvious. Equalize the seats.
But for the past 50 years, no one seemed to object to the unfair maps.
Why? The objectors never managed to submit a substantive objection because:
A biased EC stacked with Ruling Party cronies, who actively worked to prevent any objections being made,
Constitutional rules of delimitation drafted to make objections difficult, such that the EC had a lot of leeway to interpret them any way it wished,
Very high barriers to objection, and
Insufficient information offered during a Redelineation exercise. Given the 1-month deadline, it was impossible for an ordinary voter to prepare a proper objection.
We start with a Polling District (PD), the smallest unit of area in a Constituency. It is defined by a boundary, a name and/or ID code, and its elector population. Map 1 is an example of a PD. To avoid clutter, the elector numbers are carried in a separate layer which can be overlaid on top.
Districting is conducted by assembling these PDs into Constituencies. In theory, the Constituencies are supposed to have roughly the same number of electors, unless variation is permitted by the Constitution.
The EC’s proposed map was gazetted on 15th Sept 2016 for public objections. No Polling Districts are identified on it. In reality, the EC had all the information in digital format under an Electoral Geographical Information System (EGIS), but kept it from the public.
An elector faced with such a map is stuck. He would not know where to begin, nor would he have the technical knowledge to carry out the redistricting even if he wanted to, all within the time limit of 1 month.
This has been the case for the past 50 years. No one could object effectively.
So we had a situation where electors wanted to object but were unable to do so because of insufficient information and lack of expertise.
Studying the problem, we decided that the solution was to bridge the Digital Divide through Technical Innovation as well as to bring the matter out of the jurisdiction of the EC.
Digitize all the PDs in Malaysia, about 8,000 of them. This took us 1 year.
Learn how to redistrict using digital systems. We used QGIS, an open-source GIS system.
Develop a plug-in to semi-automate and speed up the redistricting process.
Bring in legal expertise. Collaborate with lawyers to bring the matter out of the control of the EC and into the jurisdiction of the courts in order to defend the Constitution.
We started this initiative in July 2011 and by Dec 2015, we had digitised all the PD and redistricted the whole country twice, sharpening our expertise and correcting errors in the process. We got the Bar Council (Lawyers Association) to team up with us to guide the public on how to object when the Redelineation exercise by the EC is launched.
Redelineation, 1st Gazette:
On 15th Sept 2016, the EC published the First Gazette of the Redelineation Proposal. For the State of Selangor, with 22 Parliamentary seats, they published one map only – MAP 2. We analysed their proposal and found glaring disparities in seat sizes, with elector populations ranging from 39% to 200% of the State Electoral Quota (EQ) – MAP 3.
At a more detailed level, it looks like MAP 4 below. We can see the densely populated central belt (brown columns) sticking out in sharp contrast to the under-populated outlying regions around the perimeter (ochre areas). Clearly the EC has not addressed the inequalities in voting strength among the various regions.
Trial Run: We conducted a trial run on the EC maps for a local council in Selangor – MPSJ (see MAP 4). We found that we could maintain local ties with 6 State and 2 Parliamentary Constituencies, with the elector population kept within +/-20% of the mean. This was much better than the EC’s range of -60% to +100%.
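The arithmetic behind those percentages, deviation of each seat from the Electoral Quota, is straightforward (the elector counts below are invented for illustration):

```python
def quota_deviation(electors_per_seat, n_seats):
    """Deviation of each seat's elector count from the Electoral Quota
    (total electors / number of seats), as a signed percentage."""
    quota = sum(electors_per_seat) / n_seats
    return [100 * (e - quota) / quota for e in electors_per_seat]

# A well-apportioned 4-seat plan stays within +/-20% of the quota, unlike
# the EC plan described above, which ranged from -60% to +100%:
devs = quota_deviation([45_000, 52_000, 48_000, 55_000], n_seats=4)
```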
We have submitted objections for the First Gazette and await the call for a public hearing by the EC. Our lawyers are monitoring the EC to ensure they comply with the Constitution and preparing lawsuits in case they don’t.
While conducting our research on how to object, we uncovered yet another area of abuse. The boundaries of polling districts, and the electors within them, had been shifted to other constituencies unannounced. This was a surreptitious form of redelineation outside the ambit of the Constitution and a gross abuse of authority. As part of our next project, we intend to focus on preventing such gerrymandering.
In conclusion, we feel like we are peeling an onion. As we peel back one layer, a new layer of fraud is exposed. It is a never-ending process. But we are determined to keep on digging until we reach the core and achieve our goal of Free and Fair Elections.
Yesterday, the APNIC blog published an article written by Asanka Sayakkara, Assistant Lecturer at the University of Colombo School of Computing (UCSC), about Internet of Things (IoT) solutions to the problems that emerge from human-elephant interaction.
From ISIF Asia’s perspective, it is really great to see one of the organizations that received one of our first grants continue to work on innovative solutions that use Internet technologies to address development problems. Kasun de Zoysa from UCSC worked on a virtual IPv6 application test bed back in 2010.
Asanka’s article, as published on the APNIC blog, is below, and information about Kasun’s work is linked there. Hope you enjoy!
IoT solutions to help reduce human-elephant conflict in Sri Lanka
Human-elephant conflict is a very serious and destructive problem in rural Sri Lanka.
Each year, around 70 people are killed by elephants who wander into villages and farms in search of food; and nearly four times as many elephants are killed as a result. Elephants wandering into farmland also damage crops.
Presenting at the Internet of Things (IoT) tutorial at the recent APNIC 42 conference held in Colombo, Sri Lanka, Dr Kasun de Zoysa from the University of Colombo’s School of Computing shared with attendees examples of how his team, in collaboration with Sweden’s Uppsala University, are employing simple IoT solutions to protect crops and both human and elephant lives.
“Different people have approached this problem in different ways: biologists and animal conservationists are trying their best to protect local habitats, and the government and villagers have built kilometres of electric fencing around their villages and farms,” says Kasun.
“Our approach seeks to complement these efforts by incorporating sensing and data processing technology.”
Such technologies include making electric fences smarter and improving elephant warning systems.
Smarter electric fences
Electric fencing is a common solution used to protect villagers from elephants, particularly farmlands bordering the jungle.
However, Kasun says elephants have learnt how to avoid electric fences and discovered ways to break them, making the practice less reliable.
Once broken, it takes significant human effort to find the location of the breakage, walking along several kilometres of fence wire under the threat of nearby wild elephants.
To overcome this, Kasun’s team have developed a cost-effective electric fence, with small IoT nodes placed along the wire that can communicate with each other using the same wire as the communication medium.
“Their packets are encoded into the high-voltage electric pulses in a way that enables us to identify which node is disconnected from the network,” says Kasun. “When a node is disconnected from the network (part of the fence is broken) we can send alerts to maintenance crews with the exact location of the breakage.”
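In a simplified single-break model (this sketch is ours, not the actual node firmware), the break must lie on the wire segment between the last node that still answers and the first that does not:

```python
def locate_break(node_positions, responding):
    """Given fence nodes as (node_id, metres_from_gate) in wire order, and the
    set of node IDs still reachable from the gate end, return the (start, end)
    wire segment that must contain the break. Assumes a single break."""
    for i, (node_id, pos) in enumerate(node_positions):
        if node_id not in responding:
            prev_pos = node_positions[i - 1][1] if i > 0 else 0
            return (prev_pos, pos)
    return None  # every node answered: no break detected

# Hypothetical 2 km fence with nodes every 500 m; n3 and n4 stop answering:
nodes = [("n1", 500), ("n2", 1000), ("n3", 1500), ("n4", 2000)]
segment = locate_break(nodes, responding={"n1", "n2"})
```

The maintenance crew can then be sent to the 500 m segment reported, rather than walking the whole fence.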
Infrasonic elephant localization system
Kasun says that although this new system will help with alerting villagers to potential elephant intrusions, it is not by itself a sustainable solution to protect people’s lives.
“This is where our second approach comes in,” says Kasun. “We have been testing an infrasonic localization system to locate elephants.”
“Elephants emit infrasonic calls (low-frequency sounds) which travel further than audible frequencies. The system we are working on can accurately locate elephants in the area and alert people via various means including SMS alerts and social media.”
Kasun says that both the infrasonic elephant localization system and the smart electric fence are still in experimental stages; however, they plan to launch a pilot program in the coming months to evaluate their effectiveness.
“Success of this pilot deployment will provide us with the valuable information we need to complete this work and produce a cost-effective, open-source product that anybody can build.”
The road to recovery for a Tuberculosis (TB) patient in Cambodia can be long and arduous. For three months, 65-year-old Mr. Nou Pov suffered from coughs, fatigue and night sweats, all of which are symptoms of TB, without being able to obtain an accurate diagnosis. “I became weak. I could not work, so I just sat at home,” recalled Nou. Only when an Operation ASHA staff member stopped by his home while screening for TB door-to-door and recognized his symptoms was Nou brought to a local health center for testing. Upon being diagnosed with TB, he finally began his 6-month long treatment course.
Nou was lucky to ultimately obtain a diagnosis, but an estimated 36% of TB cases, or 21,060 individuals, remain undetected in Cambodia, according to the Cambodia Ministry of Health. A socioeconomically disadvantaged patient in Cambodia, like Nou, faces many barriers when it comes to TB detection, such as a lack of awareness about TB and healthcare resources, a lack of access to knowledgeable and trained staff, and a lack of means to travel to health centers.
Operation ASHA seeks to minimize the barriers that TB patients face in seeking timely TB care. In 2014, we implemented a new technological solution called eDetection, an app designed to strengthen TB case finding and contact tracing. eDetection uses GPS mapping to help Operation ASHA field staff locate areas with potential TB suspects. Field staff then travel door-to-door through these regions, prompting individuals to answer TB screening questionnaires that have been programmed into the app in accordance with WHO guidelines. By bringing TB care directly to the doorsteps of the underserved, Operation ASHA hoped to minimize the challenges that keep disadvantaged patients from receiving care. Based on an in-built algorithm, the app prompts the field staff to follow up on certain individuals whose responses suggest that they may have TB. Paper-based monitoring, the standard method of keeping track of patient screening, is often compromised by human error, resulting in patients being lost to follow-up. With our app, when a patient is not followed up with, the system generates an alert to the field team, ensuring that each patient gets the right care at the right time.
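A toy version of such screening logic might look like the following. The symptom list and threshold here are illustrative only, not OpASHA's actual algorithm or WHO's exact criteria:

```python
# Flag an individual for follow-up when their questionnaire answers match
# enough TB symptoms. Symptom names and the threshold are hypothetical.
SYMPTOMS = ["cough_over_2_weeks", "fever", "night_sweats", "weight_loss"]

def needs_followup(answers, threshold=2):
    """Return True when at least `threshold` symptoms are reported."""
    score = sum(1 for s in SYMPTOMS if answers.get(s))
    return score >= threshold

flag = needs_followup({"cough_over_2_weeks": True, "night_sweats": True})
```

Encoding the rule in software, rather than leaving it to each field worker's judgment on paper, is what makes the automatic follow-up alerts possible.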
With backing from ISIF Asia, Operation ASHA launched a small-scale eDetection pilot in Prey Kabas Operational District, Takeo Province, in 2014 (download report here). Although our concept was simple, bringing the app technology into rural Cambodia proved to be very difficult. “Most field staff do not begin with any previous experience with using tablet technology and require much training,” said Ms. Sinoth Lay, a Team Supervisor responsible for overseeing the activities of Operation ASHA’s field staff. “None of them had even used smartphones before.” Many rural areas also lacked reliable 3G access, resulting in inconsistent connections to the central server system that sometimes hindered the field team’s efficiency. Additionally, the vast majority of patients lacked prior exposure to technology, and many were initially hesitant to share personal health information with the field staff until they became more familiar with OpASHA’s work.
Despite the initial challenges of implementation, eDetection proved to be a valuable asset for TB screening and detection based on early pilot results. In one year, Operation ASHA managed to screen over 17,000 individuals for TB in Prey Kabas OD, of which 406 people tested positive for TB and were enrolled for treatment. Areas in which the technology was used resulted in 10% more patients being screened and 16% more patients being sent to health centers for diagnosis over areas in which paper-based monitoring systems were used. Most of the field staff also viewed the app positively, praising it for increasing data authenticity in the field. Although still in its early stages, eDetection shows great potential in being both easily scalable and financially feasible. Combined with Operation ASHA’s door-to-door TB care delivery model, it holds much promise for providing high-quality, low-cost care to TB patients across Cambodia.
By Robert Mitchell, APNIC
With nominations for the ISIF Asia Awards 2016 now open, we thought we’d check back with some of our previous award winners to understand how the award benefitted their projects and get some advice on what to include in your nominations.
Khairil Yusof is the cofounder and coordinator of the Sinar Project, which received an ISIF Asia Grant in 2013 in recognition of their work using open source technology and applications to systematically make important information public and more accessible to the Malaysian people.
Established in 2011, the Sinar Project aims to improve governance and encourage greater citizen involvement in the public affairs of the nation by making the Malaysian government more open, transparent and accountable.
What are the benefits of these kinds of Grants/Awards?
Here’s what Khairil had to say about ISIF Asia’s Grants and Awards:
These awards and grants recognize the difficult and highly technical work that a few civil society organizations do, which is often not understood or appreciated by other traditional awards or grants (for Rights) programs.
Also, being invited to an award ceremony at large event such as the Internet Governance Forum (IGF), provides you with lots of exposure in an environment where you can meet potential partners and donors that understand your work.
What were three key outcomes that the ISIF Asia Grant allowed you to achieve?
The money from the Grant helped our part-time/volunteer effort to register as a proper organization.
It also helped one of our founding members to work full time on funding applications.
Attending the IGF in Turkey provided us with the opportunity to speak with potential donors, which eventually led to initial funding for the establishment of Malaysia’s first fledgling civic tech NGO, and allowed us to continue our work full time.
How has your project progressed after receiving the Grant?
The opportunity to showcase our work to donors led to further funding, which helped us consolidate government data under open standards and, in turn, provide open data via REST APIs.
Other achievements include:
Powering Malaysia’s Open Parliament efforts [1,2] and the same in Myanmar [1, 2, 3]
Uncovering corruption and promoting transparency [1, 2, 3, 4, 5]
Starting a Digital Rights initiative backed by a team with technical capacity, and funded by Access. We are now building partnerships with the TOR Project to collect and report on network interference data and build Computer Emergency Response Team (CERT) like alerts for digital rights incidents. We are also providing policy input on Internet and digital rights issues such as trade agreements
What should nominees include in their applications?
Don’t be shy with sharing your methodology and the insights you’ve learned along the way, even if you might think it is trivial. If you’re a very technical team, run your methodology by non-technical friends or family members to get their insights. What you think is mundane, might be inspiring to others.
Review all the outputs you have done; blogs, reports, software, photos, etc. If you’ve been passionately working on your ideas and project, you will be surprised at how much you have achieved. List the highlights in your proposal and reference the other outputs in an appendix or link.
Set up Google Alerts for mentions of and links to your project. It might feel a bit narcissistic, but again, you might be surprised at who is referencing or mentioning your project internationally, or who has been inspired by your work.
The first CERT in the Pacific, a Peering Strategy for the Pacific, and a mobile app reader to access books in Thailand’s Karen dialects are just some of the initiatives that will receive funding.
This year ISIF Asia will award its largest ever grants pool, across four categories, to support research and development of Internet technologies for the benefit of the Asia Pacific.
APNIC Internet Operations Research Grants
Around AUD 115,000 was awarded to support the following projects:
Realistic simulation of uncoded, coded and proxied Internet satellite links with a flexible hardware-based simulator. The University of Auckland, New Zealand. The main focus of this research is to establish a realistic satellite simulation of UDP flows. It also automates experiments run on uncoded and coded configurations. The project builds upon a 2014 ISIF Asia grant to improve connectivity in the Pacific islands (see report).
Rapid detection of BGP anomalies. Centre for Advanced Internet Architectures (CAIA), Swinburne University of Technology, Australia. This research focuses on producing techniques for the real-time detection of different types of BGP anomalies that operators can use. The evaluation of this tool will be carried out on a controlled testbed, using the BGP Replay Tool (BRT) to emulate past BGP events.
A Peering Strategy for the Pacific Islands. Telco2 Limited, New Zealand. This research continues and expands a set of latency measurements to Pacific Island telecommunications providers from various locations around the world. Evaluated in conjunction with submarine cable availability, these measurements can be used to derive a metric for transit efficiency, to be considered alongside the economic impact of efficient transit. The measurements will be made available in real time via a web interface to help operators, regulators, and funders understand the physical routing of network traffic, the availability of content, and the benefits of peering to improve the availability, reachability and security of the Internet in the Asia Pacific region.
Internet Society Cybersecurity Grant
With support from the Internet Society, one grant of AUD 56,000 was allocated for this category, plus additional Monitoring, Evaluation and Communications support valued at AUD 2,500 and a travel grant to participate in the Internet Governance Forum in Guadalajara, Mexico, where the grantee will be one of the speakers at the workshop “Cybersecurity – Initiatives in and by the Global South“.
Developing Tonga National CERT. Department of Information & ICT under the Ministry of Meteorology, Energy, Environment, Climate Change, Information, Communication, Disaster Management (MEIDECC), Tonga. The Tonga Computer Emergency Response Team (CERT), launched recently, is the first national CERT in the Pacific region. Tonga CERT was launched with the long-term goal of expanding its services to the greater Pacific once fully operational. Tonga CERT will conduct incident handling, perform vulnerability handling, and provide security consultation and advice. Read more from Andrew Toimoana, Director of MEIDECC, Tonga.
Community Impact Grant
The AUD 50,000 Community Impact Grant was awarded to:
Equal Access to the Information Society in Myanmar, the Myanmar Book Aid and Preservation Foundation, Myanmar. This project focuses on women and youth, and benefits 500 people through 20 libraries across the country. The curriculum, developed specifically for Myanmar, focuses on critical thinking in a digital environment of smartphones and tablets. It develops the skills of young female leaders by providing them with specialized information technology training, leadership and job skills, and opportunities to engage in critical public discussion. Myanmar Book Aid and Preservation Foundation will also participate in a three-week mentoring program in Singapore, facilitated by JFDI.Asia, valued at AUD 25,000 plus expenses during their stay.
Technical Innovation Grants
Just over AUD 195,000 was allocated to support five projects under the Technical Innovation category.
Khushi Baby, India. This project improves digital medical records for mothers and children by streamlining data collection, improving decision making in the field, aiding in district resource management, and delivering effective dialect-specific voice call reminders to mothers. Khushi Baby will also participate in a three-week mentoring program in Singapore, facilitated by JFDI.Asia, valued at AUD 25,000 plus expenses during their stay.
Four small technical innovation grants of up to AUD 30,000 were awarded to:
My Community Reader: a Mobile-First Distributed Translation Tool and Reader for Ethnic Minority Languages. The Asia Foundation, Thailand. This project will build, test, and deploy a tool to translate texts into minority languages, significantly expanding the available online library of digital and printable mother-tongue children’s books. It will also deliver a mobile app so people can search the library and download titles to local Android devices.
UAV-Aided Resilient Communications for Post Disaster Applications: Demonstrations and Proofs of Concept. Ateneo de Manila University, Philippines. This project will design and demonstrate UAV-borne radio payloads as critical network nodes in a resilient, delay-tolerant post-disaster communications system, using both multi-rotor and fixed-wing platforms with long-range radio payloads to demonstrate the concept. The UAVs will act as data aggregators and wireless store-and-forward relays, collecting important information and providing connectivity to evacuation centers, ground teams and concerned agencies. Data can be gathered from multiple sources below and delivered to another ground team or to a central station, and the wireless link can also be used to broadcast messages to the ground nodes. Relayed information can include survivor profiles, food supply audits, medicine requests, and images of victims. The system will be used to assist response team coordination, hasten rescue efforts, and deliver timely updates, among other uses.
Legalese. Legalese Pte. Ltd. Singapore. This is a web application that will enable the growing Asian population of first-time entrepreneurs and first-time investors to transact seed-stage financing with confidence and without expensive legal fees. The app educates end users about entrepreneurial finance, facilitates choosing and configuring investment agreements, manages signatures through to completion, and develops libraries of contract templates for Asian languages and jurisdictions.
Deployment of Collaborative Modern HoneyNet to improve Regional Cybersecurity Landscape (CMoHN). Institute of Systems Engineering, Riphah International University, Pakistan. The project will deploy honeynets and establish the core skills required to manage and integrate them, and design new honeypots for countering cyber-attacks. The project will connect with other honeynets in the region to form a regional collaborative honeynet network, and promote R&D activities to secure network infrastructure through publications and community awareness seminars.
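At its simplest, a honeypot is just a listener that records who connects. The sketch below is a minimal, illustrative connection logger, not the project's actual deployment; real honeynets emulate services to keep attackers engaged and aggregate events centrally for analysis.

```python
import datetime
import socket

def open_honeypot(host="127.0.0.1", port=0):
    """Bind a listening socket; port=0 lets the OS pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    return srv

def log_connections(srv, max_conns, events):
    """Accept up to max_conns connections, recording each peer.

    No banner is sent and no data is read: the point is simply to
    observe who knocks on a port that no legitimate client should
    ever touch, so every hit is suspicious by definition.
    """
    for _ in range(max_conns):
        conn, addr = srv.accept()
        events.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "peer": addr[0],
        })
        conn.close()
```

Forwarding such event records to a shared collector is, in miniature, what a collaborative honeynet does across institutions.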
Back in 2011, APNIC and LACNIC were interested in joining efforts to strengthen their regional programs for Internet development. Both ISIF Asia and FRIDA had supported many projects since they were established and had many stories to tell. Although they operated in different ways, there were several areas where collaboration was possible. As they discussed the benefits and challenges of a collaborative partnership, AFRINIC was also considering establishing its own program, so an idea started to take shape.
APNIC and LACNIC approached their main donor, IDRC, to explore possibilities to support such a partnership. A whole year of negotiations, planning and strategizing followed, to align the objectives of these three Regional Internet Registries, operating in very diverse regions but with a common interest in giving back to their communities, with those of IDRC. During the IGF 2011 in Nairobi (Kenya), a meeting was held to discuss a final draft proposal document, cementing the idea of establishing a partnership to support valuable research and development initiatives that showcased innovation and technical knowledge, through Internet technologies, for social and economic development. The Alliance for Internet Development and Digital Innovation was born.
The Seed Alliance started operating with contributions from all three RIRs, generous support from IDRC, and contributions from regional sponsors. The initiative attracted the interest of other players looking for a way to talk about innovation, scale and growth on the Internet from a regional perspective, to support social and economic development: to use technology for good, not necessarily for profit. A year later, the Seed Alliance hosted its first awards ceremony at the IGF 2012 in Baku (Azerbaijan). By then, Sida had joined the alliance as a new funding partner, and thanks to their generous support the Seed Alliance started a three-year program cycle that concluded last year at the IGF 2015 in Joao Pessoa (Brazil).
This report, published on the Seed Alliance website, offers an overview of the Seed Alliance’s work completed under the three-year program cycle 2012-2015, funded by Sida and the International Development Research Centre (IDRC), which supported a total of 116 projects across 57 economies with around US$ 2.2 million of funding in grants and awards throughout Africa, Asia Pacific, and Latin America, helping to strengthen and promote the Information Society within these regions.
From 2012-2015, ISIF Asia supported 44 projects across 22 economies in the Asia Pacific region: 22 grants and 22 award winners. Besides direct funding for their projects, ISIF Asia recipients received many mentoring and networking opportunities that increased their knowledge, expanded their network of contacts and provided visibility for their work in a very competitive environment. Our lessons learned, recommendations and challenges are included in the report. As APNIC provided secretariat support to coordinate this three-year cycle, we learned a lot about partnerships, about the ingenuity and innovative approaches that are born and bred in our region, and about the challenges that the organizations we support face. It is an incredibly lucky position to be in: able to support ideas as they grow. We continue to do so!
We invite you to download the report and explore the Seed Alliance website. More information about the report can be found here and the report can be downloaded here.
Internet Niue will forever be remembered for being the first WiFi country. Its free WiFi initiative was a bold move, especially on a small remote island in the South Pacific.
Back in the late 1990s, IUSN (Internet Users Society of Niue), a charitable organisation, applied for and was later delegated management of Niue’s .nu ccTLD (Country Code Top Level Domain) by IANA. As part of its goodwill offer, IUSN set out to provide free Internet access through an initiative called Internet Niue. It began its limited services with dial-up, and by 2003 it had started testing WiFi in downtown Alofi.
On 5 January 2004, Category 5 Cyclone Heta struck Niue with a force that ravaged the tiny island. Part of the capital was completely wiped out by waves that rose over the 20m upraised coral cliffs. As a result of this devastation, we had to rebuild our network infrastructure, but with a better understanding of the forces of nature and the environment our wireless network had to withstand.
We worked with local organisations known as Village Councils (VC) and used their meeting halls as sites for our access points. We also partnered with some private sector businesses and home owners to extend the distribution of WiFi across the narrow villages that follow the main road. There are no mountains or hills, so we were able to use existing towers to install our major backhaul wireless links. Initially we used empty cat food cans to build our antennas, and these worked well. But advancements in design and technology, along with falling equipment prices, have allowed us to extend further. We now cover 13 of the 14 villages on the island of Niue.
A lot has changed since our first trial links back in 2003, but the vision has remained the same: to provide WiFi to the local communities. For a long period the island was able to enjoy free Internet, but as time passed we had to adapt the way we operated to cope with changes occurring in the domain name (TLD) world, especially the arrival of new gTLDs (Generic Top Level Domains). Our funding depends on sales of .nu domain names, and for several years we had the luxury of offering free services. The main problem with the free WiFi setup was that, as the user base grew, the service degraded. A change to the system was needed if we were going to survive.
By the beginning of 2016, plans were activated that allowed us to upgrade our satellite bandwidth with assistance from Speedcast. We started the new venture of charging people and built a system to become a commercial ISP, Kaniu (www.kaniu.nu). We still receive subsidised funding for the satellite bandwidth from IUSN, but we have had to charge our users a fee of $50 per month for unlimited access to cover local operations. The uptake has been promising and we aim to continue offering more bandwidth to our users.
But when implementing these changes, the Government of Niue felt that we had violated some of Niue’s telecommunications laws and regulations and requested that we cease services. We adhered to that directive, even though we believed we had not broken any laws or regulations, and gave notification to our 600+ users as we turned off all our services in March 2016. Users who benefited from the Internet access voiced their concerns, and later that same evening we received authorisation to resume our services, much to the delight of our users. We have continued to meet with the government to discuss their concerns and requirements, as we intend to maintain our operations in Niue’s small but developing market.
We have invested a lot of effort and resources so we will continue to do what we do best.
In 2011, Internet Niue won the ISIF Award for Localisation and Capacity Building. I was invited to Nairobi, Kenya to the IGF (Internet Governance Forum) to receive the Award. It was an amazing experience to meet other award winners and share with them, but there were far greater benefits that grew organically from it.
Personally, I was able to leverage the opportunity of winning the award to participate in and contribute to the regional PICISOC, the Internet Society, ICANN (APRALO), as well as the Pacific IGF and New Zealand NetHui. It has been an exciting journey, but more so the recognition for the work of Internet Niue and Rocket Systems, both on the island and internationally. It helped grow my professional network and enabled my participation and exchange of ideas around the biggest issue in the Pacific Islands, especially for rural and remote locations: connectivity. We have taken up the opportunity with Kacific’s upcoming service and we’re very excited that their first interim service is active in Vanuatu. With this kind of opportunity, including the Hawaiki project underway, the future for our Pacific people looks promising, and we can finally realise the dream of becoming more engaged in the digital economy. Even though I still manage our Niue project, I have found more opportunities in the land of the long white cloud, Aotearoa New Zealand. I am currently involved in the Makanet project, which will use the Kacific service to deliver broadband to rural and remote locations in New Zealand. This will be a major undertaking, and the potential to connect the under-served communities of New Zealand mirrors that of our own under-served Pacific communities.
The ISIF programme has assisted some great projects in the past and I’m sure it will continue to help others grow to greater heights. So if you’re interested in using this great resource to develop and gain more exposure for your work, please don’t hesitate to apply at https://isif.asia/award
I’ll be happy to connect with anyone who is wanting more information about our ISIF Award experience as well as our ongoing projects in the Pacific.