Report available! Telemetering the telltale signs of power issues of wireless internet relays

The TellTale project was conceived with the aim of measuring and projecting the power uptime of wireless internet relays. In rural areas, and wherever such projections are not available, operators often fail to address downtimes in a timely manner, thereby increasing the number and duration of downtimes, and/or fail to project the power needs of a relay properly. These issues have direct adverse economic consequences for both providers and users.

In line with this, the project objectives were to:
1. Identify an affordable and replicable sensor + SBC + internet uplink module for measuring power charge and discharge
2. Create a cloud-based, machine-learning-supported system for data ingestion, storage, preparation, analysis and reporting.
3. Develop an easy-to-use reporting and alert system with PC and mobile (Android) applications
4. Measure and report on the cost-saving and improved uptime impact of the project
5. Disseminate the project findings and share the systems design
6. Create a paid support system for interested parties.

The project has achieved most of its objectives. An AirJaldi “TellTale” system, capable of measuring battery voltage and generating indicators and alerts based on its change over time, has been built, demonstrated and is ready for distribution and sharing. At a device cost of around US $20 (hardware components), the system is affordable, as are the software packages and cloud hosting services required.

AirJaldi will offer TellTale using a Freemium model. Interested users can either download the source code and manuals at no cost from GitHub (accessed directly or via our website and those of other partners), or choose one of various models of paid support offered by AirJaldi.

TellTale’s User Interface (UI) was designed to be clear and easy to use and update, and is available in both computer and mobile versions. An Android APK, offering a stripped-down version of the web UI with a focus on alerts, was also created and made available for users.

We plan to continue working on improving and enriching TellTale in the coming months and will share information and resources.

The final report is available here.

Community LTE in Papua project by Yanobama published Final Report

The project “Community LTE in Papua” delivered a Community LTE (CoLTE) network, a lightweight, Internet-only LTE core network (EPC) designed to facilitate the deployment and operation of small-scale, community-owned and -operated LTE networks, with a particular eye towards expanding Internet access into rural areas with limited and unreliable backhaul.

The CoLTE network comes paired with a basic, IP based network manager as well as basic web services. The key differentiator of CoLTE, when compared to existing LTE solutions, is that in CoLTE the EPC is designed to be located in the field and deployed alongside a small number of cellular radios (eNodeBs), as opposed to the centralized model seen in large-scale telecom networks.

The project also provided performance results and lessons learned from a real world CoLTE network deployed in rural Indonesia. This network has been sustainably operating for over six months, currently serves over 40 active users, and provides measured backhaul reductions of up to 45% when compared to cloud core solutions.

Read their Final Technical Report for all the details about their work in Indonesia.

Simulating satellite Internet traffic to a small island Internet provider

A significant number of islands are too remote to make submarine cable Internet connections economical. We’re looking mostly at the Pacific, but islands like these exist elsewhere, too. There are also remote places on some continents, and of course there are cruise ships, which can’t connect to cables while they’re underway. In such cases, the only commercial way to provision Internet at this point is via satellite.

Satellite Internet is expensive and sells by the Mbps per month (megabits per second of capacity per month). So places with few people – up to a few tens of thousands perhaps – usually find themselves with connections clearly this side of a Gbps (gigabit per second). The time it takes for the bits to make their way to the satellite and back adds latency, and so we have all the ingredients for trouble: a bottleneck prone to congestion and long round-trip times (RTT), which give TCP senders an outdated picture of the actual level of congestion that their packets are going to encounter.

I have discussed these effects at length in two previous articles published at the APNIC blog, so won’t repeat this here. Suffice to say: It’s a problem worth investigating in depth, and we’re doing so with help from grants by ISIF Asia in 2014 and 2016 and Internet NZ. This blog post describes how we’ve built our simulator, which challenges we’ve come up against, and where we’re at some way down our journey.

What are the questions we want to answer?

The list is quite long, but our initial goals are:

  • We’d like to know under which circumstances (link type – GEO or MEO, bandwidth, load, input queue size) various adverse effects such as TCP queue oscillation or standing queues occur.
  • We’d like to know what input queue size represents the best compromise in different circumstances.
  • We’d like to know how much improvement we can expect from devices peripheral to the link, such as performance-enhancing proxies and network coders.
  • We’d like to know how best to parameterise such devices in the scenarios in which one might deploy them.
  • We’d like to know how devices with optimised parameters behave when loads and/or flow size distributions change.

That’s just a start, of course, and before we can answer any of these, the biggest question to solve is: How do you actually build, configure, and operate such a simulator?

Why simulate, and why build a hybrid software/hardware simulator?

We get this question a lot. There are things we simply can’t try on real satellite networks without causing serious inconvenience or cost. Some of the solutions we are looking at require significant changes in network topology. Any island out there keen to go without Internet for a few days while we try something out? So we’d better make sure it can be done in a controlled environment first.

Our first idea was to try to simulate things in software. There is a generation of engineers and networking people who have been brought up on the likes of ns-2, ns-3, mininet etc., and they swear by it. We’re part of that generation, but the moment we tried to simulate a satellite link in ns-2 with more than a couple of dozen Mbps and more than a handful of simultaneous users generating a realistic flow size distribution, we knew that we were in trouble. The experiment was meant to simulate just a few minutes’ worth of traffic, for just one link configuration, and we were looking at days of simulation. No way. This wasn’t scalable.

Also, with a software simulator, you rely on software simulating a complex system with timing, concurrent processes, etc., in an entirely sequential way. How do you know that it gets the chaos of congestion right?

So we opted for farming out as much as we could to hardware. Here, we’re dealing with actual network components, real packets, and real network stacks.

There’s been some debate as to whether we shouldn’t be calling the thing an emulator rather than a simulator. Point taken. It’s really a bit of both. We take a leaf here from airline flight simulators, which also leverage a lot of hardware.

The tangible assets

Our island-based clients at present are 84 Raspberry Pis, complemented by 10 Intel NUCs. Three Supermicro servers simulate the satellite link and terminal equipment (such as PEPs or network coding encoders and decoders), and another 14 Supermicros of varying vintage act as the servers of the world that provide the data which the clients on the island want.

The whole thing is tied together by a number of switches, and all servers have external Internet access, so we can remotely access them to control experiments without having to load the actual experimental channel. The image in Figure 1 below shows the topology – the “island” is on one side, the satellite “link” in the middle, and the “world” servers on the other.

Figure 1: The topology of our simulator. 84 Raspberry Pis and 10 Intel NUCs represent the island clients on the (blue) island network. Three Super Micro servers emulate the satellite link and run the core infrastructure either side (light blue network). A further 14 Super Micros represent the servers of the world that send data to the island (red network). All servers are accessible via our external network (green), so command and control don’t interfere with experiments.

Simulating traffic: The need for realistic traffic data

High latency is a core difference between satellite networks and, say, LANs or MANs. As I’ve explained in a previous blog, this divides TCP flows (the packets of a TCP connection going in one direction) into two distinct categories: Flows which are long enough to become subject to TCP congestion control, and those that are so short that their last data packet has left the sender by the time the first ACK for data arrives.

In networks where RTT is no more than a millisecond or two, most flows fall into the former category. In a satellite network, most flows don’t experience congestion control – but contribute very little data. Most of the data on satellite networks lives in flows whose congestion window changes in response to ACKs received.

So we were lucky to have a bit of netflow data courtesy of a cooperating Pacific ISP. From this, we’ve been able to extract a flow size distribution to assist us in traffic generation. To give you a bit of an idea as to how long the tail of the distribution is: We’re looking at a median flow size of under 500 bytes, a mean flow size of around 50 kB, and a maximum flow size of around 1 GB.

A quick reminder for those who don’t like statistics: The median is what you get if you sort all flows by size and take the flow size half-way down the list. The mean is what you get by adding all flow sizes and dividing by the number of flows. A distribution with a long tail has a mean that’s miles from the median. Put simply: Most flows are small but most of the bytes sit in large flows.
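To make the long-tail point concrete, here is a minimal sketch with made-up numbers (illustrative only, not our actual netflow data):

```python
# Illustration only: a made-up long-tailed flow size distribution,
# not the actual netflow data from the Pacific ISP.
import statistics

flows = [400] * 9000 + [50_000] * 900 + [5_000_000] * 99 + [1_000_000_000]  # bytes

print("median:", statistics.median(flows))        # 400 bytes - the "typical" flow
print("mean:  ", round(statistics.mean(flows)))   # ~150 kB - dragged up by big flows
print("share of bytes in flows > 1 MB:",
      round(sum(f for f in flows if f > 1_000_000) / sum(flows), 3))  # ~0.97
```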

Simulating traffic: Supplying a controllable load level

Another assumption we make is this: By and large, consumer Internet users are reasonably predictable creatures, especially if they come as a crowd. As a rule of thumb, if we increase the number of users by a factor of X, then we can reasonably expect that the number of flows of a particular size will also roughly increase by X. So if the flows we sampled were created by, say, 500 users, we can approximate the behaviour of 1000 users simply by creating twice as many flows from the same distribution. This gives us a kind of “load control knob” for our simulator.

But how are we creating the traffic? This is where our own purpose-built software comes in. Because we have only 84 Pis and 10 NUCs, but want to be able to simulate thousands of parallel flows, each physical “island client” has to play the role of a number of real clients. Our client software does this by creating a configurable number of “channels”, say 10 or 30 on each physical client machine.

Each channel creates a client socket, randomly selects one of our “world” servers to connect to, opens a connection and receives a certain number of bytes, which the server determines by random pick from our flow size distribution. The server then disconnects, and the client channel creates a new socket, selects another server, etc. Selecting the number of physical machines and client channels to use thus gives us an incremental way of ramping up load on the “link” while still having realistic conditions.
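A minimal sketch of what one such client channel could look like follows; hostnames, port, channel count and duration are placeholders, and the real client software also handles the pro-rating of tiny flows, logging and more careful error handling:

```python
# Sketch of an "island client" channel: repeatedly connect to a random
# "world" server and sink whatever number of bytes that server decides to
# send (the real server draws the flow size from the netflow-derived
# distribution). Hostnames, port and channel count are placeholders.
import random
import socket
import threading
import time

WORLD_SERVERS = ["world01.example", "world02.example", "world03.example"]
PORT = 5000
CHANNELS = 10          # e.g. 10 or 30 channels per physical Pi/NUC
RUN_SECONDS = 600      # core experiment duration

def channel(stop_event):
    while not stop_event.is_set():
        server = random.choice(WORLD_SERVERS)
        try:
            with socket.create_connection((server, PORT), timeout=30) as s:
                while s.recv(65536):   # sink data until the server disconnects
                    pass
        except OSError:
            pass                       # server unreachable or timed out: retry

stop = threading.Event()
threads = [threading.Thread(target=channel, args=(stop,)) for _ in range(CHANNELS)]
for t in threads:
    t.start()
time.sleep(RUN_SECONDS)
stop.set()
for t in threads:
    t.join()
```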

Simulating traffic: Methodology challenges

There are a couple of tricky spots to navigate, though: Firstly, netflow reports a significant number of flows that consist of only a single packet, with or without payload data. These could be rare ACKs flowing back from a slow connection in the opposite direction, or probing SYN packets, or…

However, our client channels create a minimum amount of traffic per flow through their connection handshake. This amount exceeds the flow size of these tiny flows. So we approximate the existence of these flows by pro-rating them in the distribution, i.e., each client channel connection accounts for several of these small single-packet flows.

Secondly, the long tail of the distribution means that as we sample from it, our initial few samples are very likely to have an average size that is closer to the median than to the mean. In order to obtain a comparable mean, we need to run our experiments for long enough so that our large flows have a realistic chance to occur. This is a problem in particular with experiments using low bandwidths, high latencies (GEO sats), and a low number of client channels.

For example, a ten minute experiment simulating a 16 Mbps GEO link with 20 client channels will typically generate a total of only about 14,000 flows. The main reason for this is the time it takes to establish a connection via a GEO RTT of over 500 ms. Our distribution contains well over 100,000 flows, with only a handful of really giant flows. So results at this end are naturally a bit noisy, depending on whether, and which, giant flows in the 100’s of MB get picked by our servers. This forces us to run rather lengthy experiments at this end of the scale.

Simulating the satellite link itself

For our purposes, simulating a satellite link mainly means simulating the bandwidth bottleneck and the latency associated with it. More complex scenarios may include packet losses from noise or fading on the link, or issues related to the link layer protocol. We’re dedicating an entire server to the simulation (server K in the centre of the topology diagram), so we have enough computing capacity to handle every case of interest. The rest is software, and here the choice is chiefly between a network simulator (such as ns-3) and something relatively simple like the Linux tc utility.

The latter lets us simulate bandwidth constraints, delay, sporadic packet loss and jitter: enough for the moment. That said, it’s a complex beast, which exists in multiple versions and – as we found out – is quite quirky and not extensively documented.

Following examples given by various online sources, we configured a tc netem qdisc to represent the delay, which we in turn chained to a token bucket filter. The online sources also suggested quality control: ping across the simulated link to ensure the delay is in place, then run iperf in UDP mode to see that the token bucket filter is working correctly. Sure enough, the copy-and-paste example passed these two tests with flying colours. It’s just that we then got rather strange results once we ran TCP across the link. So we decided to ping while we were running iperf. Big surprise: Some of the ping RTTs were in the hundreds of seconds – far longer than any buffer involved could explain. Moreover, no matter which configuration parameter we tweaked, the effect wouldn’t go away. So, a bug it seems. We finally found a workaround involving ingress redirection to an intermediate functional block (ifb) device, which passes all tests and produces sensible results for TCP. Just goes to show how important quality control is!
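For reference, the kind of netem-plus-token-bucket configuration the online examples suggest looks roughly like the sketch below; the interface name and the numbers are placeholders, and our final setup additionally redirects ingress via the ifb device mentioned above:

```python
# Rough sketch of the netem + token bucket filter chain described above.
# Interface name and values are placeholders; requires root. Our working
# setup additionally redirects ingress traffic through an ifb device to
# avoid the odd behaviour we saw when TCP crosses the emulated link.
import subprocess

IFACE = "eth1"        # placeholder: interface facing the "satellite link"
DELAY = "250ms"       # one-way GEO-like delay
RATE = "16mbit"       # link capacity
BURST = "32kb"        # token bucket burst size
LIMIT = "100kb"       # input queue capacity under test

def tc(*args):
    subprocess.run(["tc", *args], check=True)

# netem at the root adds the propagation delay ...
tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "netem", "delay", DELAY)
# ... and a token bucket filter chained underneath enforces the bandwidth
# bottleneck and the queue capacity we want to study.
tc("qdisc", "add", "dev", IFACE, "parent", "1:1", "handle", "10:",
   "tbf", "rate", RATE, "burst", BURST, "limit", LIMIT)
```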

Simulating world latency

We also use a similar technique to add a variety of fixed ingress and egress delays to the “world” servers. This models the fact that TCP connections in real life don’t end at the off-island sat gate, but at a server that’s potentially a continent or two down the road and therefore another few dozen or even hundreds of milliseconds away.

Link periphery and data collection

We already know that we’ll want to try PEPs, network coders etc., so we have another server each on both the “island” (server L) and the “world” (server J) side of the server (K) that takes care of the “satellite link” itself. Where applicable, these servers host the PEPs and / or network coding encoders / decoders. Otherwise, these servers simply act as routers. In all cases, these two servers also function as our observation points.

At each of the two observation points, we run tcpdump on eth0 to capture all packets entering and leaving the link at either end. These get logged into pcap capture files on L and J.

An alternative to data capture here would be to capture and log on the clients and / or “world” servers. However, capture files are large and we expect lots of them, and the SD cards on the Raspberry Pis really aren’t a suitable storage medium for this sort of thing. Besides that, we’d like to let the Pis and servers get on with the job of generating and sinking traffic rather than writing large log files. Plus, we’d have to orchestrate the retrieval of logs from 108 machines with separate clocks, meaning we’d have trouble detecting effects such as link underutilisation.

So servers L and J are really without a lot of serious competition as observation points. After each experiment, we use tshark to translate the pcap files into text files, which we then copy to our storage server (bottom).
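Conceptually, the capture and conversion step looks something like the following sketch; interface, file names and the exact field list are placeholders:

```python
# Sketch of the capture-and-convert step on observation servers L and J.
# Interface, file names and the field list are placeholders.
import subprocess

PCAP = "/data/island_side.pcap"
LOG = "/data/island_side.log"

# 1. Capture everything entering/leaving the link at this observation point.
capture = subprocess.Popen(
    ["tcpdump", "-i", "eth0", "-w", PCAP, "-s", "96"])  # snaplen: headers suffice
# ... run the experiment, then terminate the capture ...
capture.terminate()
capture.wait()

# 2. Convert the pcap into a text log with per-packet fields for analysis.
fields = ["frame.time_epoch", "ip.src", "ip.dst", "tcp.srcport", "tcp.dstport",
          "ip.len", "tcp.len", "tcp.seq", "tcp.ack", "tcp.flags"]
cmd = ["tshark", "-r", PCAP, "-T", "fields"]
for f in fields:
    cmd += ["-e", f]
with open(LOG, "w") as out:
    subprocess.run(cmd, stdout=out, check=True)
```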

For some experiments, we also use other tools such as iperf (so we can monitor the performance of a well-defined individual download) or ping (to get a handle on RTT and queue sojourn times). We run these between the NUCs and some of the more powerful “world” servers.

A basic experiment sequence

Each experiment basically follows the same sequence, which we execute via our core script:

  1. Configure the “sat link” with the right bandwidth, latency, queue capacity etc.
  2. Configure and start any network coded tunnel or PEP link we wish to user between servers L and J.
  3. Start the tcpdump capture at the island end (server L) of the link.
  4. Start the tcpdump capture at the world end (server J) of the link with a little delay. This ensures that we capture every packet heading from the world to the island side.
  5. Start the iperf server on one of the NUCs. Note that in iperf, the client sends data to the server rather than downloading it.
  6. Start the world servers.
  7. Ping the special purpose client from the special purpose server. This functions as a kind of “referee’s start whistle” for the experiment as it creates a unique packet record in both tcpdump captures, allowing us to synchronise them later.
  8. Start the island clients as simultaneously as possible.
  9. Start the iperf client.
  10. Start pinging – typically, we ping 10 times per second.
  11. Wait for the core experiment duration to expire. The clients terminate themselves.
  12. Ping the special purpose client from the special purpose server again (“stop whistle”).
  13. Terminate pinging (usually, we ping only for part of the experiment period, though)
  14. Terminate the iperf client.
  15. Terminate the iperf server.
  16. Terminate the world servers.
  17. Convert the pcap files on J and L into text log files with tshark
  18. Retrieve text log files, iperf log and ping log to the storage server.
  19. Start the analysis on the storage server.

Between most steps, there is a wait period to allow the previous step to complete. For a low load 8 Mbps GEO link, the core experiment time needs to be 10 minutes to yield a half-way representative sample from the flow size distribution. The upshot is that the pcap log files are small, so need less time for conversion and transfer to storage. For higher bandwidths and more client channels, we can get away with shorter core experiment durations. However, as they produce larger pcap files, conversion and transfer take longer. Altogether, we budget around 20 minutes for a basic experiment run.
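A heavily compressed sketch of how such a sequence might be scripted is shown below; host names, paths and commands are placeholders, and several of the steps are omitted:

```python
# Heavily compressed sketch of the sequence above (steps 1-2, the iperf and
# ping traffic, and the tshark/retrieval steps are omitted). Host names,
# paths and commands are placeholders; the real core script does far more
# checking between steps.
import subprocess
import time

def ssh(host, command):
    subprocess.run(["ssh", host, command], check=True)

def ssh_bg(host, command):
    return subprocess.Popen(["ssh", host, command])

# steps 3-4: start the captures, island end first, world end a little later
cap_l = ssh_bg("serverL", "tcpdump -i eth0 -w /data/run1_island.pcap")
time.sleep(2)
cap_j = ssh_bg("serverJ", "tcpdump -i eth0 -w /data/run1_world.pcap")

# step 7: "start whistle" ping leaves a sync marker in both captures
ssh("world-special", "ping -c 1 island-special")

# steps 8 and 11: start the island clients; they terminate themselves after
# the core experiment duration, so this call blocks until they are done
# (see the pssh discussion further down for the 32-connection pitfall)
subprocess.run(["pssh", "-h", "island_hosts.txt", "-t", "0",
                "island_client --channels 10 --duration 600"], check=True)

# step 12: "stop whistle", then stop the captures
ssh("world-special", "ping -c 1 island-special")
for cap in (cap_l, cap_j):
    cap.terminate()
    cap.wait()
```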

Tying it all together

We now have more than 100 machines in the simulator. Even in our basic experiments sequence, we tend to use most if not all of them. This means we need to be able to issue commands to individual machines or groups of machines in an efficient manner, and we need to be able to script this.

Enter the pssh utility. This useful little program lets our scripts establish a number of SSH connections to multiple machines simultaneously, e.g., to start our servers or clients, or to distribute configuration information. It’s not without its pitfalls though: For one, the present version has a hardwired limit of 32 simultaneous connections that isn’t properly documented in the man page. If one requests more than 32 connections, pssh quietly runs the first 32 immediately and then delays the next 32 by 60 seconds, the next 32 by 120 seconds, etc.

We wouldn’t have noticed this had we not added a feature to our analysis script that checks whether all clients and servers involved in the experiment are being seen throughout the whole core experiment period. Originally, we’d intended this feature to pick up machines that had crashed or had failed to start. Instead, it alerted us to the fact that quite a few of our machines were late starters, always by exactly a minute or two.

We now have a script that we pass the number of client channels required. It computes how to distribute the load across the Pi and NUC clients, creates subsets of up to 32 machines to pass to pssh, and invokes the right number of pssh instances with the right client parameters. This lets us start up all client machines within typically less than a second. The whole episode condemned a week’s worth of data to /dev/null, but shows again just how important quality assurance is.
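A sketch of that chunking logic follows; host names and the client command are placeholders, and the real script distributes the channels unevenly between the Pis and the more powerful NUCs:

```python
# Sketch of the work-around for pssh's hardwired limit of 32 simultaneous
# connections: split the client machines into chunks of at most 32 hosts and
# launch one pssh instance per chunk, so all clients start within about a
# second. Host names and the client command are placeholders.
import subprocess

MAX_PSSH_HOSTS = 32

def start_clients(hosts, channels_per_host, duration):
    procs = []
    for i in range(0, len(hosts), MAX_PSSH_HOSTS):
        chunk = hosts[i:i + MAX_PSSH_HOSTS]
        cmd = ["pssh", "-t", "0", "-p", str(len(chunk))]
        for h in chunk:
            cmd += ["-H", h]
        cmd.append(f"island_client --channels {channels_per_host} "
                   f"--duration {duration}")
        procs.append(subprocess.Popen(cmd))   # all chunks launch in parallel
    return procs

# e.g. ~300 channels spread over 94 machines -> 3 channels per host
pis = [f"pi{n:02d}" for n in range(1, 85)]
nucs = [f"nuc{n:02d}" for n in range(1, 11)]
procs = start_clients(pis + nucs, channels_per_host=3, duration=600)
for p in procs:
    p.wait()
```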

Automating the complex processes is vital, so we keep adding scripts to the simulator as we go to assist us in various aspects of analysis and quality assurance.

Observations – and how we use them

Our basic experiment collects four pieces of information:

  1. A log file with information on the packets that enter the link from the “world” side at J (or the PEP or network encoder as the case may be). This file includes a time stamp for each packet, the source and destination addresses and ports, and the sizes of IP packets, the TCP packets they carry, and the size of the payload they contain, plus sequence and ACK numbers as well as the status of the TCP flags in the packet.
  2. A similar log file with information on the packets that emerge at the other end of the link from L and head to the “island” clients.
  3. An iperf log, showing average data rates achieved for the iperf transfer.
  4. A ping log, showing the sequence numbers and RTT values for the ping packets sent.

The first two files allow us to determine the total number of packets, bytes and TCP payload bytes that arrived at and left the link. This gives us throughput, goodput, and TCP byte loss, as well as a wealth of performance information for the clients and servers. For example, we can compute the number of flows achieved and the average number of parallel flows, or the throughput, goodput and byte loss for each client.
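As an illustration, binning throughput and goodput into 100 ms intervals from one of these text logs can be as simple as the following sketch; the field order follows the earlier tshark sketch, and the real analysis scripts do considerably more:

```python
# Sketch: bin throughput (all bytes on the wire, from ip.len) and goodput
# (TCP payload bytes only, from tcp.len) into 100 ms intervals from the
# tshark text log produced at an observation point. Field order matches
# the earlier tshark sketch; the file name is a placeholder.
from collections import defaultdict

BIN = 0.1                    # seconds
thru = defaultdict(int)      # bin index -> wire bytes
good = defaultdict(int)      # bin index -> TCP payload bytes

with open("/data/island_side.log") as log:
    for line in log:
        cols = line.rstrip("\n").split("\t")
        if len(cols) < 7 or not cols[0]:
            continue
        t, ip_len, tcp_len = float(cols[0]), cols[5], cols[6]
        b = int(t / BIN)
        thru[b] += int(ip_len or 0)
        good[b] += int(tcp_len or 0)

for b in sorted(thru):
    print(f"{b * BIN:8.1f}s  throughput {thru[b] * 8 / BIN / 1e6:6.2f} Mbps"
          f"  goodput {good[b] * 8 / BIN / 1e6:6.2f} Mbps")
```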

Figure 2: Throughput and goodput on a simulated 16 Mbps satellite link carrying TCP for 20 client sockets with an input queue of 100kB on the satellite uplink. Note clear evidence of link underutilisation – yet the link is already impaired.

Figure 2 above shows throughput (blue) and goodput (red) in relation to link capacity, taken at 100 ms intervals. The link capacity is the brown horizontal line – 16 Mbps in this case.

Any bit of blue that doesn’t reach the brown line represents idle link capacity – evidence of an empty queue some time during the 100 ms in question. So you’d think there’d be no problem fitting a little bit of download in, right? Well, that’s exactly what we’re doing at the beginning of the experiment, and you can indeed see that there’s quite a bit less spare capacity – but still room for improvement.

Don’t believe me? Well, the iperf log gives us an idea as to how a single long download fares in terms of throughput. Remember that our clients and servers aim at creating a flow mix but don’t aim to complete a standardised long download. So iperf is the more appropriate tool here. In this example, our 40 MB download takes over 120 s with an average rate of 2.6 Mbps. If we run the same experiment with 10 client channels instead of 20, iperf might take only a third of the time (41 s) to complete the download. That is basically the time it takes if the download has the link to itself. So adding the extra 10 client channel load clearly has a significant impact.

At 50 client channels, iperf takes 186 seconds, although this figure can vary considerably depending on which other randomly selected flows run in parallel. At 100 client channels, the download sometimes won’t even complete – if it does, it’s usually above the 400 second mark and there’s very little spare capacity left (Figure 3).

Figure 3: At 100 client channels, the download does not complete but there is still a little spare capacity left.

You might ask why the iperf download is so visible in the figures above compared to the traffic contributed by our hundreds of client channels. The answer lies once again in the extreme nature of our flow size distribution and the fact that at any time, a lot of the channels are in connection establishment mode: The 20 client channel experiment above averages only just under 18 parallel flows, and almost all of the 14,000 flows this experiment generates are less than 40 MB: In fact, 99.989% of the flows in our distribution are shorter than our 40 MB download. As we add more load, the iperf download gets more “competition” and also contributes at a lower goodput rate.

The ping log, finally, gives us a pretty good estimate of queue sojourn time. We know the residual RTT from our configuration but can also measure it by pinging after step 2 in the basic experiment sequence. Any additional RTT during the experiment reflects the extra time that the ICMP ping packets spend being queued behind larger data packets waiting for transmission.

One nice feature here is that our queue at server K practically never fills completely: To do so, the last byte of the last packet to be accepted into the queue would have to occupy the last byte of queue capacity. However, with data packets being around 1500 bytes, the more common scenario is that the queue starts rejecting data packets once it has less than 1500 bytes capacity left. There’s generally still enough capacity for the short ping packets to slip in like a mouse into a crowded bus, though. It’s one of the reasons why standard size pings aren’t a good way of detecting whether your link is suffering from packet loss, but for our purposes – measuring sojourn time – it comes in really handy.

Figure 4 shows the ping RTTs for the first 120 seconds of the 100 client channel experiment above. Notice how the maximum RTT tops out at just below 610 ms? That’s 50 ms above the residual RTT of 560 ms (500 ms satellite RTT and 60 ms terrestrial, plus the +/-5% terrestrial jitter that we’ve configured here). No surprises here: That’s exactly the time it takes to transmit the 800 kbits of capacity that the queue provides. In other words: The pings at the top of the peaks in the plot hit a queue that was, for the purposes of data transfer, overflowing.

The RTT here manages to hit its minimum quite frequently, and this shows in throughput of just under 14 Mbps, 2 Mbps below link capacity.

Figure 4: Ping RTTs during the first 120 seconds.

Note also that where the queue hits capacity, it generally drains again within a very short time frame. This is queue oscillation. Note also that we ping only once every 100 ms, so we may be missing shorter queue drain or overflow events here because they are too short in duration – and going by the throughput, we know that we have plenty of drain events.

This plot also illustrates one of the drawbacks of having a queue: between e.g., 35 and 65 seconds, there are multiple occasions when the RTT doesn’t return to residual for a couple of seconds. This is called a “standing queue” – the phenomenon commonly associated with buffer bloat. At times, the standing queue doesn’t contribute to actual buffering for a couple of seconds but simply adds 20 ms or so of delay. This is undesirable, not just for real-time traffic using the same queue, but also for TCP trying to get a handle on the RTT. Here, it’s not dramatic, but if we add queue capacity, we can provoke an almost continuous standing queue: the more buffer we provide, the longer it will get under load.

Should we be losing packet loss altogether?

There’s one famous observable that’s often talked about but surprisingly difficult to measure here: packet loss. How come, you may ask, given that we have lists of packets from before and after the satellite link?

Essentially, the problem boils down to the question of what we count as a packet, segment or datagram at different stages of the path.

Here’s the gory detail: The maximum size of a TCP packet can in theory be anything that will fit inside a single IP packet. The size of the IP packet in turn has to fit into the Ethernet (or other physical layer) frame and has to be able to be processed along the path.

In our simulator, and in most real connected networks, we have two incompatible goals: Large frames and packets are desirable because they lower overhead. On the other hand, if noise or interference hits on the satellite link, large frames present a higher risk of data loss: Firstly, at a given bit error rate, large packets are more likely to cop bit errors than small ones. Secondly, we lose more data if we have to discard a large packet after a bit error than if we have to discard a small packet only.

Then again, most of our servers sit on Gbps Ethernet or similar, where the network interfaces have the option of using jumbo frames. The jumbo frame size of up to 9000 bytes represents a compromise deemed reasonable for this medium. However, these may not be ideal for a satellite link. For example, given a bit error probability of 0.0000001, we can expect to lose 7 in 1000 jumbo frames, or 0.7% of our packet data. If we use 1500 byte frames instead, we’ll only lose just over 1 in 1000 frames, or 0.12% of our packet data. Why is that important? Because packet loss really puts the brakes on TCP, and these numbers really make a difference.

The number of bytes that a link may transfer in a single IP packet is generally known as the maximum transmission unit (MTU). There are several ways to deal with diversity in MTUs along the path: Either, we can restrict the size of our TCP segment right from the sender to fit into the smallest MTU along the path, or we can rely on devices along the way to split IP packets with TCP segments into smaller IP packets for us. Modern network interfaces do this on the fly with TCP segmentation offload (TSO) and generic segmentation offload (GSO, see https://sandilands.info/sgordon/segmentation-offloading-with-wireshark-and-ethtool). Finally, the emergency option when an oversize IP datagram hits a link is to fragment the IP datagram.

In practice, TSO and GSO are so widespread that TCP senders on a Gbps network will generally transmit jumbo frames and have devices further down the path worry about it. This leaves us with a choice in principle: Allow jumbo frames across the “satellite link”, or break them up?

Enter the token bucket filter: If we want to use jumbo frames, we need to make the token bucket large enough to accept them. This has an undesirable side effect: Whenever the bucket has had a chance to fill with tokens, any arriving packets that are ready to consume them get forwarded immediately, regardless of rate (which is why you see small amounts of excess throughput in the plots above). So we’d “buy” jumbo frame capability by considerably relaxing instantaneous rate control for smaller data packets. That’s not what we want, so it seems prudent to stick with the “usual” MTUs of around 1500 bytes and accept fragmentation of large packets.

There’s also the issue of tcpdump not necessarily seeing the true number of packets/fragments involved, because it captures before segmentation offload etc. (https://ask.wireshark.org/questions/3949/tcpdump-vs-wireshark-differences-packets-merged).

The gist of it all: The packets we see going into the link aren’t necessarily the packets that we see coming out at the other end. Unfortunately that happens in a frighteningly large number of cases.

In principle, we could check from TCP sequence numbers & IP fragments whether all parts of each packet going in are represented in the packets going out. However, with 94 clients all connecting to 14 servers with up to 40-or-so parallel channels, doing the sequence number accounting is quite a computing-intensive task. But is it worth it? For example, if I count a small data packet with 200 bytes as lost when it doesn’t come out the far end, then what happens when I have a jumbo frame packet with 8000 bytes that gets fragmented into 7 smaller packets and one of these fragments gets dropped? Do I count the latter as one packet loss, or 1/7th of a packet loss, or what?

The good news: For our purposes, packet loss doesn’t actually help explain much unless we take it as an estimate of byte loss. But byte loss is an observable we can compute very easily here: We simply compare the number of observed TCP payload bytes on either side of the link. Any missing byte must clearly have been in a packet that got lost.

Quality control

There is a saying in my native Germany: “Wer misst, misst Mist”. Roughly translated, it’s a warning that those who measure blindly tend to produce rubbish results. We’ve already seen a couple of examples of how an “out-of-left field” effect caused us problems. I’ll spare you some of the others but will say that there were just a few!

So what are we doing to ensure we’re producing solid data? Essentially, we rely on four pillars:

  1. Configuration verification and testing. This includes checking that link setups have the bandwidth configured, that servers and clients are capable of handling the load, and that all machines are up and running at the beginning of an experiment.
  2. Automated log file analysis. When we compare the log files from either side of the link, we also compute statistics about when each client and server was first and last seen, and how much traffic went to/from the respective machine. Whenever a machine deviates from the average by more than a small tolerance or a machine doesn’t show up at all, we issue a warning.
  3. Human inspection of results: Are the results feasible? E.g., are throughput and goodput within capacity limits? Do observables change in the expected direction when we change parameters such as load or queue capacity? Plots such as those discussed above also assist us in assessing quality. Do they show what we’d expect, or do they show artefacts? This also includes discussion of our results so there are at least four eyes looking at data.
  4. Scripting: Configuring an experiment requires the setting of no less than seven parameters for the link simulation, fourteen different RTT latencies for the servers, and load and timeout configurations for 94 client machines, an iperf download size, plus the orchestrated execution of everything with the right timing – see above. Configuring all of this manually would be a recipe for disaster, so we script as much as we can – this takes care of a lot of typos!

Also, an underperforming satellite link could simply be a matter of bad link configuration rather than a fundamental problem with TCP congestion control. It would be all too easy to take a particular combination of link capacity and queue capacity to demonstrate an effect without asking what influence these parameters have on the effect. This is why we’re performing sweeps – when it comes to comparing the performance of different technologies, we want to ensure that we are putting our best foot forward.

Sweeping

So what’s the best queue capacity for a given link capacity? You may remember the old formula for sizing router queues, RTT * bandwidth. However, there’s also Guido Appenzeller’s PhD thesis from Stanford, in which he recommends dividing this figure by the square root of the number of long-lived parallel flows.

This presents us with a problem: We can have hundreds of parallel flows in the scenarios we’re looking at. However, how many of those will qualify as long-lived depends to no small extent on the queue capacity at the token bucket filter!

For example, take the 16 Mbps link with 20 client channels we’ve already looked at before. At 16 Mbps (=2MBps) and 500 ms RTT, the old formula suggests 1 MB queue capacity. We see fairly consistently 17-18 parallel flows (not necessarily long-lived ones, though) regardless of queue capacity. Assuming extremely naively that all of these flows might qualify as long-lived (well, we know they’re not), Guido’s results suggest dividing the 1 MB by a factor of around 4, which just so happens to be a little larger than the 100kB queue we’ve deployed here. But how do we know whether this is the best queue capacity to choose?
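As a quick sanity check on those numbers:

```python
# Worked numbers for the queue sizing rules discussed above:
# classic rule : queue = bandwidth * RTT
# Appenzeller  : queue = bandwidth * RTT / sqrt(number of long-lived flows)
from math import sqrt

bandwidth = 16e6 / 8     # 16 Mbps in bytes/s -> 2 MB/s
rtt = 0.5                # 500 ms satellite RTT

bdp = bandwidth * rtt
print(f"classic rule:      {bdp / 1e6:.1f} MB")                 # 1.0 MB
for n in (17, 18):
    print(f"Appenzeller, n={n}: {bdp / sqrt(n) / 1e3:.0f} kB")  # roughly 240 kB
```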

A real Internet satellite link generally doesn’t just see a constant load. So how do we know which queue capacity works best under a range of loads?

The only way to get a proper answer is to try feasible combinations of load levels and queue capacities. Which poses the next question: What exactly do we mean by “works best”?

Looking at the iperf download, increasing the queue size at 20 client channels always improves the download time. This would suggest dumping Guido’s insights in favour of the traditional value. Not so fast: Remember those standing queues in Figure 4? At around 20 ms extra delay, they seemed tolerable. Just going to a 200kB queue bumps these up to 80 ms, though, and they’re a lot more common, too. Anyone keen to annoy VoIP users for the sake of a download that could be three times faster? Maybe, maybe not. We’re clearly getting into compromise territory here, but around 100kB-200kB seems to be in the right ballpark.

So how do we zero in on a feasible range? Well, in the case of the 16 Mbps link, we looked at (“swept”) eleven potential queue capacities between 30 kB and 800 kB. For each capacity, we swept up to nine load levels between 10 and 600 client channels. That’s many dozens of combinations, each of which takes around 20 minutes to simulate, plus whatever time we then take for subsequent manual inspection. Multiply this with the number of possible link bandwidths of interest in GEO and MEO configuration, plus repeats for experiments with quality control issues, and we’ve got our work cut out. It’s only then that we can get to coding and PEPs.

What’s next?

A lot. If the question on your mind starts with “Have you thought of…” or “Have you considered…,” the answer is possibly yes. Here are a few challenges ahead:

  • Network coding (TCP/NC): We’ve already got the encoder and decoder ready, and once the sweeps are complete and we have identified the parameter combinations that represent the best compromises, we’ll collect performance data here. Again, this will probably take a few sweeps of possible generation sizes and overhead settings.
  • Performance-enhancing proxies (PEP): We’ve identified two “free” PEPs, PEPSal and TCPEP, which we want to use both in comparison and – eventually – in combination with network coding.
  • UDP and similar protocols without congestion control. In our netflow samples, UDP traffic accounts for around 12% of bytes. How will TCP behave in the presence of UDP in our various scenarios? How do we best simulate UDP traffic given that we know observed throughput, but can only control offered load? In principle, we could model UDP as a bandwidth constraint, but under heavy TCP load, we get to drop UDP packets as well, so it’s a constraint that’s a little flexible, too. What impact does this have on parameters such as queue capacities, generation sizes etc.?
  • Most real links are asymmetric, i.e., the inbound bandwidth is a lot larger than the outbound bandwidth. So far, we have neglected this as our data suggests that the outbound channels tend to have a comparatively generous share of the total bandwidth.
  • Simulating world latencies. At this point, we’re using a crude set of delays on our 14 “world servers”. We haven’t even added jitter. What if we did? What if we replaced our current crude model of “satgate in Hawaii” with a “satgate in X” model, where the latencies from the satgate in X to the servers would be distributed differently?

 

As you can see, lots of interesting work ahead!

TINDAK MALAYSIA: Towards A Fairer Electoral System

Tindak Malaysia is the winner of the ISIF Asia 2016 Technical Innovation Award and the Community Choice Award 2016.

TINDAK MALAYSIA: Towards A Fairer Electoral System –
1 Person, 1 Vote, 1 Value

A democracy is reflected in the sovereignty of the people. They are supposed to have the power to choose their leaders under Free and Fair Elections. Unfortunately, those in power will try to manipulate the electoral system to entrench their grip on power. Attempts to manipulate the system could be…

  • in tweaking the rules of elections in their favour,
  • in the control of the mainstream media,
  • through threats,
  • through bribery,
  • through pollsters manipulating public perception,
  • during the vote count,
  • by making election campaigns so expensive that only the rich or powerful could afford to run or win,
  • through boundary delineation, either by gerrymandering or through unequal seat size.

The November 2016 US Presidential Election brought all of the above into sharp relief. There were two front runners, Donald Trump and Hillary Clinton.

Both candidates were disliked by more than half the electorate, and both generated such strong aversion that a dominant campaign theme was to vote for the lesser evil. The people were caught in the politics-of-no-choice.

Eventually, the winning candidate won with slightly fewer votes (0.3%) than the losing candidate, each winning only 27% of the electorate. Yet the winner received 306 electoral votes (57%) while the loser got 232 (43%), a huge difference!

The winning candidate won with barely a quarter of the total voting population. 43% of the voters did not vote. In other words, only 27% of the electorate decided on the President.

Consider Malaysia. We are located in South-east Asia. We have a population of 31 million with about 13.5 million registered voters. We practise a First-Past-The-Post System of elections, meaning the winner takes all, just like in the US.

In the 2013 General Elections, the Ruling Party obtained 47.4% of the votes and 60% of the seats. Meanwhile the opposition, with 52% of the votes, won only 40% of the seats – more votes, but much fewer seats.

We had all the problems listed above except that no opinion polls were allowed on polling day. But the most egregious problem of all was boundary delimitation, which is the subject of our project.

In 2013, the Ruling Party, with 47.4% of the popular vote, secured 60% of the seats. To hang on to power, they resorted to abuse and to changing the laws to suppress the Opposition and the people. Our concern was that continuing oppression of the people in this manner could lead to violent protests. It was our hope to achieve peaceful change in a democratic manner through the Constitution.

From a Problem Tree Analysis, it was found that the problem was cyclic in nature. The root cause was a Fascist Government maintaining power through Fraudulent Elections. See red box opposite.
Problem Tree Analysis


If current conditions prevail without any changes, they can still win power with just 39% of the votes.
50-Year General Elections Voting Trend


What happened?

Malapportionment! The seats won by the Ruling Party in the chart below are the blue lines with small numbers of voters in the rural seats. The red lines with huge numbers are in the urban areas won by the Opposition. It was found that they could have won 50% of the seats with merely 20.22% of the votes.
Malapportionment in General Elections – GE13


The above computation was based on popular vote. If based on total voting population, BN needed only 17.4% to secure a simple majority.

What is the solution we propose?

The solution was obvious. Equalize the seats.
But for the past 50 years, no one seemed to object to the unfair maps.

Why? The objectors never managed to submit a substantive objection because:

  • Biased EC stacked with Ruling Party cronies, who actively worked to prevent any objections being made,
  • Constitution rules of delimitation drafted to make objections difficult, such that the EC had a lot of leeway to interpret them any way it wished,
  • Very high barriers to objection,
  • Insufficient information offered during a Redelineation exercise. Given the 1-month deadline, it was impossible for an ordinary voter to prepare a proper objection.

How are Constituencies Drawn – Districting?

MAP 1 – SELANGOR POLLING DISTRICTS 2013

We start with a Polling District (PD). The PD is the smallest unit of area in a Constituency. It is defined by a boundary, a name and/or ID code, and includes the elector population. Map 1 is an example of PDs. To avoid clutter, the elector numbers are carried in a separate layer which can be overlaid on top.

Districting is conducted by assembling these PDs into Constituencies. In theory, the Constituencies are supposed to have roughly the same number of electors, unless variation is permitted in the Constitution.

What happens when the Election Commission presents a map without any PDs, as shown in Map 2 below?
MAP 2 – EC’S SELANGOR REDELINEATION PROPOSAL 2016


This was gazetted by the EC on 15th Sept 2016 for public objections. No Polling Districts are identified. In reality, the EC had all the information in digital format under an Electoral Geographical Information System (EGIS) but they kept it from the public.

An elector faced with such a map is stuck. He would not know where to begin, nor would he have the technical knowledge to carry out the redistricting even if he wanted to, all within the time limit of 1 month.

This has been the case for the past 50 years. No one could object effectively.

So we had a situation where electors wanted to object but were unable to do so because of insufficient information and lack of expertise.

Studying the problem, we decided that the solution was to bridge the Digital Divide through Technical Innovation as well as to bring the matter out of the jurisdiction of the EC.

Technical:

  1. Digitize all the PD in Malaysia, about 8000 of them. This took us 1 year.
  2. Learn how to redistrict using digital systems. We used QGIS, an open source GIS system,
  3. Develop a plug-in to semi-automate and speed up the redistricting process.

Legal:

  1. Bring in legal expertise. Collaborate with lawyers to bring the matter out of the control of the EC and into the jurisdiction of the courts in order to defend the Constitution.

We started this initiative in July 2011 and by Dec 2015, we had digitised all the PD and redistricted the whole country twice, sharpening our expertise and correcting errors in the process. We got the Bar Council (Lawyers Association) to team up with us to guide the public on how to object when the Redelineation exercise by the EC is launched.

Redelineation, 1st Gazette:

On 15th Sept 2016, the EC published the First Gazette of the Redelineation Proposal. For the State of Selangor with 22 Parliamentary seats, they published one map only – MAP 2. We analysed their proposal and found glaring disparities in the seat sizes, with elector population ranging from 39% to 200% of the State Electoral Quota (EQ) – see MAP 3.

MAP 3 – SELANGOR MALAPPORTIONMENT OF PROPOSED PARLIAMENT SEATS 2016


At a more detailed level, it looks like MAP 4 below. We can see the densely populated central belt (brown columns) sticking out in sharp contrast to the under-populated outlying regions around the perimeter (ochre areas). Clearly the EC has not addressed the inequalities in the voting strength among the various regions.

MAP 4 – SELANGOR VOTER DENSITY


Trial Run: We conducted a trial run on the EC maps for a local council in Selangor – MPSJ. See MAP 5. It was found that we could maintain local ties with 6 State and 2 Parliamentary Constituencies, with the elector population kept within +/-20% of the mean. This was much better than the EC’s range of -60% to +100%.

MAP 5 – LOCAL COUNCIL MPSJ


We have submitted objections for the First Gazette and await the call for a public hearing by the EC. Our lawyers are monitoring the EC to ensure they comply with the Constitution and preparing lawsuits in case they don’t.

While conducting our research on how to object, we uncovered yet another area of abuse. The boundaries of the polling districts, and the electors within them, had been shifted to other constituencies unannounced. This was a surreptitious form of redelineation outside the ambit of the constitution and a gross abuse of authority. As part of our next project, we intend to focus on this, to prevent such gerrymandering.

In conclusion, we feel like we are peeling an onion. As we unfold one layer, a new layer of fraud is exposed. It seems a never-ending process. But we are determined to keep on digging until we reach the core and achieve our goal of Free and Fair Elections.

Google is Bringing Project Loon to Indonesia


The challenge of providing adequate Internet service in countries with vast populations that are spread out over large geographical areas is a difficult one. Rich and poor countries alike are dealing with this problem. The task of providing access to the Internet infrastructure is compounded in developing countries. Not only do these countries face the burden of delivering broadband services to a large population that is spread over numerous remote islands and is isolated by mountainous terrain, but even if the geographic conditions were ideal, the Internet infrastructure is typically underdeveloped and insufficient to meet the growing population’s needs.

Satellite Connectivity

In many cases satellites have been utilized to enable developing countries to leapfrog in their Internet infrastructure development. Many developing countries tend to lack much of the traditional terrestrial infrastructure such as cable, fiber and other critical equipment, facilities and resources that have been invested in and deployed in the broadband infrastructure of developed countries over several decades. Satellites provide developing countries with the potential to bypass the expense and resources involved in more typical terrestrial Internet infrastructure development.

However, satellite technologies present many disadvantages as well. For example, there are line-of-sight limitations, which make broadband service over satellite unsuitable for mountainous areas where the rugged terrain gets in the way of the signal. Moreover, the distances that the signal has to travel on satellite systems make them less than ideal for today’s high-speed Internet networks.

Project Loon

Google Asia Pacific recently announced a technological solution to the intractable problem of providing Internet access in countries without sufficient existing broadband infrastructure. This technology, called Project Loon, is designed to provide Internet services in Indonesia via high-altitude balloons that act like floating mobile towers. While planning for Project Loon began over two years ago, Google was recently able to announce that Indonesia’s three largest mobile operators – Indosat, Telkomsel and XL Axiata – will begin testing balloon-powered internet services.

Indonesia has the geographic and demographic traits that make it an ideal fit for Google’s Project Loon. For example, it has a population of over 250 million that is spread out over 740,000 square miles and more than 17,000 islands. Moreover its mountainous terrain and large swaths of land covered by jungle create the types of limitations to the provision of sufficient broadband access that Project Loon’s technology is specifically engineered to address.

The introduction of this project should pose numerous benefits for Indonesia. In terms of Internet connectivity, Indonesia lags behind many developing nations in Asia and around the world. In a recent study conducted by the Internet Society, Indonesia ranks 135th in the world with 15.8 percent internet user penetration. This project should help to improve this ranking. Another benefit is that the project would not be dependent on the time-consuming and expensive process of allocating spectrum just for Project Loon. The three participating mobile operators – Indosat, Telkomsel and XL Axiata – agreed to contribute their own stockpiles of spectrum for this project.

By Siddhartha Menon, a Research Developer and Social Media Strategist

Overcoming Connectivity Challenges in Rural Schools with Content Servers


Many schools in Asia Pacific have deployed computer labs to increase ICT literacy. However, it is challenging for schools to move beyond teaching basic computer skills such as typing and office productivity tools without good internet access. There are incredibly rich educational resources available in the cloud, including Wikipedia, Khan Academy, and PhET. Unfortunately, remote schools either lack adequate access to the internet or cannot afford it due to the high cost of broadband connectivity.

So, how can schools, particularly remote and rural schools, move beyond teaching basic ICT literacy and integrate ICT into core subjects such as English, Math and Science? The answer is a content server.

Content Server Options

While the concept of a content server is not new, it has evolved in recent years with more affordable hardware and relevant content. There are two key features that define a content server. First, it has to be simple to use. Second, it has to be robust and require zero or little maintenance.

Students and teachers access the content server using a standard web browser on a device that is available to them, whether it be a desktop, laptop, tablet or even a smart phone. Content on the server must not require access to the internet, yet it must provide an experience that mirrors applications on the net.

Nowadays, more and more such content is available. For example, Kiwix started by making Wikipedia available in an offline mode but has now moved to offer additional resources such as Wiktionary and TED® talks. KA-LITE from Learning Equality has developed a version of Khan Academy that is easily deployed on a local content server and provides a similar learning experience to the cloud version of Khan Academy. PhET has a server version that allows their rich simulation apps to be accessed in the browser. World Possible has curated content and made it available through Rachel, their off-line educational content portal.

These are just some examples of popular cloud education resources that are now available in versions suitable for a content server. Often it only takes some minor tweaks to migrate country-specific educational content into a format suitable to be placed on a content server.

A content server must be robust and require little or no maintenance. Schools do not have the IT support to maintain a traditional server. Thus, ideally a content server only has an on/off button, with no keyboard and monitor. This prevents “misuse”, which can introduce viruses or unintended configuration changes. For example, I have seen “servers” used as an additional desktop, only to have configurations changed inadvertently, rendering the server unusable.

A content server is not a traditional server requiring heavy “server” hardware. It can be implemented on a range of hardware, including small form factor desktops, network-attached storage devices, or even a Raspberry Pi. One interesting implementation is an integrated unit, the Intel Education Content Access Point, which combines a server with a Wi-Fi access point, a 3G/4G connection and a battery, making it ideal for schools with unreliable power. Most implementations run a version of the Linux operating system because of its stability and small footprint.

Content Server Deployment


So what does a content server look like in practice, and what impact does it have? Marilog Elementary School is a primary school in Mindanao, Philippines. While not remote, the school has no Internet access and cell phone coverage is poor. The school has had a small computer lab for several years. Two years ago, the school received a content server along with several tablets.

In this implementation the content server was a C3 unit from Critical Links: a small form factor desktop, measuring only 12 cm x 12 cm x 5 cm, that integrates a server with a Wi-Fi access point, so students can connect to it directly without a separate WLAN. A major advantage of the C3 is that its content can be refreshed without on-site IT work: once in a while the server is connected to the Internet at an office and its content is updated.
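The C3’s actual update mechanism is not described here, but conceptually such a periodic refresh amounts to mirroring a published content folder whenever the box has connectivity. The sketch below illustrates the idea with an assumed manifest URL and file layout; it is not Critical Links’ implementation.

    # Illustrative sketch only: refresh local content when the server happens
    # to have Internet access (for example, while connected at an office).
    # The manifest URL and file layout are assumptions, not the C3's actual
    # update mechanism. The manifest is assumed to map relative file paths to
    # version strings, e.g. {"files": {"wiki/index.html": "2015-10-01"}}.
    import json
    import os
    import urllib.request

    MANIFEST_URL = "https://example.org/content/manifest.json"  # assumed
    BASE_URL = "https://example.org/content/"                   # assumed
    CONTENT_DIR = "/srv/offline-content"

    def update_content():
        with urllib.request.urlopen(MANIFEST_URL, timeout=30) as resp:
            manifest = json.load(resp)
        for path, version in manifest["files"].items():
            local = os.path.join(CONTENT_DIR, path)
            marker = local + ".version"
            # Skip files already at the published version.
            if os.path.exists(marker) and open(marker).read() == version:
                continue
            os.makedirs(os.path.dirname(local), exist_ok=True)
            urllib.request.urlretrieve(BASE_URL + path, local)  # fetch new/changed file
            with open(marker, "w") as m:
                m.write(version)

    if __name__ == "__main__":
        update_content()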

Students access the content using tablets. The offline version of Wikipedia has proven to be one of the most popular applications with both students and teachers. Students use it as a traditional encyclopedia, while teachers particularly like the photos within Wikipedia for supplementing their teaching materials. More recently, teachers have started augmenting the content server by posting their own educational documents, giving them an easy and efficient way to distribute information to their students.

The concept of a content server is not new, but technology has evolved to the point where a content server can run on very affordable hardware and requires minimal maintenance, so schools without Internet access and without IT resources can now provide some of the educational applications of the 21st century.

Bernd Nordhausen advises organizations and governments on how to effectively utilize technology to bridge the digital divide.

Which Are Better: Computers or Mobile Tablets for Education in Rural India?


Rural India presents a challenging context for quality education. Rural areas lack basic infrastructure, and the universalization of schooling in India is one of the most urgent development issues in the world today. The major challenges to quality education in rural areas include:

  • Student absenteeism
  • Teacher absenteeism
  • A shortage of good teachers
  • An absence of schools altogether.

These issues cannot be solved easily, as India has some 600,000 villages. Information and Communication Technologies (ICTs) have raised hopes of tackling these challenges to some extent.

ICTs offer greater access to quality learning resources, and the Internet is the leading technology for reaching educational resources. Rural areas, however, are still unable to exploit the benefits of the Internet because the necessary technological infrastructure is absent.

Two ICTs seem to have the greatest potential to deliver quality education in rural areas: computers and mobile tablets. Both can be taken to rural areas relatively easily.

The i-Saksham Project

Recognizing the potential of ICTs, a group of entrepreneurs in the Jamui and Munger districts of Bihar, India, has implemented the i-Saksham project in remote rural areas. The project aims to provide access to quality education using mobile tablets loaded with educational content relevant to primary education.

A local educated youth is selected and trained to use a mobile tablet to deliver primary education to village children. In return, these tutors charge fees to the children’s parents and, in turn, pay a fixed amount to the project implementers.

Tablets vs. Computers

The main reasons for using mobile tablets, in comparison to computers, as a platform for the initiative are:

  1. Computers need more maintenance than tablets and cannot easily be moved from place to place, whereas tablets are highly portable. If there is a hardware issue, a tablet can easily be taken to the project’s main centre; this is much harder with a computer.
  2. Rural areas often lack reliable electricity, which makes it difficult to run computers. Once charged, a tablet can be used at any time of the day or night.
  3. A computer plus accessories such as a UPS increases the investment cost, which affects the overall sustainability of the project. As the number of tutors rises, the required investment rises much faster if computers are used.
  4. The mobile tablets used in the project are not very expensive: a tablet with basic features costs around Rs. 4,000-5,000 (roughly US$60-77 at Rs. 65 to the US dollar, as of 22/10/2015).
  5. With a single computer, not all students in a class can take an active part in learning. With tablets, students can work in groups and play educational games, and as the number of tablets grows, each student’s interaction with the technology and content increases.

Computers as a delivery medium do have advantages over mobile tablets, such as better video and audio quality, larger storage for content, and better performance. Computers also give tutors an opportunity to enhance their computer skills.

However, in rural areas, mobile tablets, with their portability, lower cost and lighter maintenance needs, provide a more sustainable technological solution to the challenge of quality education. The tablets complement books during a teaching session, and the availability of educational apps enhances the tablet’s overall value as a medium for education.

Nevertheless, content remains a major challenge for the project. Localized content is needed so that students learn faster and better.

Gaurav Mishra is an Assistant Professor at Development Management Institute (DMI) – Patna

National Portal Delivers eServices for Bangladesh Citizens


The Government of Bangladesh has made substantial strides towards achieving its long-term Perspective Plan (2010-2021) by introducing the National Portal, or NP, which is primarily intended to serve as an information dissemination mechanism for the population, especially the underserved.

The National Portal’s journey started in 2007 when the government introduced a central portal by way of a preliminary endeavour. In 2010, a countrywide initiative was undertaken to introduce portals for all of the country’s 64 districts. Based on the lessons learned from these experiences, the ‘Guidelines on Content Preparation’ and ‘Training Guidelines’ on the same subject were prepared to widen the portals’ scope and reach.

Subsequently, some 22,000 government officials were trained in developing and maintaining the Portal, creating an enabling environment to further advance the effort. Finally, in 2013 and 2014, some 25,000+ websites, adhering to a common architecture, design and content structure, were integrated within the National Portal and introduced across all tiers of public office (Union Parishad, the lowest tier of local government; Upazila, or sub-district; district; division; directorate; and ministry).

In the new National Portal, citizens are finding a convenient channel for obtaining information from public offices at lower cost and with less hassle. The Portal is also mobile-friendly, thereby ensuring greater access to information since the country enjoys over 70 per cent mobile penetration, with over 80 per cent of Internet access happening over mobile phones. Citizens who are unable to access the websites directly can go to the nearest digital centre, of which there are some 5000+ countrywide.

It is worth noting here that there are complementary initiatives in progress to upgrade the Portal further so that it can host all electronic versions of government services. Mobile applications are also being developed to make it easily accessible to persons with disabilities.

At present, some 100+ services (selected on the basis of importance and public demand), including online passport applications and electricity bill payments, have already been incorporated, and more services will soon be fully automated and provided via the Portal further to a mandatory government directive that will shortly be coming into effect.

Compiled from WSIS Stocktaking: Success Stories 2015

ISIF Asia Award Winners for 2015 announced and Community Choice Award open

The Awards recognize organizations’ initiatives that have already been implemented, or are in the final stages of implementation, and that have been successful in addressing their communities’ needs.

During the 2015 call for nominations, four award winners were selected out of the 78 nominations received across four categories, covering 12 economies in the Asia Pacific. Proposals from Bangladesh, Cambodia, China, India, Indonesia, Malaysia, Nepal, Pakistan, Philippines, Singapore, Sri Lanka and Thailand were assessed by the Selection Committee.

The commitment and continuous support of the Selection Committee in choosing the best projects is key to the legitimacy of this award. We thank Phet Sayo (IDRC), Gaurab Raj Upadhaya (APNIC EC), Rajnesh Singh (Internet Society), Edmon Chung (Dot Asia Organization), George Michaelson (APNIC staff), and David Rowe (ROWETEL, former ISIF Asia grant recipient) for their time, their comments and their eye for detail.

Each winner has received a cash prize of AUD 3,000 to support their work and a travel grant for a project representative to attend the 10th Internet Governance Forum (Joao Pessoa, Brazil – November 2015), where they will take part in the awards ceremony, showcase their project, make new professional contacts, and join discussions about the future of the Internet.

This year it was particularly interesting to receive an application from China, the first since the inception of the ISIF Asia program.

31 applications were accepted into the selection process and are publicly available for anyone interested in learning more about the ingenuity and practical approaches that originate in our region. 16 applications were selected as finalists.

53% of nominations came from the private sector and social enterprises, 24% from non-profits, 13% from the academic sector and 10% from government agencies.

The category that received the most applications was Innovation on learning and localization (38%), followed by Code for the common good (28%), Rights (24%) and Innovation on access provision (9%).

86% of the nominated projects are led by men and only 14% by women.

One winner was awarded in each category: three are from non-profits and one from the private sector, and three of the winning projects will be represented by women at the Awards Ceremony.

One of the four award winners will also receive the Community Choice Award, an additional AUD 1,000 for the project with the most online votes from the community. Online voting opened on 9 September and runs until 9 November, and the winner of the Community Choice Award will be announced at the Awards ceremony. Cast your vote and support the winners!


Award winners were selected in four categories, as follows:

  • Innovation on access provision: doctHERs – Pakistan, NAYA JEEVAN. doctHERs is a novel healthcare marketplace that connects home-restricted female doctors to millions of underserved patients in real time by leveraging technology. doctHERs circumvents the socio-cultural barriers that restrict women to their homes, while correcting two market failures: access to quality healthcare and women’s inclusion in the workforce. doctHERs leapfrogs traditional market approaches to healthcare delivery and drives innovative systems change.
  • Code for the common good: Batik Fractal – Indonesia, Piksel Indonesia Company. Piksel Indonesia is a creative social enterprise founded in 2007 and registered as a legal entity in 2009. It is the creator of Batik Fractal and the jBatik software. Following a year-long research project on batik and science, the team developed a modelling application that generates batik designs and presented the innovation at the 10th Generative Art International Conference in Milan, Italy. In 2008, the innovation received funding from the Business Innovation Fund SENADA USAID, which supported the creation of jBatik v1 and an initial focus on empowering batik artisans in Bandung. Since then, Piksel Indonesia has worked to empower batik and craft artisans across Indonesia, especially in Java and Bali, and has trained around 1,400 artisans to use the jBatik software. The training was initially organized by local governments in the rural areas and villages where batik artisans live; bringing the software into a traditional art form requires intensive training and sustained effort. Through several levels of training in jBatik, artisans learn to incorporate technology into their traditional craft. The artisans not only gain access to affordable technology for developing their batik, but have also been shown to increase productivity and sales, raising their profits and improving their incomes.
  • Innovation on learning and localization: Jaroka Mobile Based Tele-Healthcare – Pakistan, UM Healthcare Trust. We aim to devise newer and more effective ways to bring rapid change to healthcare for rural communities. We launched Jaroka to dramatically lower the cost of delivering care by leveraging ICT to deliver the scarcest resource, medical expertise, remotely. The Jaroka tele-healthcare model uses Internet and mobile platforms to extend tele-healthcare services in rural Pakistan, including voice, Short Message Service (SMS), Multimedia Messaging (MMS), GPRS/EDGE and VSAT, to quickly and efficiently extend medical advice to Rural Health Workers (RHWs) in the field by connecting them to our network of specialists in cities and abroad. The model also includes Pakistan’s first Health Map, through which the latest live healthcare information is shared with relevant stakeholders across Pakistan to improve healthcare. Through this project, over 130,000 patients have been treated at hospitals and in the field.
  • Rights: I Change My City – India, Janaagraha Centre for Citizenship and Democracy. Ichangemycity.com is a hyper-local social change network that has created communities of citizens in Bengaluru who are keen to solve city-centric problems, and it has resolved around 10,000 complaints by connecting citizens to various government agencies. The site has helped solve issues ranging from garbage collection, poor street lighting and potholes to security-related issues in the suburbs. It has also provided citizens with useful information on how much funding has been allocated to wards and constituencies and how it has been utilised. The unique power of ichangemycity.com is that it networks people locally to address issues of common concern: it connects people online to bring them together offline for civic engagement on the ground. The multiplicity of government departments and the paperwork involved deter many individuals from engaging with civic agencies; ichangemycity.com addresses this problem by acting as a seamless bridge between government and citizens. Ichangemycity.com works on the 4C mantra: Complaint, Community, Connect, and Content.

Event Participation: APrIGF 2015


The Asia Pacific Regional Internet Governance Forum (APrIGF) 2015 was held from 30 June to 3 July at Macau University of Science & Technology, Macao, hosted by HNET.Asia and the Macau High Technology Industry Chamber. Gathering 150 regional delegates, including 30 youth participants, APrIGF 2015 continued the objective of advancing Internet governance development and engaging the next generation of Internet leaders; see http://2015.rigf.asia/. Recordings of each session are available for download at http://2015.rigf.asia/archives/. One aspect to highlight about the overall program is that there were more workshops discussing human rights and gender than last year.

Four ISIF Asia funding recipients participated at the conference thanks to the support from partners and sponsors. Bishakha Datta from Point of View (India), Nica Dumlao from Foundation for Media Alternatives (Philippines), Jonathan Brewer from Telco2/Network Startup Resource Center (New Zealand), and Ulrich Speidel from University of Auckland (New Zealand) shared their views and experiences as part of the program of the event.

Bishakha Datta (Point of View, India) represented civil society groups at the Opening Plenary. PoV works to amplify the voices of women and remove barriers to free speech and expression.

Nica and her colleagues from the all-Filipina delegation, actively involved in IG discussions

Nica Dumlao (Foundation for Media Alternatives, Philippines) is FMA’s Internet Rights Coordinator, working at the intersection of technology and human rights in the Philippines. Nica has been very active in Internet governance both globally and regionally, contributing her experience at two global IGFs and two APrIGFs. FMA organized two sessions: 1) “Gender and Internet Exchange (gigX)”, a gender pre-event workshop run in collaboration with the Association for Progressive Communications (APC), and 2) “Human Rights & Governance in ASEAN Cyberspace”. She was also a panelist in three sessions of the main program: “Threats in Expression in Asia”, “Bridging Gender Digital Divide”, and “Localizing Internet Governance”. In these sessions she shared FMA’s experience working on digital rights, privacy, and women’s rights in the Philippines, and gained insight into how other panelists and participants view Internet rights and governance issues in the region. “The APrIGF provided a space for us Filipinos to have meaningful exchange with other stakeholders in the region and to plan for further collaboration”, Nica reported.

Jonathan Brewer (Telco2/NSRC, New Zealand) attended several sessions and also presented in the session entitled “Broadband Infrastructure and Services for the Next Billion Users”. In his words, Jon left the sessions “enriched with new information, viewpoints, and concerns”. Some highlights from the sessions he attended were as follows:

  • In the session “Universal Acceptance: Been there, done that”, which discussed Internationalized Domain Names, it was highlighted that one of the main challenges for application developers is support for multiple languages.
  • “Net Neutrality in the Asia-Pacific” discussed a range of separate but inter-related topics including Network Neutrality, Peering, Sending Party Pays, and Zero Rating.
  • “Smart Cities in Asia and the Deployment of Big Data: Privacy and Security Challenges” provided an overview of Internet of Things (IoT) technologies and how their increased use in cities could have significant impacts on privacy and security for residents of these cities.
Prof. Ang Peng Hwa from NTU in Singapore, presenting during one of the APrIGF sessions in Macau

Ulrich Speidel (University of Auckland, New Zealand) shared his experiences implementing network-coded TCP to improve the Internet user experience in Pacific Island countries. Ulrich is a firm believer that network coding can help improve goodput in places where TCP struggles to cope with high latency across bottlenecks shared by a large number of TCP senders, and the APrIGF was a great place to present these exciting developments, funded by ISIF Asia along with other partners. He attended several sessions and highlighted the following: “Broadband Infrastructure and Services for the Next Billion Users” (in which he was also one of the speakers), “Information Security and Privacy in the IoT Era”, and “Smart Cities in Asia and the Deployment of Big Data”.

Surprisingly, although both are based in New Zealand and both work on Internet infrastructure issues, Jon and Ulrich had not collaborated in the past. Thanks to the opportunity to attend APrIGF 2015, they later visited Auckland together and commissioned a new RIPE Atlas probe as part of the deployment of network coding equipment on Aitutaki in the Cook Islands.

The APrIGF took up the challenge of producing, for the first time, an “Outcomes Document” that aims to identify the key issues and priorities within the Asia Pacific region discussed at the conference. The final document will serve as input to the wider global IGF discussions as well as other relevant forums on Internet governance. The document was open for comments at http://comment.rigf.asia to reflect the community’s views and to encourage participation and engagement in Internet governance issues, and the final Outcomes Document is expected to be completed by 14 August 2015.