Recent blog entries

16 Jun 2017 LaForge   » (Master)

FOSS misconceptions, still in 2017

The lack of basic FOSS understanding in Telecom

Given that the Free and Open Source movement has been around at least since the 1980s, it puzzles me that people still seem to have such fundamental misconceptions about it.

Something that really triggered me was an article at LightReading [1] which quotes Ulf Ewaldsson, a leading Ericsson executive, with:

"I have yet to understand why we would open source something we think is really good software"

This completely misses the point. FOSS is not about making a charity donation of a finished product to the planet.

FOSS is about sharing the development costs among multiple players, and avoiding everyone having to reimplement the wheel. Macro-economically, it is complete and utter nonsense that each 3GPP specification gets implemented two dozen times, by at least a dozen different entities. As a result, products are far more expensive than they need to be.

If large Telco players (whether operators or equipment manufacturers) were to collaboratively develop code just as much as they collaboratively develop the protocol specifications, there would be no need for replicating all of this work.

As a result, everyone could produce cellular network elements at reduced cost, sharing the R&D expenses, and competing in key areas, such as who can come up with the most energy-efficient implementation, the most reliable hardware, the best receiver sensitivity, the best and fairest scheduling implementation, or whatever else. But some 80% of the code could probably be shared, as e.g. encoding and decoding messages according to a given publicly released 3GPP specification document is not where those equipment suppliers actually compete.

So my dear cellular operator executives: Next time you're cursing about the prohibitively expensive pricing that your equipment suppliers quote you: You only have to pay that much because everyone is reimplementing the wheel over and over again.

Equally, my dear cellular infrastructure suppliers: You are all dying one by one, as it's hard to develop everything from scratch. Over the years, many of you have died. One wonders whether we might still have more players left if some of you had started to cooperate in developing FOSS, at least in those areas where you're not competing. You could replicate what Linux is doing in the operating system market. There's no need for a phalanx of different proprietary flavors of Unix-like OSs; it's way too expensive, and it's not an area in which most companies need to or want to compete anyway.

Management Summary

You don't first develop an entire product until it is finished and then release it as open source. This makes little economic sense in a lot of cases, as you've already invested in developing 100% of it. Instead, you actually develop a new product collaboratively as FOSS in order to not have to invest 100% but maybe only 30% or even less. You get a multiple of your R&D investment back, because you're not only getting your own code, but also all the code that other community members implemented. You of course also get other benefits, such as peer review of the code, more ideas (not all bright people work inside one given company), etc.

[1] That article is actually a heavily opinionated post by somebody who appears to have been pushing his own anti-FOSS agenda for some time. The author is misinformed about the fact that the TIP has always included projects under both FRAND and FOSS terms; as a TIP member, I can attest to that. I'm only referencing it here for the sake of that Ericsson quote.

Syndicated 2017-06-15 22:00:00 from LaForge's home page

16 Jun 2017 Pizza   » (Master)

Mitsubishi CP-D70 family, now with more Raspberry Pi Goodness!

Back in March, my reverse-engineered processing library for the Mitsubishi CP-D70DW family of printers reached quality parity with the official Windows/OSX drivers and their SDK. This was a huge milestone; it's now possible to drive these printers using entirely Free Software, with no quality compromises, on non-x86 CPU architectures. To do this, you will need Gutenprint 5.2.13-pre1 or newer, along with an up-to-date lib70x snapshot.

Here are the happy models:

  • Mitsubishi CP-D70DW
  • Mitsubishi CP-D707DW
  • Mitsubishi CP-D80DW
  • Mitsubishi CP-K60DW-S
  • Kodak 305
  • Fujifilm ASK-300 [completely untested]

I held off on announcing this achievement due to one rather annoying problem. It seems most folks interested in using this family of printers wanted to do so using the popular Raspberry Pi systems.
Unfortunately, for (at the time) unknown reasons, the CP-D70DW spectacularly failed to print with RPi systems, with the image data failing to transfer to the printer.

To further complicate things, I was unable to recreate the problem on the Kodak 305 (aka a rebadged CP-K60DW-S) and an RPi2. Software-based sniffs were useless, and things stayed at this impasse for quite some time.

What finally broke the logjam was one particularly motivated user running into a similar problem with an RPi3 and a CP-D80DW model -- but only when there was nothing plugged into the ethernet controller. This was a bizarre development, and a working theory that there was something funky going on with the RPi's onboard SMSC USB hub/ethernet chip began to develop.

He convinced Mitsubishi/Europe that it was in their interests to help figure this problem out, and I soon found myself with a CP-D80DW and several boxes of media on my doorstep and a backchannel to their technical team.

Try as I might, I was unable to recreate the problem on an RPi1 or RPi2, but on an RPi3, I was finally able to trigger the failure. Armed with a true USB sniffer and a copy of the USB spec, I was able to rapidly identify what was actually going on.

The USB 2.0 spec (section 8.5.1) dictates that, when in high-speed mode, a flow-control mechanism (NYET/PING/NAK/ACK) be used to maximize transfer throughput. NYET is sent by the device after a successful transfer to signal that it can't accept more data yet. The controller then polls the device with PING requests, and the printer responds with NAK until it's ready, at which point it responds with ACK.

The problem, in a nutshell, is that the RPi's USB controller ends up sending the PING requests so rapidly (about 400 nanoseconds apart) that it effectively triggers a denial-of-service (DoS) situation in the printer. The funny thing is that the RPi's controller starts with a PING interval of about 1.2 microseconds, and everything is fine until it pulls in the timing, triggering the DoS.

(A regular PC with an xHCI controller demonstrates variable PING timing; it starts at about 400ns, but backs off to about a 4us interval)

If other USB traffic is present, then the bus is too busy to send the PINGs so rapidly, and the printer is fine. Notably, simply establishing a physical link with the onboard (USB-attached) ethernet controller (even if the interface is "down") generates sufficient traffic to mitigate the problem. Plugging in a thumbdrive and dumping data back and forth is also more than adequate.

I've reported this problem to both Mitsubishi and the RPi kernel folks.
I don't think one can expect much progress on the root cause of this problem, but the above workarounds are at least adequate for using these printers with RPi systems.

...Enjoy, folks, and happy printing.

Syndicated 2017-06-15 23:04:06 from I Dream of Rain (free_software)

15 Jun 2017 zeenix   » (Journeyer)

Help me test gps-share

For gps-share to be useful to people, it needs to be tested against various GPS dongles. If you have a GPS dongle, I'd appreciate it if you could test gps-share. If you don't use the hardware, please consider donating it to me and that way I'll ensure that it keeps working with gps-share.

Thanks!

Syndicated 2017-06-15 09:04:00 (Updated 2017-06-15 09:04:25) from zeenix

13 Jun 2017 rcaden   » (Journeyer)

Welcome, Readers of the Future

I'm working on the next edition of Sams' Teach Yourself Java in 24 Hours. Java 9 has a new HTTP client package, jdk.incubator.http, that makes it a lot easier to GET and POST to web servers and other software that communicates over HTTP.

For a demo, I needed a simple server that could take POST requests and do something with them without requiring a user login. I was about to write one when I realized I already had. This blog takes comments submitted over POST.

When the book comes out, I'll be able to see from these comments that readers have reached Hour 22.

Syndicated 2017-06-13 00:19:21 from Workbench

6 Jun 2017 olea   » (Master)

Almería bid for hosting GUADEC in 2018

Well, «Alea iacta est». The deadline for bidding to host GUADEC 2018 closed on 4 June. And we proposed Almería.

GUADEC is the GNOME Project's annual developer conference for the European region, though it regularly draws attendees from Asia and the Americas as well. This is another step in promoting Almería as an international technology hub and in encouraging new local techies, bringing students and professionals closer to world-class development communities and to the ethos and practice of opensource development. In the age of the «GitHub portfolio» as a developer's CV, practicing opensource development is probably the best way to train new programmers, to improve their employability, and to learn development practices alongside veteran programmers and mature communities. And it's important to show how opensource software is a key component of our digitally connected society.

Again, our legal instrument, absolutely key to these activities, is the UNIA CS students' association at the University of Almería. The same association I helped found in 1993 is living a new golden age thanks to a new group of enthusiastic members. Their support and collaboration made it possible to host PyConES 2016, which, as far as we know, has been the biggest software development conference ever held in our city. That conference was a moral milestone for us, and now we feel we can, at the very least, host other world-class conferences with the appropriate quality. Another key component of these proposals is the current university campus endowment and the support of the present university governing team, to whom we are grateful for their help and support. This is another step to increase the projection of the University of Almería as an international knowledge hub.

Finally, I want to express my hope of making a significant contribution to the GNOME project, with which I have been involved for more than a decade. Hopefully by 2018 I will have updated my pending GNOME packages in Fedora 🙈

So, here it is, the Almería candidacy to host GUADEC 2018.

Syndicated 2017-06-05 22:00:00 from Ismael Olea

6 Jun 2017 benad   » (Apprentice)

Pull The Plug Testing

You’ve head about the “Have you tried turning it off and on again” method of fixing software, but what if rebooting your computer actually made the problem worse? System administrators often brag about their computers’ uptime, but what if that’s to hide that they never tested rebooting their systems?

Contrast this to how processes are handled on iOS and Android: they can be killed at any time for any reason, and in fact some users have taken up the habit of killing background processes at regular intervals to save on battery life. The net effect is that the system as a whole is more stable and doesn't have to be rebooted often; when was the last time you rebooted your phone?

On an individual process level, the risk is corruption of the process's external state, wherever it is stored. If the external state is transient, for example the other process in a client-server system, then you can restart everything involved. For storage, the solution can be as simple as using a transaction-aware storage library, such as BerkeleyDB or SQLite. In fact, SQLite is the primary mechanism in iOS and Android for safely storing app data, as it is quite reliable and works quite well on small embedded devices.

I would go as far as recommending SQLite for reliably storing files in all software, unless you have to use a specific format or by design you don't care about data loss. While SQLite cannot guarantee protection against "data rot" if random bytes on the storage are changed, or against data loss if the process is killed in the middle of a transaction, at least the data can be recovered to a stable state. Rolling your own logic for safely saving files is a lot more difficult than it seems, so at the very least SQLite is a good place to start.
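To make that concrete, here is a minimal sketch of the "let SQLite worry about it" approach, using Python's built-in sqlite3 module; the file name, table and keys are invented for the example:

import sqlite3

def save_settings(path, settings):
    """Persist a dict of settings atomically: either every row lands, or none do."""
    conn = sqlite3.connect(path)
    try:
        with conn:  # opens a transaction; commits on success, rolls back on any error
            conn.execute(
                "CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")
            conn.executemany(
                "INSERT OR REPLACE INTO settings (key, value) VALUES (?, ?)",
                settings.items())
    finally:
        conn.close()

# If the process is killed mid-write, SQLite's journal rolls the database back to the
# previous consistent state on the next open; there is no half-written file to repair.
save_settings("app_state.db", {"volume": "0.8", "theme": "dark"})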

And what about rebooting the whole system? We don’t think twice about powering off the embedded systems running on our TVs and other home appliances, yet for PCs we have to carefully shut down and reboot them otherwise they can just break. Windows still has those messages about not turning off your PC during system updates, and even during normal use I’ve had my fair share of Windows corrupting itself because I turned off the machine at unexpected times.

To protect against bad system updates or reboots, you can use a file system that supports creating complete "snapshots" (instant backups) of the system's files. This is something that is commonly done with ZFS on Solaris 10, and could be done with ZFS on Linux or with Btrfs. If your storage has some redundancy, then using RAID can not only prevent data loss, but in some setups the computer will keep running as long as enough redundant drives are still available. The last time I set up a storage server with OpenSolaris and ZFS, one of my tests was to literally pull the plug out of the drives while the system was running, and everything kept running without interruption. It's a scary test, but it's worth doing before storing valuable data on it.

On larger-scale clusters of servers, setting up the system in “high availability” is one thing, but few are confident enough to use a process to randomly kill a server from time to time in production, like Netflix does with its “Chaos Monkey”. In fact, you can take advantage of reliable redundancy to deploy gradual updates in production, be it to replace your drives with bigger ones on ZFS or to deploy security patches on your cluster of servers.

Syndicated 2017-06-05 23:12:27 from Benad's Blog

5 Jun 2017 MikeGTN   » (Journeyer)

Tom Jeffreys - Signal Failure

Somewhere in the chaos and confusion of elections, referenda, terrorist attacks and transatlantic tumult in the last several years, HS2 has all but disappeared from the debate. This emblem of a modern Britain dragging itself towards the future was to scythe across green and pleasant England, linking London and Birmingham in hitherto unimaginably short journey times. Despite being a tiny country, we would finally reap the benefits of High Speed rail as a way of linking the metropolis of London with first the rapidly gentrifying Midlands, then the Northern Powerhouse. Eventually, it might even reach Scotland - still in the...

Syndicated 2017-06-05 15:06:00 from Lost::MikeGTN

2 Jun 2017 MikeGTN   » (Journeyer)

Carol Donaldson - On The Marshes

My relationship with Kent has been a troubled one over the years. I remember deciding on a railway excursion in the mid 1990s, somewhere on the interminably dull straight line between Tonbridge and Ashford on some dirty old slam-door stock, that I was convinced I didn't like Kent at all. However, over the years as I've explored more I've begun to truly appreciate the diversity of Kent. The sheer size of the county, taking in the broad sweep of land from the English Channel to the Thames Estuary, means that my early churlish observations were of course utterly invalid when...

Syndicated 2017-05-21 14:05:00 from Lost::MikeGTN

1 Jun 2017 glyph   » (Master)

The Sororicide Antipattern

“Composition is better than inheritance.” This is a true statement. “Inheritance is bad.” Also true. I’m a well-known compositional extremist. There’s a great talk you can watch if I haven’t talked your ear off about it already.

Which is why I was extremely surprised in a recent conversation when my interlocutor said that while inheritance might be bad, composition is worse. Once I understood what they meant by “composition”, I was even more surprised to find that I agreed with this assertion.

Although inheritance is bad, it’s very important to understand why. In a high-level language like Python, with first-class runtime datatypes (i.e.: user defined classes that are objects), the computational difference between what we call “composition” and what we call “inheritance” is a matter of where we put a pointer: is it on a type or on an instance? The important distinction has to do with human factors.

First, a brief parable about real-life inheritance.


You find yourself in conversation with an indolent heiress-in-waiting. She complains of her boredom whiling away the time until the dowager countess finally leaves her her fortune.

“Inheritance is bad”, you opine. “It’s better to make your own way in life”.

“By George, you’re right!” she exclaims. You weren’t expecting such an enthusiastic reversal.

“Well,” you sputter, “glad to see you are turning over a new leaf”.

She crosses the room to open a sturdy mahogany armoire, and draws forth a belt holstering a pistol and a menacing-looking sabre.

“Auntie has only the dwindling remnants of a legacy fortune. The real money has always been with my sister’s manufacturing concern. Why passively wait for Auntie to die, when I can murder my dear sister now, and take what is rightfully mine!”

Cinching the belt around her waist, she strides from the room animated and full of purpose, no longer indolent or in-waiting, but you feel less than satisfied with your advice.

It is, after all, important to understand what the problem with inheritance is.


The primary reason inheritance is bad is confusion between namespaces.

The most important role of code organization (division of code into files, modules, packages, subroutines, data structures, etc) is division of responsibility. In other words, Conway’s Law isn’t just an unfortunate accident of budgeting, but a fundamental property of software design.

For example, if we have a function called multiply(a, b) - its presence in our codebase suggests that if someone were to want to multiply two numbers together, it is multiply’s responsibility to know how to do so. If there’s a problem with multiplication, it’s the maintainers of multiply who need to go fix it.

And, with this responsibility comes authority over a specific scope within the code. So if we were to look at an implementation of multiply:

def multiply(a, b):
    product = a * b
    return product

The maintainers of multiply get to decide what product means in the context of their function. It’s possible, in Python, for some other function to reach into multiply with frame objects and mangle the meaning of product between its assignment and return, but it’s generally understood that it’s none of your business what product is, and if you touch it, all bets are off about the correctness of multiply. More importantly, if the maintainers of multiply wanted to bind other names, or change around existing names, like so, in a subsequent version:

def multiply(a, b):
    factor1 = a
    factor2 = b
    result = a * b
    return result

It is the maintainer of multiply’s job, not the caller of multiply, to make those decisions.

The same programmer may, at different times, be both a caller and a maintainer of multiply. However, they have to know which hat they’re wearing at any given time, so that they can know which stuff they’re still responsible for when they hand over multiply to be maintained by a different team.

It’s important to be able to forget about the internals of the local variables in the functions you call. Otherwise, abstractions give us no power: if you have to know the internals of everything you’re using, you can never build much beyond what’s already there, because you’ll be spending all your time trying to understand all the layers below it.

Classes complicate this process of forgetting somewhat. Properties of class instances “stick out”, and are visible to the callers. This can be powerful — and can be a great way to represent shared data structures — but this is exactly why we have the ._ convention in Python: if something starts with an underscore, and it’s not in a namespace you own, you shouldn’t mess with it. So: other._foo is not for you to touch, unless you’re maintaining type(other). self._foo is where you should put your own private state.

So if we have a class like this:

class A(object):
    def __init__(self):
        self._note = "a note"

we all know that A()._note is off-limits.

But then what happens here?

class B(A):
    def __init__(self):
        super().__init__()
        self._note = "private state for B()"

B()._note is also off limits for everyone but B, except... as it turns out, B doesn’t really own the namespace of self here, so it’s clashing with what A wants _note to mean. Even if, right now, we were to change it to _note2, the maintainer of A could, in any future release of A, add a new _note2 variable which conflicts with something B is using. A’s maintainers (rightfully) think they own self, B’s maintainers (reasonably) think that they do. This could continue all the way until we get to _note7, at which point it would explode violently.


So that’s why Inheritance is bad. It’s a bad way for two layers of a system to communicate because it leaves each layer nowhere to put its internal state that the other doesn’t need to know about. So what could be worse?

Let’s say we’ve convinced our junior programmer who wrote A that inheritance is a bad interface, and they should instead use the panacea that cures all inherited ills, composition. Great! Let’s just write a B that composes in an A in a nice clean way, instead of doing any gross inheritance:

class Bprime(object):
    def __init__(self, a):
        for var in dir(a):
            setattr(self, var, getattr(a, var))

Uh oh. Looks like composition is worse than inheritance.


Let’s enumerate some of the issues with this “solution” to the problem of inheritance:

  • How do we know what attributes Bprime has?
  • How do we even know what type a is?
  • How is anyone ever going to grep for relevant methods in this code and have them come up in the right place?

We briefly reclaimed self for Bprime by removing the inheritance from A, but what Bprime does in __init__ to replace it is much worse. At least with normal, “vertical” inheritance, IDEs and code inspection tools can have some idea where your parents are and what methods they declare. We have to look aside to know what’s there, but at least it’s clear from the code’s structure where exactly we have to look aside to.

When faced with a class like Bprime though, what does one do? It’s just shredding apart some apparently totally unrelated object; there’s nearly no way for tooling to inspect this code to the point that it knows where self.<something> comes from in a method defined on Bprime.

The goal of replacing inheritance with composition is to make it clear and easy to understand what code owns each attribute on self. Sometimes that clarity comes at the expense of a few extra keystrokes; an __init__ that copies over a few specific attributes, or a method that does nothing but forward a message, like def something(self): return self.other.something().
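For contrast with Bprime, a plain "few extra keystrokes" version might look like the sketch below; the class name Bcomposed and the forwarded something() method are illustrative only, not taken from the A defined earlier:

class Bcomposed(object):
    def __init__(self, other):
        self.other = other                            # exactly one attribute taken from outside
        self._note = "private state for Bcomposed()"  # no clash with whatever A calls its state

    def something(self):
        # explicit forwarding: readers and tools can see precisely where this goes
        return self.other.something()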

Automatic composition is just lateral inheritance. Magically auto-proxying all methods [1], or auto-copying all attributes, saves a few keystrokes at the time some new code is created, at the expense of hours of debugging when it is being maintained. If readability counts, we should never privilege the writer over the reader.


  1. It is left as an exercise for the reader why proxyForInterface is still a reasonably okay idea even in the face of this criticism. [2] 

  2. Although ironically it probably shouldn’t use inheritance as its interface. 

Syndicated 2017-06-01 06:25:00 from Deciphering Glyph

1 Jun 2017 pabs3   » (Master)

FLOSS Activities May 2017

Changes

Issues

Review

Administration

  • Debian: discuss mail bounces with a hoster, check perms of LE results, add 1 user to a group, re-send some TLS cert expiry mail, clean up mail bounce flood, approve some debian.net TLS certs, do the samhain dance thrice, end 1 samhain mail flood, diagnose/fix LDAP update issue, relay DebConf cert expiry mails, reboot 2 non-responsive VMs, merge patches for the debian.org-sources.debian.org meta-package
  • Debian mentors: lintian/security updates & reboot
  • Debian wiki: delete stray tmp file, whitelist 14 email addresses, disable 1 account with bouncing email, ping 3 people with bouncing email
  • Debian website: update/push index/CD/distrib
  • Debian QA: deploy my changes, disable some removed suites in qadb
  • Debian PTS: strip whitespace from existing pages, invalidate sigs so pages get a rebuild
  • Debian derivatives census: deploy changes
  • Openmoko: security updates & reboots.

Communication

  • Invite Purism (on IRC), XBian (also on IRC), DuZeru to the Debian derivatives census
  • Respond to the shutdown of Parsix
  • Report BlankOn fileserver and Huayra webserver issues
  • Organise a transition of Ubuntu/Endless Debian derivatives census maintainers
  • Advocate against Debian having a monopoly on hardware certification
  • Advocate working with existing merchandise vendors
  • Start a discussion about Debian membership in other organisations
  • Advocate for HPE to join the LVFS & support fwupd

Sponsors

All work was done on a volunteer basis.

Syndicated 2017-06-01 00:44:15 from Advogato

30 May 2017 zeenix   » (Journeyer)

Introducing gps-share

So yesterday, I rolled out the first release of gps-share.

gps-share is a utility for sharing your GPS device on the local network. It has two goals:

  • Share your GPS device on the local network so that all machines in your home or office can make use of it.
  • Enable support for standalone (i.e. not part of a cellular modem) GPS devices in Geoclue. Since Geoclue has been able to make use of network NMEA sources since 2015, gps-share works out of the box with Geoclue.

The latter means that it is a replacement for GPSD and Gypsy. While "why not GPSD?" has already been documented, Gypsy has been unmaintained for many years now. I did not feel like reviving a dead project, and I really wanted to code in Rust, so I decided to create gps-share.

Dependencies


While cargo manages the Rust crates gps-share depends on, you'll also need the following on your host:

  • libdbus
  • libudev
  • libcap
  • xz-libs

Supported devices


gps-share currently only supports GPS devices that present themselves as a serial port (RS232). Many USB devices are expected to work out of the box, but Bluetooth devices need manual intervention to be exposed as serial port devices through the rfcomm command. The following command worked on my Fedora 25 machine for a TomTom Wireless GPS MkII:


sudo rfcomm connect 0 00:0D:B5:70:54:75

gps-share can autodetect the device to use if it's already available as a serial port, but it assumes a baud rate of 38400. You can manually set the device node to use by passing its path as an argument, and set the baud rate using the '-b' command-line option. Pass '--help' for a full list of supported options.

Syndicated 2017-05-30 10:57:00 (Updated 2017-06-08 17:06:19) from zeenix

29 May 2017 LaForge   » (Master)

Playing back GSM RTP streams, RTP-HR bugs

Chapter 0: Problem Statement

In an all-IP GSM network, where we use Abis, A and other interfaces within the cellular network over IP transport, the audio of voice calls is transported inside RTP frames. The codec payload in those RTP frames is the actual codec frame of the respective cellular voice codec. In GSM, there are four relevant codecs: FR, HR, EFR and AMR.

Every so often during the (by now many years of) development of Osmocom cellular infrastructure software, it would have been useful to be able to quickly play back the audio for analysis of a given issue.

However, until now we didn't have that capability. The reason is relatively simple: In Osmocom, we generally don't do transcoding but simply pass the voice codec frames from left to right. They're only transcoded inside the phones or inside some external media gateway (in the case of larger networks).

Chapter 1: GSM Audio Pocket Knife

Back in 2010, when we were very actively working on OsmocomBB, the telephone-side GSM protocol stack implementation, Sylvain Munaut wrote the GSM Audio Pocket Knife (gapk) in order to be able to convert between different formats (representations) of codec frames. In cellular communications, everyone comes up with their own representation for the codec frames: the way they look on E1 as a TRAU frame is completely different from what the RTP payload looks like, or what the TI Calypso DSP uses internally, or what a GSM tester like the Racal 61x3 uses. The differences are mostly about the data types used, bit-endianness, as well as padding and headers. And of course those different formats exist for each of the four codecs :/

In 2013 I first added simplistic RTP support for FR-GSM to gapk, which was sufficient for my debugging needs back then. Still, you had to save the decoded PCM output to a file and play that back, or use a pipe into aplay.

Last week, I picked up this subject again and added a long series of patches to gapk:

  • support for variable-length codec frames (required for AMR support)
  • support for AMR codec encode/decode using libopencore-amrnb
  • support of all known RTP payload formats for all four codecs
  • support for direct live playback to a sound card via ALSA

All of the above can now be combined to make GAPK bind to a specified UDP port and play back the RTP codec frames that anyone sends to that port using a command like this:

$ gapk -I 0.0.0.0/30000 -f rtp-amr -A default -g rawpcm-s16le
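For reference, feeding such a listener is nothing more than sending UDP datagrams that each carry a 12-byte RTP header in front of one codec frame. The rough Python sketch below shows the framing; the payload type 98 is an arbitrary dynamic value, the frame bytes are assumed to come from elsewhere, and they must already be in whatever RTP payload format matches the -f option:

import socket
import struct
import time

def send_rtp(frames, host="127.0.0.1", port=30000, payload_type=98):
    """Send codec frames as RTP packets, one frame per packet, 20 ms apart."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq, timestamp, ssrc = 0, 0, 0x11223344
    for frame in frames:
        # RFC 3550 fixed header: version 2, no padding/extension/CSRC, marker bit clear
        header = struct.pack("!BBHII", 0x80, payload_type & 0x7F, seq, timestamp, ssrc)
        sock.sendto(header + frame, (host, port))
        seq = (seq + 1) & 0xFFFF
        timestamp += 160       # 20 ms of 8 kHz speech per GSM/AMR frame
        time.sleep(0.02)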

I've also merged a change to OsmoBSC/OsmoNITB which allows the administrator to re-direct the voice of any active voice channel towards a user-specified IP address and port. Using that, you can simply disconnect the voice stream from its normal destination and play back the audio via your sound card.

Chapter 2: Bugs in OsmoBTS GSM-HR

While going through the exercise of implementing the above extension to gapk, I had lots of trouble to get it to work for GSM-HR.

After some more digging, it seems there are two conflicting specifications on how to format the RTP payload for half-rate GSM: the one in RFC 5993 and a different, ETSI-defined format.

In Osmocom, we claim to implement RFC5993, but it turned out that (at least) osmo-bts-sysmo (for sysmoBTS) was actually implementing the ETSI format instead.

And even worse, osmo-bts-sysmo gets even the ETSI format wrong. Each of the codec parameters (which are unaligned bit-fields) is in the wrong bit-endianness :(

Both of the above were coincidentally also discovered by Sylvain Munaut during operation of the 32C3 GSM network in December 2015 and resulted in the following two "work around" patches:

  • HACK for HR
  • HACK: Fix the bit order in HR frames

Those merely worked around those issues in the rtp_proxy of OsmoNITB, rather than addressing the real issue. That's ok, they were "quick" hacks to get something working at all during a four-day conference. I'm now working on "real" fixes in osmo-bts-sysmo. The devil is of course in the details, when people upgrade one BTS but not the other and want to inter-operate, ...

It yet remains to be investigated how osmo-bts-trx and other osmo-bts ports behave in this regard.

Chapter 3: Conclusions

Most definitely it is once again a very clear sign that more testing is required. It's tricky to catch even with osmo-gsm-tester, as GSM-HR works between two phones or even two instances of osmo-bts-sysmo, since both sides of the implementation have the same (wrong) understanding of the spec.

Given that we can only catch this kind of bug together with the hardware (the DSP runs the PHY code), pure unit tests wouldn't catch it. And the end-to-end test is also not very well suited to it. It seems to call for something in between: something like an A-bis interface level test.

We need more (automatic) testing. I cannot say that often enough. The big challenge is how to convince contributors and customers that they should invest their time and money there, rather than in yet another (not automatically tested) feature.

Syndicated 2017-05-28 22:00:00 from LaForge's home page

29 May 2017 mikal   » (Journeyer)

Configuring docker to use rexray and Ceph for persistent storage

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working...

First off, I needed to install rexray:

    root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
    Selecting previously unselected package rexray.
    (Reading database ... 177547 files and directories currently installed.)
    Preparing to unpack rexray_0.9.0-1_amd64.deb ...
    Unpacking rexray (0.9.0-1) ...
    Setting up rexray (0.9.0-1) ...
    
    rexray has been installed to /usr/bin/rexray
    
    REX-Ray
    -------
    Binary: /usr/bin/rexray
    Flavor: client+agent+controller
    SemVer: 0.9.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    Formed: Thu, 04 May 2017 07:38:11 AEST
    
    libStorage
    ----------
    SemVer: 0.6.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    Formed: Thu, 04 May 2017 07:36:11 AEST
    


Which is of course horrid. What that script seems to have done is install a deb'd version of rexray based on an alien'd package:

      root@labosa:~/rexray# dpkg -s rexray
      Package: rexray
      Status: install ok installed
      Priority: extra
      Section: alien
      Installed-Size: 36140
      Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
      Architecture: amd64
      Version: 0.9.0-1
      Depends: libc6 (>= 2.3.2)
      Description: Tool for managing remote & local storage.
       A guest based storage introspection tool that
       allows local visibility and management from cloud
       and storage platforms.
       .
       (Converted from a rpm package by alien version 8.86.)
      


If I was building anything more than a test environment I think I'd want to do a better job of installing rexray than this, so you've been warned.

Next to configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren't mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on ceph. First off, we need to install the ceph tools, and copy the ceph authentication information from the Ceph we installed using openstack-ansible earlier.

        root@labosa:/etc# apt-get install ceph-common
        root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
        The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
        ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
        rbdmap                       100%   92     0.1KB/s   00:00    
        ceph.conf                    100%  681     0.7KB/s   00:00    
        ceph.client.admin.keyring    100%   63     0.1KB/s   00:00    
        ceph.client.glance.keyring   100%   64     0.1KB/s   00:00    
        ceph.client.cinder.keyring   100%   64     0.1KB/s   00:00    
        ceph.client.cinder-backup.keyring   71     0.1KB/s   00:00  
        root@labosa:/etc# modprobe rbd
        


You also need to configure rexray. My first attempt looked like this:

          root@labosa:/var/log# cat /etc/rexray/config.yml
          libstorage:
            service: ceph
          


And the rexray output sure made it look like it worked...

            root@labosa:/etc# rexray service start
            ● rexray.service - rexray
               Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
               Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
             Main PID: 477423 (rexray)
                Tasks: 5
               Memory: 1.5M
                  CPU: 9ms
               CGroup: /system.slice/rexray.service
                       └─477423 /usr/bin/rexray start -f
            
            May 29 10:14:07 labosa systemd[1]: Started rexray.
            


Which looked good, but /var/log/syslog said:

              May 29 10:14:08 labosa rexray[477423]: REX-Ray
              May 29 10:14:08 labosa rexray[477423]: -------
              May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
              May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
              May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
              May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
              May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
              May 29 10:14:08 labosa rexray[477423]: Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
              May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:38:11 AEST
              May 29 10:14:08 labosa rexray[477423]: libStorage
              May 29 10:14:08 labosa rexray[477423]: ----------
              May 29 10:14:08 labosa rexray[477423]: SemVer: 0.6.0
              May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
              May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
              May 29 10:14:08 labosa rexray[477423]: Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
              May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:36:11 AEST
              May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
              msg="error starting libStorage server" error.driver=ceph time=1496016848215
              May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
              msg="default module(s) failed to initialize" error.driver=ceph time=1496016848216
              May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
              msg="daemon failed to initialize" error.driver=ceph time=1496016848216
              May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
              msg="error starting rex-ray" error.driver=ceph time=1496016848216
              


That's because the service is called rbd it seems. So, the config file ended up looking like this:

                root@labosa:/var/log# cat /etc/rexray/config.yml
                libstorage:
                  service: rbd
                
                rbd:
                  defaultPool: rbd
                


Now to install docker:

                  root@labosa:/var/log# sudo apt-get update
                  root@labosa:/var/log# sudo apt-get install linux-image-extra-$(uname -r) \
                      linux-image-extra-virtual
                  root@labosa:/var/log# sudo apt-get install apt-transport-https \
                      ca-certificates curl software-properties-common
                  root@labosa:/var/log# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
                  root@labosa:/var/log# sudo add-apt-repository \
                      "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
                      $(lsb_release -cs) \
                      stable"
                  root@labosa:/var/log# sudo apt-get update
                  root@labosa:/var/log# sudo apt-get install docker-ce
                  


Now let's make a rexray volume.

                    root@labosa:/var/log# rexray volume ls
                    ID  Name  Status  Size
                    root@labosa:/var/log# docker volume create --driver=rexray --name=mysql \
                        --opt=size=1
                    A size of 1 here means 1gb
                    mysql
                    root@labosa:/var/log# rexray volume ls
                    ID         Name   Status     Size
                    rbd.mysql  mysql  available  1
                    


Let's start the container.

                      root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
                          -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
                      Unable to find image 'mysql:latest' locally
                      latest: Pulling from library/mysql
                      10a267c67f42: Pull complete 
                      c2dcc7bb2a88: Pull complete 
                      17e7a0445698: Pull complete 
                      9a61839a176f: Pull complete 
                      a1033d2f1825: Pull complete 
                      0d6792140dcc: Pull complete 
                      cd3adf03d6e6: Pull complete 
                      d79d216fd92b: Pull complete 
                      b3c25bdeb4f4: Pull complete 
                      02556e8f331f: Pull complete 
                      4bed508a9e77: Pull complete 
                      Digest: sha256:2f4b1900c0ee53f344564db8d85733bd8d70b0a78cd00e6d92dc107224fc84a5
                      Status: Downloaded newer image for mysql:latest
                      ccc251e6322dac504e978f4b95b3787517500de61eb251017cc0b7fd878c190b
                      


And now to prove that persistence works and that there's nothing up my sleeve...
                        root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
                            sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
                            -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
                        mysql: [Warning] Using a password on the command line interface can be insecure.
                        Welcome to the MySQL monitor.  Commands end with ; or \g.
                        Your MySQL connection id is 3
                        Server version: 5.7.18 MySQL Community Server (GPL)
                        
                        Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
                        
                        Oracle is a registered trademark of Oracle Corporation and/or its
                        affiliates. Other names may be trademarks of their respective
                        owners.
                        
                        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
                        
                        mysql> show databases;
                        +--------------------+
                        | Database           |
                        +--------------------+
                        | information_schema |
                        | mysql              |
                        | performance_schema |
                        | sys                |
                        +--------------------+
                        4 rows in set (0.00 sec)
                        
                        mysql> create database demo;
                        Query OK, 1 row affected (0.03 sec)
                        
                        mysql> use demo;
                        Database changed
                        mysql> create table foo(val char(5));
                        Query OK, 0 rows affected (0.14 sec)
                        
                        mysql> insert into foo(val) values ('a'), ('b'), ('c');
                        Query OK, 3 rows affected (0.08 sec)
                        Records: 3  Duplicates: 0  Warnings: 0
                        
                        mysql> select * from foo;
                        +------+
                        | val  |
                        +------+
                        | a    |
                        | b    |
                        | c    |
                        +------+
                        3 rows in set (0.00 sec)
                        


Now let's re-create the container and prove the data remains.

                          root@labosa:/var/log# docker stop some-mysql
                          some-mysql
                          root@labosa:/var/log# docker rm some-mysql
                          some-mysql
                          root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
                              -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
                          99a7ccae1ad1865eb1bcc8c757251903dd2f1ac7d3ce4e365b5cdf94f539fe05
                          
                          root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
                              sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -\
                              P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
                          mysql: [Warning] Using a password on the command line interface can be insecure.
                          Welcome to the MySQL monitor.  Commands end with ; or \g.
                          Your MySQL connection id is 3
                          Server version: 5.7.18 MySQL Community Server (GPL)
                          
                          Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
                          
                          Oracle is a registered trademark of Oracle Corporation and/or its
                          affiliates. Other names may be trademarks of their respective
                          owners.
                          
                          Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
                          
                          mysql> use demo;
                          Reading table information for completion of table and column names
                          You can turn off this feature to get a quicker startup with -A
                          
                          Database changed
                          mysql> select * from foo;
                          +------+
                          | val  |
                          +------+
                          | a    |
                          | b    |
                          | c    |
                          +------+
                          3 rows in set (0.00 sec)
                          
So there you go.

Tags for this post: docker ceph rbd rexray
Related posts: So you want to setup a Ceph dev environment using OSA; Juno nova mid-cycle meetup summary: containers


Syndicated 2017-05-28 18:45:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

28 May 2017 MikeGTN   » (Journeyer)

Eastern Promises Revisted: Two Walks and Three Rivers

The luxury of waking in London wasn't lost on me this morning. As I tramped towards an early breakfast which I took with a view over the Tower of London and bright sunlight glaring off the panels of The Shard, I pondered the day ahead and its odd division between places old and new. I've walked and written about London and its environs for years in some way, but there was a distinct tempo change in 2012 when I felt like the city was on the brink of its next seismic lurch into the future. It was a tumultuous year...

Syndicated 2017-05-20 21:05:00 from Lost::MikeGTN

28 May 2017 mikal   » (Journeyer)

So you want to setup a Ceph dev environment using OSA

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I have a need for a Ceph development environment it seems logical that I would build it by building an openstack-ansible Ocata AIO. There were a few gotchas there, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I've never seen before called a "Scenario". Basically this means that you need to export an environment variable called "SCENARIO" before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph
    


Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

      --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:55:07.803635173 +1000
      +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:58:30.417019878 +1000
      @@ -338,7 +338,9 @@
       #     foo: 1234
       #     bar: 5678
       #
      -ceph_conf_overrides: {}
      +ceph_conf_overrides:
      +  global:
      +    osd_pool_default_pg_num: 8
       
       
       #############
      @@ -373,4 +375,4 @@
       # Set this to true to enable File access via NFS.  Requires an MDS role.
       nfs_file_gw: true
       # Set this to true to enable Object access via NFS. Requires an RGW role.
      -nfs_obj_gw: false
      \ No newline at end of file
      +nfs_obj_gw: false
      


That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.

And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I'll never need to think about it again, which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

        root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
        root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
            cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
             health HEALTH_OK
             monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                    election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
             osdmap e20: 3 osds: 3 up, 3 in
                    flags sortbitwise,require_jewel_osds
              pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                    102156 kB used, 3070 GB / 3070 GB avail
                          40 active+clean
        root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
        ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
        -1 2.99817 root default                                      
        -2 2.99817     host labosa                                   
         0 0.99939         osd.0        up  1.00000          1.00000 
         1 0.99939         osd.1        up  1.00000          1.00000 
         2 0.99939         osd.2        up  1.00000          1.00000 
        


Tags for this post: openstack osa ceph openstack-ansible


Syndicated 2017-05-27 18:30:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

25 May 2017 badvogato   » (Master)

‘The Real Life Mowgli’: The Girl Who Was ...

At a young age, she began running her own website where she declared “My name is Tippi. I am African and I was born 10 years ago in Namibia.” Sylvie told The Telegraph, “Tippi believes she is African and she wants to get a Namibian passport. She wants to become an ambassador for Namibia. It is like Mowgli’s story, but Tippi’s is true.”


" The Seabird Islands are located off the coast of South Africa. It is a very important breeding site for coastal birds. Therefore, it is no surprise that the Degrés took a trip to the island. Also, no surprise, Tippi was able to become one with the flock. Here she is seen at age 6, with arms outstretched. She is completely in sync with her surroundings."

Many Of The Animals Were Tamed

Although Tippi was pictured with many wild animals, her parents admit that many of the animals were tamed by local farmers. Sylvie told The Telegraph, “In the arid or semi desert regions of Southern Africa people have farms of 10,000 to 20,000 hectares. The farmers often keep orphan animals and raise them in their house. Sometimes they are tame or used to humans and so this is how Tippi was able to be so close with them.”


Her Life Chronicled

When the “outside world,” began learning about Tippi, the public immediately wanted to know more. Multiple documentaries have been made about her and her experiences. Le Monde Selon Tippi (The World According to Tippi) was released in 1997, Tippi en Afrique was released in 2002, and Around the World with Tippi was released in 2004. Around the World with Tippi included six wildlife and environmental TV documentaries, which premiered on the Discovery Channel.

She’s Not The Only One

There have been several other cases where a child was raised by animals. For example, Marina Chapman, was believed to be kidnapped and then abandoned. For 5 years, she found refuge with a group of capuchin monkeys. The monkeys taught Marina how to catch birds and rabbits with her bare hands. In 2001, a young Chilean boy was found amongst a pack of dogs. He had been living amongst them for 2 years. Officials report the dogs protected him and helped him scavenge for food.


By Taylor McCann, Apr 19, 2017

http://www.greeningz.com/entertainment/girl-who-was-raised-by-animals/40/

23 May 2017 LaForge   » (Master)

Power-cycling a USB port should be simple, right?

Every so often I happen to be involved in designing electronics equipment that's supposed to run reliably, remotely, in inaccessible locations, without any ability for "remote hands" to perform things like power-cycling or the like. I'm talking about really remote locations, possibly with no or only limited back-haul, and a very high cost of ever sending somebody there for maintenance.

Given that a lot of computer peripherals (chips, modules, ...) use USB these days, this is often some kind of an embedded ARM (rarely x86) SoM or SBC, which is hooked up to a custom board that contains a USB hub chip as well as a line of peripherals.

One of the most important lessons I've learned from experience is: never trust reset signals/lines, always include power-switching capability. There are many chips and electronics modules available on the market that either have no RESET at all, or claim to have a hardware RESET line which you later (painfully) discover to be just a GPIO polled by software that can itself get stuck, leaving no way to really hard-reset the given component.

In the case of a USB-attached device (even if the USB only exists on a circuit board between two ICs), this is typically rather easy: the USB hub is generally capable of switching the power of its downstream ports. Many cheap USB hubs don't implement this at all, or implement only ganged switching, but if you carefully select your USB hub (or the hub chip, in the case of a custom PCB), you can make sure that the given USB hub supports individual port power switching.

Now the next step is how to actually use this from your (embedded) Linux system. It turns out to be harder than expected. After all, we're talking about a standard feature that has been present in the USB specifications since USB 1.x in the late 1990s. So the expectation is that it should be straightforward to do with any decent operating system.

I don't know how it is on other operating systems, but on Linux I couldn't really find a proper, clean way to do this. For more details, please read my post to the linux-usb mailing list.
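For what it's worth, the wire-level operation itself is simple: it is the standard hub class SetPortFeature / ClearPortFeature request with the PORT_POWER feature selector, which is what tools like uhubctl issue from userspace. The rough pyusb sketch below shows the idea; the vendor/product IDs are placeholders, it only helps if the hub really implements per-port power switching, and it side-steps the kernel entirely rather than offering a clean in-kernel interface:

import time
import usb.core

PORT_POWER    = 8      # feature selector from the USB 2.0 hub class definition
SET_FEATURE   = 0x03
CLEAR_FEATURE = 0x01
RT_PORT       = 0x23   # host-to-device, class request, recipient "other" (i.e. a hub port)

def power_cycle(vid, pid, port, off_time=2.0):
    """Drop and restore VBUS on one downstream port of the given hub (needs root)."""
    hub = usb.core.find(idVendor=vid, idProduct=pid)
    if hub is None:
        raise RuntimeError("hub not found")
    hub.ctrl_transfer(RT_PORT, CLEAR_FEATURE, PORT_POWER, port)   # port power off
    time.sleep(off_time)
    hub.ctrl_transfer(RT_PORT, SET_FEATURE, PORT_POWER, port)     # port power back on

# example with made-up IDs: power_cycle(0x0424, 0x2514, port=3)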

Why am I running into this now? Is it such a strange idea? I mean, power-cycling a device should be the simplest and most straightforward thing to do in order to recover from any kind of "stuck state" or other related issue. Logical enabling/disabling of the port, resetting the USB device via the USB protocol, etc. are all just "soft" forms of a reset which at best help with USB-related issues, but not with any other part of a USB device.

And in the case of e.g. a USB-attached cellular modem, we're actually talking about a multi-processor system with multiple built-in micro-controllers, at least one DSP, an ARM core that might run another Linux itself (to implement the USB gadget), ... - certainly enough complex software that you would want to be able to power-cycle it...

I'm curious what the response of the Linux USB gurus is.

Syndicated 2017-05-23 22:00:00 from LaForge's home page

22 May 2017 MikeGTN   » (Journeyer)

London's Other Orbitals: Walking the A110

I realised as I sat on the near-empty Northern Line train, shuddering noisily into the light somewhere in Finchley, that my previous visit to this part of London had been a very long time ago. In fact my last traversal of this part of the Underground network was before I kept records of such things. In the mid-1990s I'd embarked on a project to cover as much of the Tube as I could - despite a crippling phobia regarding escalators - but this involved nothing more fastidious than marking the lines on a map. Now I was clanking into...

Syndicated 2017-05-06 22:05:00 from Lost::MikeGTN

18 May 2017 mikal   » (Journeyer)

The Collapsing Empire




ISBN: 076538888X
LibraryThing
This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don't know that and are busy having petty trade wars instead. It isn't a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire...

Tags for this post: book john_scalzi
Related posts: The Last Colony ; The End of All Things; Zoe's Tale; Agent to the Stars; Redshirts; Fuzzy Nation


Comment

Syndicated 2017-05-17 21:46:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

14 May 2017 lloydwood   » (Journeyer)

Keyboard logging on Hewlett-Packard laptops

Versions of the Conexant audio driver on HP laptops can log every keystroke to disk, writing to a visible file in C:\Users\Public. Your passwords, everything.

So HP issued a driver update. But that driver update is reported to still have the logging capability, turned off. Logging can be reactivated with a simple registry hack.

My future plans do not include buying devices from Hewlett-Packard, or investing in Conexant stock.

Update: HP's official security bulletin on the issue. Meanwhile, the logger can be reused and exploited.

13 May 2017 mones   » (Journeyer)

Disabling "flat-volumes" in pulseaudio

Today I faced another of those happy ideas some people implement in software, which can be useful in some cases, but can also be a bad default behaviour.

Fortunately, the problems it causes had already been posted to the Debian mailing lists, along with the solution, which in a default Debian configuration basically means:

$ echo "flat-volumes = no" | sudo tee -a /etc/pulse/daemon.conf
$ pulseaudio -k && pulseaudio

And I think the default for Stretch should be set as above: raising the volume to 100% just because of a system notification may be useful for some, but it's not what common users expect.

Syndicated 2017-05-13 15:12:48 from Ricardo Mones

12 May 2017 badvogato   » (Master)

12 May 2017 mikal   » (Journeyer)

Python3 venvs for people who are old and grumpy

I've been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn't a thing over there as best I can tell.

So how do I make a venv? It's really not too bad...

First, install the dependencies:

10 May 2017 MikeGTN   » (Journeyer)

London's Other Orbitals

I set out to write a fairly ordinary report on a long, satisfying walk I recently completed - and as the introduction grew, I knew I was in fact writing something else. A justification, a manifesto, or just a project plan perhaps - in any case an explanation of what led me to decide to walk along an uncelebrated North London A-road from somewhere to somewhere else. I didn't feel the need to justify this to anyone except perhaps myself - but I felt the uneasy stirrings of a project forming - and that's always dangerous. So, the ramblings below...

Syndicated 2017-05-06 21:05:00 from Lost::MikeGTN

10 May 2017 mones   » (Journeyer)

Building on a RPi without disc

Nothing like a broken motherboard to experiment with alternative ways of building software. This time I've tried to use a Raspberry Pi and, to avoid wearing out the SD card too much, an NFS mount on a Synology NAS. It happens that both items were generously donated by two different Claws Mail users some years ago, thanks to them! ;-)

So, after installing all build dependencies and a build helper, how long did it take?

configure-claws: Tue May 9 13:28:55 UTC 2017
cd b-claws && env PKG_CONFIG_PATH=/opt/claws/lib/pkgconfig ./configure --enable-maintainer-mode --prefix=/opt/claws > /home/mones/nfs/claws/log-configure-claws.txt 2>&1 && cd ..
configure-claws: Tue May 9 13:34:09 UTC 2017
compile-claws: Tue May 9 13:34:09 UTC 2017
cd b-claws && make -j2 > /home/mones/nfs/claws/log-compile-claws.txt 2>&1 && cd ..
compile-claws: Tue May 9 15:44:28 UTC 2017

Yep, that is more than 5 minutes for configuring and more than 130 minutes for compiling. Not something for when you're in a hurry, but I've built kernels which took longer, some decades ago :-)

And if you want to know how to break a motherboard...


One day you're converting some raw photos to JPEG with RawTherapee and the computer shuts down. Then you try again and notice the temperature of the CPU is too high, and it shuts down again, of course. You boot into the BIOS and then realize the thermal protection shutdown was enabled (and you thank your past self for having enabled it!). The next day is Sunday and you try to clean inside the case, but there's not much dirt to clean. Dismounting the CPU cooler reveals that the thermal compound is nearly gone, though.

The following day you try to buy some thermal grease, but the corner store only has a "P.R.C." labeled syringe and some thermal pad from CoolBox. The thermal pad seems to work fine on first boot, until you try RawTherapee and it shuts down again. Crying doesn't help as you see the temperature monitor increase one degree per second while you're staring at the BIOS (and it shuts down again).

Another day passes and you go to another local store, a bit further than the first, firmly determined to get a real thermal compound. Nevertheless the store only has two options: an expensive one and a cheap one. The store employee says the cheaper one works fine, and its label indeed shows better specs than the expensive one. So, not without some hesitation, you buy the cheaper one, which is made by (you guessed it) CoolBox.

Back at home you remove the thermal pad, try to clean the cooler and the processor, and apply not too much compound. Somehow this is where things go wrong. Maybe while trying to put the cooler in place, maybe while applying compound a second time. The fact is that now there's no video output anymore, and no power is being delivered to the USB ports. No video, no keyboard and no idea about what's next.

Anyway, there aren't many alternatives; the problem is knowing which part is damaged: CPU, motherboard or both. Ideas welcome ;-)

Syndicated 2017-05-09 23:32:56 from Ricardo Mones

9 May 2017 mjg59   » (Master)

Intel AMT on wireless networks

More details about Intel's AMT vulnerability have been released - it's about the worst case scenario, in that it's a total authentication bypass that appears to exist independent of whether the AMT is being used in Small Business or Enterprise modes (more background in my previous post here). One thing I claimed was that even though this was pretty bad it probably wasn't super bad, since Shodan indicated that there were only a few thousand machines on the public internet accessible via AMT. Most deployments were probably behind corporate firewalls, which meant that it was plausibly a vector for spreading within a company but probably wasn't a likely initial vector.

I've since done some more playing and come to the conclusion that it's rather worse than that. AMT actually supports being accessed over wireless networks. Enabling this is a separate option - if you simply provision AMT it won't be accessible over wireless by default, you need to perform additional configuration (although this is as simple as logging into the web UI and turning on the option). Once enabled, there are two cases:

  1. The system is not running an operating system, or the operating system has not taken control of the wireless hardware. In this case AMT will attempt to join any network that it's been explicitly told about. Note that in default configuration, joining a wireless network from the OS is not sufficient for AMT to know about it - there needs to be explicit synchronisation of the network credentials to AMT. Intel provide a wireless manager that does this, but the stock behaviour in Windows (even after you've installed the AMT support drivers) is not to do this.
  2. The system is running an operating system that has taken control of the wireless hardware. In this state, AMT is no longer able to drive the wireless hardware directly and counts on OS support to pass packets on. Under Linux, Intel's wireless drivers do not appear to implement this feature. Under Windows, they do. This does not require any application level support, and uninstalling LMS will not disable this functionality. This also appears to happen at the driver level, which means it bypasses the Windows firewall.
Case 2 is the scary one. If you have a laptop that supports AMT, and if AMT has been provisioned, and if AMT has had wireless support turned on, and if you're running Windows, then connecting your laptop to a public wireless network means that AMT is accessible to anyone else on that network[1]. If it hasn't received a firmware update, they'll be able to do so without needing any valid credentials.

If you're a corporate IT department, and if you have AMT enabled over wifi, turn it off. Now.

[1] Assuming that the network doesn't block client to client traffic, of course

comment count unavailable comments

Syndicated 2017-05-09 20:18:21 from Matthew Garrett

8 May 2017 zeenix   » (Journeyer)

Rust Memory Management

In the light of my latest fascination with the Rust programming language, I've started giving small presentations about Rust at my office, since I'm not the only one at our company who is interested in Rust. My first presentation, in February, was a very general introduction to the language, but at that time I had not yet really used the language for anything real, so I was a complete novice and didn't have a very good idea of how memory management really works. While working on my gps-share project in my limited spare time, I came across quite a few issues related to memory management, but I overcame all of them with help from the kind folks at the #rust-beginners IRC channel and the small but awesome Rust-GNOME community.

Having learnt some essentials of memory management, I thought I'd share my knowledge/experience with folks at the office. The talk was not well attended due to conflicts with other meetings at the office, but the few folks who did attend were very interested and asked some interesting and difficult questions (i.e. the perfect audience). One of the questions was if I could put this up as a blog post, so here I am. :)

Basics


Let's start with some basics: In Rust,

  1. Stack allocation is preferred over heap allocation, and that's where everything is allocated by default.
  2. There are strict ownership semantics involved, so each value can have one and only one owner at any particular time.
  3. When you pass a value to a function, you move the ownership of that value to the function argument; similarly, when you return a value from a function, you pass the ownership of the return value to the caller.

Now these rules make Rust very safe, but at the same time, if you had no way to allocate on the heap or to share data between different parts of your code and/or threads, you couldn't get very far with Rust. So we're provided with mechanisms to (kinda) work around these very strict rules, without compromising the safety these rules provide. Let's start with some simple code that would work fine in many other languages:

fn add_first_element(v1: Vec<i32>, v2: Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(v1, v2);

    // We can use `v1` and `v2` here!
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This gives us an error from rustc:

error[E0382]: use of moved value: `v1`
--> sample1.rs:13:30
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v1` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

error[E0382]: use of moved value: `v2`
--> sample1.rs:13:37
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v2` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

What's happening is that we passed 'v1' and 'v2' to add_first_element() and hence passed their ownership to add_first_element() as well, so we can't use them afterwards. If Vec were a Copy type (like all primitive types), we wouldn't get this error, because Rust would copy the values and pass those copies to add_first_element(). In this particular case the solution is easy:

Borrowing


fn add_first_element(v1: &Vec<i32>, v2: &Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(&v1, &v2);

    // We can use `v1` and `v2` here!
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This one compiles and runs as expected. What we did was to convert the arguments into reference types. References are Rust's way of borrowing ownership. So while add_first_element() is running, it borrows 'v1' and 'v2', but ownership stays with the caller. Hence this code works.

While borrowing is very nice and very helpful, in the end it's temporary. The following code won't build:

struct Heli {
    reg: String
}

impl Heli {
    fn new(reg: String) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = "G-HONI".to_string();
    let heli = Heli::new(reg);

    println!("Registration {}", reg);
    heli.hover();
}

rustc says:

error[E0382]: use of moved value: `reg`
--> sample3.rs:20:33
|
18 | let heli = Heli::new(reg);
| --- value moved here
19 |
20 | println!("Registration {}", reg);
| ^^^ value used here after move
|
= note: move occurs because `reg` has type `std::string::String`, which does not implement the `Copy` trait

If String had the Copy trait implemented for it, this code would have compiled. But if efficiency is a concern at all for you (it is for Rust), you wouldn't want most values to be copied around all the time. We can't use a reference here, as Heli::new() above needs to keep the passed 'reg'. Also note that the issue here is not that 'reg' was passed to Heli::new() and used afterwards by Heli::hover(), but the fact that we tried to use 'reg' after we had given its ownership to the Heli instance through Heli::new().

I realize that the above code doesn't make use of borrowing, but if we were to make use of it, we'd have to declare lifetimes for the 'reg' field, and the code still wouldn't work because we want to keep 'reg' in our Heli struct. There is a better solution here:

Rc


use std::rc::Rc;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    println!("Registration {}", reg);
    heli.hover();
}

This code builds and runs successfully. Rc stands for "Reference Counted", so putting data into this generic container adds reference counting to the data in question. Note that while you have to explicitly call the clone() method of Rc to increment its refcount, you don't need to do anything to decrease it: each time an Rc reference goes out of scope, the refcount is decremented automatically, and when it reaches 0, the Rc container and its contained data are freed.

Cool, Rc is super easy to use, so can we just use it in all situations where we need shared ownership? Not quite! You can't use Rc to share data between threads, so this code won't compile:

use std::rc::Rc;
use std::thread;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

It results in:

error[E0277]: the trait bound `std::rc::Rc<std::string::String>: std::marker::Send` is not satisfied in `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
--> sample5.rs:22:13
|
22 | let t = thread::spawn(move || {
| ^^^^^^^^^^^^^ within `[closure@sample5.rs:22:27: 24:6 heli:Heli]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::string::String>`
|
= note: `std::rc::Rc<std::string::String>` cannot be sent between threads safely
= note: required because it appears within the type `Heli`
= note: required because it appears within the type `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
= note: required by `std::thread::spawn`

The issue here is that to be able to share data between more than one thread, the data must be of a type that implements the Send trait. However, not only would implementing Send for all types be impractical, there are also performance penalties associated with implementing Send (which is why Rc doesn't implement it).

Introducing Arc


Arc stands for Atomic Reference Counting and it's the thread-safe sibling of Rc.

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

This one works, and the only difference is that we used Arc instead of Rc. Cool, so now we have a very efficient but thread-unsafe way to share data between different parts of the code, as well as a thread-safe mechanism. We're done then? Not quite! This code won't work:

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<String>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        self.status.clear();
        self.status.push_str("hovering");
        println!("{} is {}", self.reg, self.status);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new("".to_string());
    let mut heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("main: {} is {}", reg, status);

    t.join().unwrap();
}

This gives us two errors:

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:16:9
|
16 | self.status.clear();
| ^^^^^^^^^^^ cannot borrow as mutable

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:17:9
|
17 | self.status.push_str("hovering");
| ^^^^^^^^^^^ cannot borrow as mutable

The issue is that Arc is unable to handle mutation of data from different threads and hence doesn't give you a mutable reference to the contained data.

Mutex


For sharing mutable data between threads, you need another type in combination with Arc: Mutex. Let's make the above code work:

use std::sync::Arc;
use std::sync::Mutex;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<Mutex<String>>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<Mutex<String>>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        let mut status = self.status.lock().unwrap();
        status.clear();
        status.push_str("hovering");
        println!("thread: {} is {}", self.reg, status.as_str());
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new(Mutex::new("".to_string()));
    let heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });

    println!("main: {} is {}", reg, status.lock().unwrap().as_str());

    t.join().unwrap();
}

This code works. Notice how you don't have to explicitly unlock the mutex after using it. Rust is all about scopes: when the guard returned by lock() goes out of scope, the mutex is automatically unlocked.
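To make the scoping explicit, here is a minimal standalone sketch (separate from the Heli example): the lock is held exactly as long as the guard returned by lock() is alive, and you can release it early with an inner block or an explicit drop().

use std::sync::Mutex;

fn main() {
    let status = Mutex::new(String::from("parked"));

    {
        let mut s = status.lock().unwrap();  // mutex locked here
        s.clear();
        s.push_str("hovering");
    }                                        // guard dropped: mutex unlocked here

    let s = status.lock().unwrap();          // lock it again...
    println!("status is {}", s.as_str());
    drop(s);                                 // ...and unlock explicitly
}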

Other container types


Mutexes are rather expensive, and sometimes you have shared data between threads but not all threads are mutating it (all the time); that's where RwLock becomes useful. I won't go into details here, but it's almost identical to Mutex, except that threads can take read-only locks, and since it's safe to share non-mutable state between threads, it's a lot more efficient than threads blocking each other every time they access the data.
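As a minimal sketch (reusing the Arc-in-threads pattern from above), usage looks like this: write() takes the exclusive lock, while any number of read() locks can be held at the same time.

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let status = Arc::new(RwLock::new("parked".to_string()));

    // One writer takes the exclusive lock.
    {
        let status = status.clone();
        thread::spawn(move || {
            let mut s = status.write().unwrap();
            s.clear();
            s.push_str("hovering");
        }).join().unwrap();
    }

    // Any number of readers can hold the shared lock at once.
    let readers: Vec<_> = (0..3).map(|i| {
        let status = status.clone();
        thread::spawn(move || {
            let s = status.read().unwrap();
            println!("reader {}: heli is {}", i, s.as_str());
        })
    }).collect();

    for r in readers {
        r.join().unwrap();
    }
}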

Another container type I didn't mention above is Box. The basic use of Box is as a very generic and simple way of allocating data on the heap. It's typically used to turn an unsized type into a sized type. The module documentation has a simple example of that.
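As a tiny illustration of that unsized-to-sized use, here is a minimal sketch with a hypothetical Aircraft trait: the trait object itself is unsized, but a Box around it has a known size and can be stored in a Vec.

trait Aircraft {
    fn describe(&self) -> String;
}

struct Heli {
    reg: String
}

impl Aircraft for Heli {
    fn describe(&self) -> String {
        format!("helicopter {}", self.reg)
    }
}

fn main() {
    // `dyn Aircraft` is unsized; `Box<dyn Aircraft>` is a sized value
    // pointing to a heap allocation, so it can live inside a Vec.
    let fleet: Vec<Box<dyn Aircraft>> = vec![
        Box::new(Heli { reg: "G-HONI".to_string() }),
    ];
    for a in &fleet {
        println!("{}", a.describe());
    }
}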

What about lifetimes


One of my colleagues, who has had some experience with Rust, was surprised that I didn't cover lifetimes in my talk. Firstly, I think the topic deserves a separate talk of its own. Secondly, if you make clever use of the container types described above, most often you don't have to deal with lifetimes. Thirdly, lifetimes in Rust are something I still struggle with each time I have to deal with them, so I feel a bit unqualified to teach others how they work.

The end


I hope you find some of the information above useful. If you are looking for other resources on learning Rust, the Rust book is currently your best bet. I am still a newbie at Rust, so if you see any mistakes in this post, please do let me know in the comments section.

Happy safe hacking!

Syndicated 2017-05-08 07:00:00 (Updated 2017-05-08 20:49:43) from zeenix

8 May 2017 mikal   » (Journeyer)

Things I read today: the best description I've seen of metadata routing in neutron

I happened upon a thread about OVN's proposal for how to handle nova metadata traffic, which linked to this very good Suse blog post about how metadata traffic is routed in neutron. I'm just adding the link here because I think it will be useful to others. The OVN proposal is also an interesting read.

Tags for this post: openstack nova neutron metadata ovn
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Nova vendordata deployment, an excessively detailed guide; One week of Nova Kilo specifications; Specs for Kilo; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler

Comment

Syndicated 2017-05-07 17:52:00 from stillhq.com : Mikal, a geek from Canberra living in Silicon Valley (no blather posts)

6 May 2017 eMBee   » (Journeyer)

Everlasting Life Through Cyberspace

The idea has been brought up a few times, that we would upload our minds into a computer and in this way would be able to live forever.

What would that look like?

At its base, every uploaded mind would be a program with access to data storage and computing resources. These programs could interact with each other, not unlike programs on the internet today, for example in a virtual reality game world (Second Life, etc.).

In uploaded form we would manipulate our environment through other programs that we create ourselves or that others create for us. There might be a market to trade these programs and their products.

Now, anything that is programmable will have bugs and is exploitable, therefore it will be necessary to protect against such attacks. Much like we use encryption today to keep things private, encryption and self protection will play a large role in such a cyberworld.

Unlike today, where we can rely on physical protection, healthcare, etc. to support us, in cyberspace all protection comes in the form of programs. That means that instead of relying on others to act on our behalf in order to protect us, every one of us will be able to get a copy of a protection program that we can then control ourselves. It's all bits and bytes, so there is no reason to assume that we would not be able to have full control over our environment.

We could build cyberspace without these protections, but it is inconceivable that anyone would accept a world where their own well being is not ensured. Either from the outside, or from the inside. But if everyone is uploaded, then there is no more outside, and therefore all protection must come from the inside, through programs. And since even now, most people are not skilled programmers, and that is unlikely to change much in the future, it is hard to imagine that people would willingly enter a world where their life depends on their programming skills. No, people must be able to trust that no harm will come to them in this new world where they otherwise would not have the skill to protect themselves.

The reason we feel safe in this world is because we agreed to a set of laws which we enforce, and for the most part, crimes are visible and can be prosecuted. People who live in places where this is not the case don't feel safe, and no one would willingly leave their safe home to move to such an area.

In a cyberworld, such safety can only be achieved by making crime impossible to begin with, because given enough resources, a computer program can do damage without leaving a trace.

This has a few severe implications.

If real crime is impossible and we further have full control over our own protection, controlling what data we receive that could possibly offend us, then effectively we can no longer be hurt. There is no physical pain anyway, and any virtual pain we could just switch off.

If we can not be hurt, the corollary is that we can not really hurt anyone. We can not do anything that has any negative consequences on anyone else.

We can not even steal someone's computing resources; or rather, we probably could, but there would be no point, because even if computing resources are unevenly distributed, it would not matter.

There is no sense of time, since any sense of time is simulated and can be controlled. So if we were trying to build something that takes lots of time, we could adjust our sense of time so that we would not have to feel the wait for the computation to complete. With that in mind, stealing resources to make computation faster would become meaningless.

And if we could take all resources from someone, then that would effectively kill them as their program could no longer run. It would be frozen in storage. Allowing this could start a war of attrition that would end with one person controlling everyone else and everyone fearing for their life. It just doesn't make sense to allow that.

In other words we no longer have freedom to do evil. Or more drastically, we no longer have complete free will. Free will implies the ability to chose evil. Without that choice free will is limited.

In summary life in cyberspace has the following attributes:

  • We will all live there eternally (well, as long as the computer keeps running).
  • There is no sense of time.
  • We will keep our sense of identity.
  • We will be able to interact with every human ever uploaded.
  • We will continue to advance and develop.
  • There is no power to do evil.
  • We will be able to affect the physical world, and the physical world will affect us.

Here is how cyberspace looks like from the outside:

  • When a person is uploaded, their physical body ceases to function and decays.
  • Everyone can be uploaded; there are no specific requirements or conditions that would prevent anyone from being uploaded.
  • We are assuming that we will be able to communicate with those in cyberspace, but imagine how it would look if we could not communicate with an uploaded person. We would then actually not be able to tell if they successfully uploaded or not. We would in fact not even be able to tell whether cyberspace exists at all, and we would have to take a leap of faith that it is real.

These attributes are all attributes of life after death as described at least by the Baha'i faith, and possibly other religions.

So maybe, cyberspace already exists, and death is just the upload process? Maybe we are simply not yet advanced enough to perceive or understand our life beyond the point of upload from the outside?

Maybe we just need to evolve further before we are able to communicate with those who are uploaded?

Syndicated 2017-05-06 04:06:33 (Updated 2017-05-06 06:05:15) from DevLog

4 May 2017 johnw   » (Master)

Monads are monoid objects

Monads are monoid objects

Lately I’ve been working again on my Category Theory formalization in Coq, and just now proved, in a completely general setting, the following statement:

Monads are monoid (objects) in the (monoidal) category of endofunctors (which is monoidal with respect to functor composition).

The proof, using no axioms, is here.

Now, just how much category theory was needed to establish this fact?

Categories

We start with the concept of a category, which has objects of some Type, and arrows between objects of some other Type. In this way, objects and arrows can be almost anything, except they must provide: identity arrows on every object, and composition of arrows, with composition being associative and identity arrows having no effect on composition.

All the arrows between two objects form a set of arrows, called a "hom-set". In my library, these are actually constructive hom-setoids, allowing a category-specific definition of what it means for two members of a hom-setoid to be "equivalent". The fact that it is constructive means that the witness to this equivalence must be available to later functions and proofs, and not only the fact that a witness had been found.

Functors

Given two categories, which may have different objects, arrows and hom equivalences, it is sometimes possible to map objects to objects, arrows to arrows, and equivalences to equivalences, so long as identity arrows, composition, and the related laws are preserved. In this case we call such a mapping a "functor".

Natural transformations

While functors map between categories, natural transformations map between functors, along with a “naturality” condition that performing the transformation before or after utilizing the related functors has no effect on the result.
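Spelled out (the standard definition, nothing specific to this library): for a natural transformation η from a functor F to a functor G, both mapping C to D, naturality means that for every arrow f : X ⟶ Y in C

G f ∘ η_X ≈ η_Y ∘ F f

so it makes no difference whether you first apply F and then transform, or first transform and then apply G.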

Isomorphisms

Two objects in a category are said to be isomorphic if there are arrows from one to the other, and back, and the composition of these two arrows is equivalent to identity in both directions.

Note that since the type of objects and arrows is unknown in the general case, the “meaning” of isomorphism can vary from category to category, as we will see below in the case of Cat, the category of all categories.

Cartesian categories

Although objects are just abstract symbols, sometimes it’s possible to reveal additional structure about a category through the identification of arrows that give us details about the internal structure of some object.

One such structure is “cartesian products”. This identifies a product object in the category, in terms of introduction and elimination arrows, along with a universal property stating that all product-like objects in the category must be mappable (in terms of their being a product) to the object identified by the cartesian structure.

For example, I could pick tuples (a, b) in Haskell as a product, or some custom data type Tuple, or even a larger data structure (a, b, c), and all of these would be products for a and b. However, only tuples and Tuple are universal, in the sense that every other product has a mapping to them, but not vice versa. Further, the mapping between tuple and Tuple must be an isomorphism. This leaves me free to choose either as the product object for the Haskell category.
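Written out, the universal property says (in the standard formulation, independent of the Coq encoding): A × B, together with projections π₁ : A × B ⟶ A and π₂ : A × B ⟶ B, is a product of A and B if for every object Z with arrows f : Z ⟶ A and g : Z ⟶ B there is a unique arrow ⟨f, g⟩ : Z ⟶ A × B such that

π₁ ∘ ⟨f, g⟩ ≈ f and π₂ ∘ ⟨f, g⟩ ≈ g

That uniqueness is what forces any two universal products to be isomorphic, which is why tuple and Tuple above are interchangeable.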

Product categories

Whereas cartesian categories tell us more about the internal structure of some product object in a category, product categories are a construction on top of some category, without adding anything to our knowledge of its internals. In particular, a product category is a category whose objects are pairs of objects from some other category, and whose arrows are pairs of the corresponding arrows between those two objects. Arrow equivalence, identity and composition follow similarly. Thus, every object in a product category is a product, and arrows must always "operate on products".

Bifunctors

If a functor maps from a product category to some other category (which could also be another product category, but doesn’t have to be), we call it a bifunctor. Another way to think of it is as a “functor of two arguments”.

Endofunctors

A functor that maps a category to itself (though it may map objects to different objects, etc) is called an endofunctor on that category.

The category of endofunctors

The category of endofunctors on some category has as objects every endofunctor, and as arrows natural transformations between these endofunctors. Here identity is the identity transformation, and composition is composition between natural transformations. We can designate the category of endofunctors using the name [C, C], for some category C.

Monoidal categories

A monoidal category reveals the structure of a tensor operation in the category, plus a special object, the unit of the tensor operation. Along with these come laws expressed in terms of isomorphisms between the results of the tensor:

tensor : C × C ⟶ C where "x ⨂ y" := (tensor (x, y));
I : C;

unit_left  {X} : I ⨂ X ≅ X;
unit_right {X} : X ⨂ I ≅ X;

tensor_assoc {X Y Z} : (X ⨂ Y) ⨂ Z ≅ X ⨂ (Y ⨂ Z)

Note that the same category may be monoidal in multiple different ways. Also, we needed product categories, since the tensor is a bifunctor from the product of some category C to itself.

We could also have specified the tensor in curried form, as a functor from C to the category of endofunctors on C:

tensor : C ⟶ [C, C]

However, this adds no information (the two forms are isomorphic), and just made some of the later proofs a bit more complicated.

Monoidal composition

The category of endofunctors on C is a monoidal category, taking the identity endofunctor as unit, and endofunctor composition as the tensor. It is monoidal in other ways too, but this is the structure of interest concerning monads.

Monoid objects

A monoid object in a monoidal category is an object in the category, plus a pair of arrows. Let's call the arrows mappend and mempty. These map from the tensor product of the monoid object with itself to the monoid object, and from the monoidal unit to the monoid object, along with preservation of the monoid laws in terms of arrow equivalences. In Coq it looks like this:

Context `{C : Category}.
Context `{@Monoidal C}.

(* Here [mon] is the monoid object. *)
Class Monoid (mon : C) := {
  mappend : mon ⨂ mon ~> mon;
  mempty : I ~> mon;

  mempty_left : (* I ⨂ mon ≈ mon *)
    mappend ∘ bimap mempty id ≈ to (@unit_left C _ mon);
  mempty_right : (* mon ⨂ I ≈ mon *)
    mappend ∘ bimap id mempty ≈ to (@unit_right C _ mon);

  (* (mon ⨂ mon) ⨂ mon ≈ mon ⨂ (mon ⨂ mon) *)
  mappend_assoc :
    mappend ∘ bimap mappend id
      ≈ mappend ∘ bimap id mappend ∘ to tensor_assoc
}.

Monads are monoid objects

Given all of the above, we can now state that every monad is a monoid object in the monoidal category of endofunctors, taking composition as the tensor product. return is the mempty natural transformation of that object, and join, the mappend natural transformation:

Context `{C : Category}.
Context `{M : C ⟶ C}.

Definition Endofunctors `(C : Category) := ([C, C]).

Program Definition Monoid_Monad
        (m : @Monoid (Endofunctors C) Composition_Monoidal M) : 
  Monad := {|
  ret  := transform[mempty[m]];
  join := transform[mappend[m]]
|}.
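Unpacking what this means concretely (a standard translation, not taken from the linked proof): with composition as the tensor, the monoid laws for mempty and mappend become exactly the familiar monad laws, written Haskell-style:

join . fmap join = join . join
join . fmap return = id = join . return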

This makes no assumptions about the structure of the category C, other than what has been stated above, and no other aspects of category theory are needed. The proof, again, is here.

Note that there is another way to arrive at monads, from the adjunction of two functors, which I also have a proof for, but this can wait until another post.

Footnotes: [1] We say small here to avoid the paradox of Cat not containing itself.

Syndicated 2017-05-04 00:00:00 from Lost in Technopolis

3 May 2017 LaForge   » (Master)

Overhyped Docker

Overhyped Docker missing the most basic features

I've always been extremely skeptical of suddenly emerging over-hyped technologies, particularly if they advertise to solve problems by adding yet another layer to systems that are already sufficiently complex themselves.

There are of course many issues with containers, ranging from replicated system libraries to the basic underlying statement that you're giving up on the system package manager to properly deal with dependencies.

I'm also highly skeptical of FOSS projects that are primarily driven by one (VC funded?) company. Especially if their offering includes a so-called cloud service which they can stop operating at any given point in time, or (more realistically) first get everybody to use and then start charging for.

But well, despite all the bad things I read about it over the years, one day in May 2017 I finally thought: let's give it a try. My problem to solve as a test balloon is fairly simple.

My basic use case

The plan is to start OsmoSTP, the m3ua-testtool and the sua-testtool, which both connect to OsmoSTP. By running this setup inside containers and inside an internal network, we could then execute the entire testsuite e.g. during jenkins test without having IP address or port number conflicts. It could even run multiple times in parallel on one buildhost, verifying different patches as part of the continuous integration setup.

This application is not so complex. All it needs is three containers, an internal network and some connections in between. Should be a piece of cake, right?

But enter the world of buzzword-fueled web-4000.0 software-defined virtualised and orchestrated container NFV + SDN voodoo: It turns out to be impossible, at least with the preferred tools they advertise.

Dockerfiles

The part that worked relatively easily was writing a few Dockerfiles to build the actual containers. All based on debian:jessie from the library.

As m3ua-testsuite is written in guile, and needs to build some guile plugin/extension, I had to actually include guile-2.0-dev and other packages in the container, making it a bit bloated.

I couldn't immediately find a nice example Dockerfile recipe that would allow me to build stuff from source outside of the container, and then install the resulting binaries into the container. This seems to be a somewhat weak spot, where more support/infrastructure would be helpful. I guess the idea is that you simply install applications via package feeds and apt-get. But I digress.

So after some tinkering, I ended up with three docker containers:

  • one running OsmoSTP
  • one running m3ua-testtool
  • one running sua-testtool

I also managed to create an internal bridged network between the containers, so the containers could talk to one another.

However, I have to manually start each of the containers with ugly long command line arguments, such as docker run --network sigtran --ip 172.18.0.200 -it osmo-stp-master. This is of course sub-optimal, and what Docker Services + Stacks should resolve.

Services + Stacks

The idea seems good: A service defines how a given container is run, and a stack defines multiple containers and their relation to each other. So it should be simple to define a stack with three services, right?

Well, it turns out that it is not. Docker documents that you can configure a static ipv4_address [1] for each service/container, but it seems related configuration statements are simply silently ignored/discarded [2], [3], [4].

This seems to be related to the fact that, for some strange reason, stacks can (at least in later versions of docker) only use overlay type networks, rather than the much simpler bridge networks. And while bridge networks appear to support static IP address allocations, overlay apparently doesn't.

I still have a hard time grasping that something that considers itself a serious product for production use (by a company with an estimated value of over a billion USD, not by a few hobbyists) has no support for running containers on static IP addresses. How many applications out there have I seen that require static IP address configuration? How much simpler do setups get if you don't have to rely on things like dynamic DNS updates (or DNS availability at all)?

So I'm stuck with having to manually configure the network between my containers, and manually starting them by clumsy shell scripts, rather than having a proper abstraction for all of that. Well done :/

Exposing Ports

Unrelated to all of the above: If you run some software inside containers, you will pretty soon want to expose some network services from containers. This should also be the most basic task on the planet.

However, it seems that the creators of docker live in the early 1980ies, where only TCP and UDP transport protocols existed. They seem to have missed that by the late 1990ies to early 2000s, protocols like SCTP or DCCP were invented.

But yet, in 2017, Docker chooses to support only TCP and UDP when exposing ports.

Now some of the readers may think 'who uses SCTP anyway'. I will give you a straight answer: Everyone who has a mobile phone uses SCTP. This is due to the fact that pretty much all the connections inside cellular networks (at least for 3G/4G networks, and in reality also for many 2G networks) are using SCTP as underlying transport protocol, from the radio access network into the core network. So every time you switch your phone on, or do anything with it, you are using SCTP. Not on your phone itself, but by all the systems that form the network that you're using. And with the drive to C-RAN, NFV, SDN and all the other buzzwords also appearing in the Cellular Telecom field, people should actually worry about it, if they want to be a part of the software stack that is used in future cellular telecom systems.

Summary

After spending the better part of a day to do something that seemed like the most basic use case for running three networked containers using Docker, I'm back to step one: most likely inventing some custom scripts based on unshare to run my three test programs in a separate network namespace for isolated test suite execution as part of a Jenkins CI setup :/

It's also clear that the Docker folks apparently don't care much about playing a role in the Cellular Telecom world, which is increasingly moving away from proprietary and hardware-based systems (like STPs) to virtualised, software-based systems.

[1]https://docs.docker.com/compose/compose-file/#ipv4address-ipv6address
[2]https://forums.docker.com/t/docker-swarm-1-13-static-ips-for-containers/28060
[3]https://github.com/moby/moby/issues/31860
[4]https://github.com/moby/moby/issues/24170

Syndicated 2017-05-02 22:00:00 from LaForge's home page

1 May 2017 mjg59   » (Master)

Intel's remote AMT vulnerability

Intel just announced a vulnerability in their Active Management Technology stack. Here's what we know so far.

Background

Intel chipsets for some years have included a Management Engine, a small microprocessor that runs independently of the main CPU and operating system. Various pieces of software run on the ME, ranging from code to handle media DRM to an implementation of a TPM. AMT is another piece of software running on the ME, albeit one that takes advantage of a wide range of ME features.

Active Management Technology

AMT is intended to provide IT departments with a means to manage client systems. When AMT is enabled, any packets sent to the machine's wired network port on port 16992 will be redirected to the ME and passed on to AMT - the OS never sees these packets. AMT provides a web UI that allows you to do things like reboot a machine, provide remote install media or even (if the OS is configured appropriately) get a remote console. Access to AMT requires a password - the implication of this vulnerability is that that password can be bypassed.

Remote management

AMT has two types of remote console: emulated serial and full graphical. The emulated serial console requires only that the operating system run a console on that serial port, while the graphical environment requires drivers on the OS side. However, an attacker who enables emulated serial support may be able to use that to configure grub to enable serial console. Remote graphical console seems to be problematic under Linux but some people claim to have it working, so an attacker would be able to interact with your graphical console as if you were physically present. Yes, this is terrifying.

Remote media

AMT supports providing an ISO remotely. In older versions of AMT (before 11.0) this was in the form of an emulated IDE controller. In 11.0 and later, this takes the form of an emulated USB device. The nice thing about the latter is that any image provided that way will probably be automounted if there's a logged in user, which probably means it's possible to use a malformed filesystem to get arbitrary code execution in the kernel. Fun!

The other part of the remote media is that systems will happily boot off it. An attacker can reboot a system into their own OS and examine drive contents at their leisure. This doesn't let them bypass disk encryption in a straightforward way[1], so you should probably enable that.

How bad is this

That depends. Unless you've explicitly enabled AMT at any point, you're probably fine. The drivers that allow local users to provision the system would require administrative rights to install, so as long as you don't have them installed then the only local users who can do anything are the ones who are admins anyway. If you do have it enabled, though…

How do I know if I have it enabled?

Yeah this is way more annoying than it should be. First of all, does your system even support AMT? AMT requires a few things:

1) A supported CPU
2) A supported chipset
3) Supported network hardware
4) The ME firmware to contain the AMT firmware

Merely having a "vPRO" CPU and chipset isn't sufficient - your system vendor also needs to have licensed the AMT code. Under Linux, if lspci doesn't show a communication controller with "MEI" in the description, AMT isn't running and you're safe. If it does show an MEI controller, that still doesn't mean you're vulnerable - AMT may still not be provisioned. If you reboot you should see a brief firmware splash mentioning the ME. Hitting ctrl+p at this point should get you into a menu which should let you disable AMT.

What do we not know?

We have zero information about the vulnerability, other than that it allows unauthenticated access to AMT. One big thing that's not clear at the moment is whether this affects all AMT setups, setups that are in Small Business Mode, or setups that are in Enterprise Mode. If the latter, the impact on individual end-users will be basically zero - Enterprise Mode involves a bunch of effort to configure and nobody's doing that for their home systems. If it affects all systems, or just systems in Small Business Mode, things are likely to be worse.

What should I do?

Make sure AMT is disabled. If it's your own computer, you should then have nothing else to worry about. If you're a Windows admin with untrusted users, you should also disable or uninstall LMS by following these instructions.

Does this mean every Intel system built since 2008 can be taken over by hackers?

No. Most Intel systems don't ship with AMT. Most Intel systems with AMT don't have it turned on.

Does this allow persistent compromise of the system?

Not in any novel way. An attacker could disable Secure Boot and install a backdoored bootloader, just as they could with physical access.

But isn't the ME a giant backdoor with arbitrary access to RAM?

Yes, but there's no indication that this vulnerability allows execution of arbitrary code on the ME - it looks like it's just (ha ha) an authentication bypass for AMT.

Is this a big deal anyway?

Yes. Fixing this requires a system firmware update in order to provide new ME firmware (including an updated copy of the AMT code). Many of the affected machines are no longer receiving firmware updates from their manufacturers, and so will probably never get a fix. Anyone who ever enables AMT on one of these devices will be vulnerable. That's ignoring the fact that firmware updates are rarely flagged as security critical (they don't generally come via Windows update), so even when updates are made available, users probably won't know about them or install them.

Avoiding this kind of thing in future

Users ought to have full control over what's running on their systems, including the ME. If a vendor is no longer providing updates then it should at least be possible for a sufficiently desperate user to pay someone else to do a firmware build with the appropriate fixes. Leaving firmware updates at the whims of hardware manufacturers who will only support systems for a fraction of their useful lifespan is inevitably going to end badly.

How certain are you about any of this?

Not hugely - the quality of public documentation on AMT isn't wonderful, and while I've spent some time playing with it (and related technologies) I'm not an expert. If anything above seems inaccurate, let me know and I'll fix it.

[1] Eh well. They could reboot into their own OS, modify your initramfs (because that's not signed even if you're using UEFI Secure Boot) such that it writes a copy of your disk passphrase to /boot before unlocking it, wait for you to type in your passphrase, reboot again and gain access. Sealing the encryption key to the TPM would avoid this.

comment count unavailable comments

Syndicated 2017-05-01 22:52:01 from Matthew Garrett

1 May 2017 pabs3   » (Master)

FLOSS Activities April 2017

Changes

Issues

Review

Administration

  • Debian systems: quiet a logrotate warning, investigate issue with DNSSEC and alioth, deploy fix on our first stretch buildd, restore alioth git repo after history rewrite, investigate iptables segfaults on buildd and investigate time issues on a NAS
  • Debian derivatives census: delete patches over 5 MiB, re-enable the service
  • Debian wiki: investigate some 403 errors, fix alioth KGB config, deploy theme changes, close a bogus bug report, ping 1 user with bouncing email, whitelist 9 email addresses and whitelist 2 domains
  • Debian QA: deploy my changes
  • Debian mentors: security upgrades and service restarts
  • Openmoko: debug mailing list issue, security upgrades and reboots

Communication

  • Invite Wazo to the Debian derivatives census
  • Welcome ubilinux, Wazo and Roopa Prabhu (of Cumulus Linux) to the Debian derivatives census
  • Discuss HP/ProLiant wiki page with HPE folks
  • Inform git history rewriter about the git mailmap feature

Sponsors

The libconfig-crontab-perl backports and pyvmomi issue were sponsored by my employer. All other work was done on a volunteer basis.

Syndicated 2017-04-30 22:56:02 from Advogato

30 Apr 2017 mjg59   » (Master)

Looking at the Netgear Arlo home IP camera

Another in the series of looking at the security of IoT type objects. This time I've gone for the Arlo network connected cameras produced by Netgear, specifically the stock Arlo base system with a single camera. The base station is based on a Broadcom 5358 SoC with an 802.11n radio, along with a single Broadcom gigabit ethernet interface. Other than it only having a single ethernet port, this looks pretty much like a standard Netgear router. There's a convenient unpopulated header on the board that turns out to be a serial console, so getting a shell is only a few minutes' work.

Normal setup is straight forward. You plug the base station into a router, wait for all the lights to come on and then you visit arlo.netgear.com and follow the setup instructions - by this point the base station has connected to Netgear's cloud service and you're just associating it to your account. Security here is straightforward: you need to be coming from the same IP address as the Arlo. For most home users with NAT this works fine. I sat frustrated as it repeatedly failed to find any devices, before finally moving everything behind a backup router (my main network isn't NATted) for initial setup. Once you and the Arlo are on the same IP address, the site shows you the base station's serial number for confirmation and then you attach it to your account. Next step is adding cameras. Each base station is broadcasting an 802.11 network on the 2.4GHz spectrum. You connect a camera by pressing the sync button on the base station and then the sync button on the camera. The camera associates with the base station via WDS and now you're up and running.

This is the point where I get bored and stop following instructions, but if you're using a desktop browser (rather than using the mobile app) you appear to need Flash in order to actually see any of the camera footage. Bleah.

But back to the device itself. The first thing I traced was the initial device association. What I found was that once the device is associated with an account, it can't be attached to another account. This is good - I can't simply request that devices be rebound to my account from someone else's. Further, while the serial number is displayed to the user to disambiguate between devices, it doesn't seem to be what's used internally. Tracing the logon traffic from the base station shows it sending a long random device ID along with an authentication token. If you perform a factory reset, these values are regenerated. The device to account mapping seems to be based on this random device ID, which means that once the device is reset and bound to another account there's no way for the initial account owner to regain access (other than resetting it again and binding it back to their account). This is far better than many devices I've looked at.

Performing a factory reset also changes the WPA PSK for the camera network. Newsky Security discovered that doing so originally reset it to 12345678, which is, uh, suboptimal? That's been fixed in newer firmware, along with their discovery that the original random password choice was not terribly random.

All communication from the base station to the cloud seems to be over SSL, and everything validates certificates properly. This also seems to be true for client communication with the cloud service - camera footage is streamed back over port 443 as well.

Most of the functionality of the base station is provided by two daemons, xagent and vzdaemon. xagent appears to be responsible for registering the device with the cloud service, while vzdaemon handles the camera side of things (including motion detection). All of this is running as root, so in the event of any kind of vulnerability the entire platform is owned. For such a single purpose device this isn't really a big deal (the only sensitive data it has is the camera feed - if someone has access to that then root doesn't really buy them anything else). They're statically linked and stripped so I couldn't be bothered spending any significant amount of time digging into them. In any case, they don't expose any remotely accessible ports and only connect to services with verified SSL certificates. They're probably not a big risk.

Other than the dependence on Flash, there's nothing immediately concerning here. What is a little worrying is a family of daemons running on the device and listening to various high numbered UDP ports. These appear to be provided by Broadcom and a standard part of all their router platforms - they're intended for handling various bits of wireless authentication. It's not clear why they're listening on 0.0.0.0 rather than 127.0.0.1, and it's not obvious whether they're vulnerable (they mostly appear to receive packets from the driver itself, process them and then stick packets back into the kernel so who knows what's actually going on), but since you can't set one of these devices up in the first place without it being behind a NAT gateway it's unlikely to be of real concern to most users. On the other hand, the same daemons seem to be present on several Broadcom-based router platforms where they may end up being visible to the outside world. That's probably investigation for another day, though.

Overall: pretty solid, frustrating to set up if your network doesn't match their expectations, wouldn't have grave concerns over having it on an appropriately firewalled network.

comment count unavailable comments

Syndicated 2017-04-30 05:09:46 from Matthew Garrett

30 Apr 2017 mentifex   » (Master)

Mentifex on predictive textlike brain mechanism

The predictive textlike brain mechanism mentioned in the article perhaps works because each word, as a concept, has many associative tags to other concept-words frequently used in connection with the triggering word. A similar neural network of associative tags is at work in the Mind.Forth Strong AI, which has been ported into Strawberry Perl 5 and which you may download free of charge in order to study the Theory of Mind depicted in the diagram below, also available as an animated brain-mind GIF:
./^^^^^^^\..SEMANTIC MEMORY../^^^^^^^\
| Visual. | .. syntax ..... |Auditory |
| Memory..| .. /..\---------|-------\ |
| Channel.| . ( .. )function|Memory | |
| . . . . | .. \__/---/ . \ | . . . | |
| . /-----|---\ |flush\___/ | . . . | |
| . | . . | . | |vector | . | .word | |
| ._|_ .. | . v_v_____. | . | .stem | |
| / . \---|--/ . . . .\-|---|--/ .\ | |
| \___/ . | .\________/ | . | .\__/ | |
| percept | . concepts _V . | .. | .| |
| . . . . | . . . . . / . \-|----' .| |
| . . . . | . . . . .( . . )| ending| |
| . . . . | inflection\___/-|->/..\_| |
| . . . . | . . . . . . . . | .\__/.. |
Syntax generates thought from concepts.
AI Mind Maintainer jobs will be like working in a nuclear power plant control room.
