Wednesday, 3 May 2017

RRDtool Moving Average


I work with a large number of time series. These time series are basically network measurements arriving every 10 minutes; some of them are periodic (e.g. bandwidth), while some others are not (e.g. the amount of routing traffic). I would like a simple algorithm for online outlier detection. Basically I want to keep the whole historical data for each time series in memory (or on disk), and I want to detect any outlier in a live scenario (each time a new sample is captured). What is the best way to achieve this? I am currently using a moving average to remove some noise, but then what next? Simple things like standard deviation or MAD against the whole data set do not work well (I cannot assume the time series are stationary), and I would like something more accurate, ideally a black box like double outlier_detection(double* vector, double value), where vector is the array of doubles holding the historical data and the return value is the anomaly score for the new sample value. asked Aug 2 '10 at 20:37

Yes, I have assumed the frequency is known and specified. There are methods to estimate the frequency automatically, but that would complicate the function considerably. If you need to estimate the frequency, try asking a separate question about it, and I will probably provide an answer. But it needs more space than I have available in a comment. – Rob Hyndman

A good solution will have several ingredients, including: use a resistant, moving-window smooth to remove nonstationarity; re-express the original data so that the residuals with respect to the smooth are approximately symmetrically distributed (given the nature of your data, it is likely that their square roots or logarithms would give symmetric residuals); apply control-chart methods, or at least control-chart thinking, to the residuals.

As far as the last one goes, control-chart thinking shows that conventional thresholds like 2 SD or 1.5 times the IQR beyond the quartiles work poorly because they trigger too many false out-of-control signals. People usually use 3 SD in control-chart work, so 2.5 (or even 3) times the IQR beyond the quartiles would be a good starting point.

I have more or less outlined the nature of Rob Hyndman's solution, adding to it two main points: the potential need to re-express the data, and the wisdom of being more conservative in signaling an outlier. I am not sure that loess is good for an online detector, though, because it does not work well at the endpoints. You could instead use something as simple as a moving median filter (as in Tukey's resistant smoothing). If outliers do not come in bursts, you can use a narrow window (5 data points, perhaps, which will break down only with a burst of 3 or more outliers within a group of 5). Once you have done the analysis to determine a good re-expression of the data, it is unlikely you will need to change the re-expression. Therefore your online detector really only needs to reference the most recent values (the latest window), because it will not use the earlier data at all.
If you have really long time series, you could go further and analyze autocorrelation and seasonality (such as recurring daily or weekly fluctuations) to improve the procedure. answered Aug 26 '10 at 18:02

John, 1.5 IQR is Tukey's original recommendation for the longest whiskers on a boxplot, and 3 IQR is his recommendation for marking points as "far outliers" (a riff on a popular '60s phrase). This is built into many boxplot algorithms. The recommendation is analyzed theoretically in Hoaglin, Mosteller, & Tukey, Understanding Robust and Exploratory Data Analysis. – whuber ♦ Oct 9 '12 at 21:38

This confirms what I have seen with the time-series data I have been trying to analyze: windowed averages and windowed standard deviations. ((x - avg) / sd) > 3 seems to capture the points I want to flag as outliers, or at least warn about as outliers; I flag anything higher than 10 sd as an extreme-error outlier. The problem I run into is what an ideal window length is; I am playing with something between 4 and 8 data points. – NeoZenith Jun 29 '16 at 8:00

Neo, your best bet may be to experiment with a subset of your data and confirm your conclusions with tests on the rest. You could also carry out a more formal cross-validation (but special care is needed with time-series data because of the interdependence of all the values). – whuber ♦ Jun 29 '16 at 12:10

(This answer responded to a duplicate, now closed, question about detecting outstanding events, which presented some data in graphical form.) Outlier detection depends on the nature of the data and on what you are willing to assume about them. General-purpose methods rely on robust statistics. The spirit of this approach is to characterize the bulk of the data in a way that is not influenced by outliers and then to point to any individual values that do not fit within that characterization.

Because this is a time series, it adds the complication of needing to detect outliers on an ongoing basis. If this is to be done as the series unfolds, then we are allowed to use only older data for the detection, not future data. Moreover, as protection against the many repeated tests, we would want to use a method that has a very low false-positive rate.

These considerations suggest running a simple, robust, moving-window outlier test over the data. There are many possibilities, but one that is simple, easily understood and easily implemented is based on a running MAD: the median absolute deviation from the median. This is a strongly robust measure of variation within the data, akin to a standard deviation. An outlying peak would be several MADs or more greater than the median. There is still some tuning to be done: how much of a deviation from the bulk of the data should be considered outlying, and how far back in time should one look? Let's leave these as parameters for experimentation.
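To make the recipe concrete, here is a minimal Python sketch of such a rolling median-plus-MAD detector (my own illustration, not the answer's code; the window of 30 points and the 5-MAD threshold simply mirror the R example described next, and the toy series is invented):

    import numpy as np

    def mad_outliers(y, window=30, threshold=5.0):
        """Flag y[i] when it exceeds median + threshold*MAD of the trailing window."""
        y = np.asarray(y, dtype=float)
        flags = np.zeros(len(y), dtype=bool)
        for i in range(len(y)):
            # within the initial window the first threshold is reused,
            # matching the behaviour described for the R version below
            past = y[i - window:i] if i >= window else y[:window]
            med = np.median(past)
            mad = np.median(np.abs(past - med))
            flags[i] = y[i] > med + threshold * mad
        return flags

    rng = np.random.default_rng(0)
    series = rng.normal(10, 1, 200)
    series[[60, 150]] += 12                    # two injected spikes
    print(np.where(mad_outliers(series))[0])   # should report indices near 60 and 150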
Here is an R implementation applied to data x = (1, 2, ..., n) (with n = 1150 to emulate the data) and corresponding values y. Applied to a data set like the red curve shown in the question, it produces this result: the data are shown in red, the 30-day window of median + 5*MAD thresholds in gray, and the outliers, which are simply the data values above the gray curve, in black. (The threshold can only be computed starting at the end of the initial window. For all data within this initial window, the first threshold is used: that is why the gray curve is flat between x = 0 and x = 30.)

The effects of changing the parameters are (a) increasing the window size tends to smooth out the gray curve and (b) increasing the threshold raises the gray curve. Knowing this, one can take an initial segment of the data and quickly identify values of the parameters that best separate the outlying peaks from the rest of the data, then apply those parameter values to checking the rest of the data. If a plot shows the method getting worse over time, it means the nature of the data is changing and the parameters may need re-tuning.

Note how little this method assumes about the data: they do not have to be normally distributed; they do not need to exhibit any periodicity; they do not even have to be non-negative. All it assumes is that the data behave in reasonably similar ways over time and that the outlying peaks are visibly higher than the rest of the data. If anyone would like to experiment (or compare some other solution to the one offered here), here is the code I used to produce data like those shown in the question.

I suspect a sophisticated time-series model will not work for you because of the time it takes to detect outliers with such a method. Therefore here is a workaround: first, establish baseline normal traffic patterns for a year, based on manual analysis of historical data, accounting for time of day, weekday vs. weekend, month of the year, etc. Use this baseline together with some simple mechanism (e.g. the moving average suggested by Carlos) to detect outliers. You may also want to check the statistical process control literature for some ideas.

Yes, this is exactly what I am doing: until now I have been manually splitting the signal into periods, so that for each of them I can define a confidence interval within which the signal is supposed to be stationary, and therefore I can use standard methods such as standard deviation. The real problem is that I cannot decide on the expected pattern for all the signals I have to analyze, and that is why I am looking for something more intelligent. – gianluca Aug 2 '10 at 21:37

Here is one idea: Step 1: implement and estimate a generic time-series model on a one-time basis, based on historical data. This can be done offline. Step 2: use the resulting model to detect outliers. Step 3: at some frequency (perhaps every month), re-calibrate the time-series model (this can be done offline) so that your step 2 detection of outliers does not drift too far out of step with current traffic patterns. Would that work for your context? – user28 Aug 2 '10 at 22:24

Yes, this could work.
I was thinking of a similar approach (re-computing the baseline every week, which can be CPU-intensive if you have hundreds of univariate time series to analyze). BTW the really hard question is: what is the best black-box-style algorithm for modeling a completely generic signal, taking into account noise, trend estimation and seasonality? AFAIK, every approach in the literature requires a really hard "parameter tuning" phase, and the only automatic method I have found is an ARIMA model by Hyndman (robjhyndman.com/software/forecast). Am I missing something? – gianluca

Again, this works fairly well if the signal is supposed to have a seasonality like that, but if I use a completely different kind of time series (e.g. the average TCP round-trip time over time), this method will not work (since it would be better to handle that case with a simple global mean and standard deviation over a sliding window of historical data). – gianluca

Unless you are willing to implement a general time-series model (which brings its drawbacks in terms of latency and so on), I am pessimistic that you will find a general implementation which at the same time is simple enough to work for all possible kinds of time series. – user28 Aug 2 '10 at 22:06

Another comment: I know a good answer might be "so you could estimate the periodicity of the signal and decide the algorithm to use according to it", but I did not find a really good solution to this other problem (I played a bit with spectral analysis using the DFT and with time analysis using the autocorrelation function, but my time series contain a lot of noise and such methods give crazy results most of the time). – gianluca Aug 2 '10 at 22:06

A comment on your last comment: that is why I am looking for a more generic approach, but I need a kind of "black box" because I cannot make any assumption about the analyzed signal, and therefore I cannot create a training set for a learning algorithm. – gianluca Aug 2 '10 at 22:09

Since it is a time series, a simple exponential filter (en.wikipedia.org/wiki/Exponential_smoothing) will smooth the data. It is a very good filter since you do not need to accumulate old data points. Compare every newly smoothed data value with its unsmoothed value. Once the deviation exceeds a certain predefined threshold (depending on what you believe an outlier in your data is), your outlier can easily be detected. answered Apr 30 '15 at 8:50
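A minimal sketch of that filter-and-compare idea (the smoothing factor and threshold are arbitrary, and skipping the filter update on flagged samples is a small tweak of mine, not part of the answer):

    def ewma_outliers(samples, alpha=0.3, threshold=4.0):
        """Exponentially smooth the series and flag values that deviate too far
        from the running smoothed value."""
        smoothed = None
        flags = []
        for x in samples:
            if smoothed is None:           # the first sample seeds the filter
                smoothed = x
                flags.append(False)
                continue
            is_outlier = abs(x - smoothed) > threshold
            flags.append(is_outlier)
            if not is_outlier:             # do not let outliers drag the baseline
                smoothed = alpha * x + (1 - alpha) * smoothed
        return flags

    print(ewma_outliers([10, 11, 10, 12, 30, 11, 10]))
    # -> [False, False, False, False, True, False, False]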
You could use the standard deviation of the last N measurements (you have to pick a suitable N). A good anomaly score would be how many standard deviations a measurement is away from the moving average. answered Aug 2 '10 at 20:48

Thanks for your answer, but what if the signal exhibits a high seasonality (i.e. many network measurements are characterized by a daily and a weekly pattern at the same time, e.g. night vs. day or weekend vs. working days)? An approach based on standard deviation will not work in that case. – gianluca

For example, if I get a new sample every 10 minutes and I am doing outlier detection on a company's network bandwidth usage, basically around 6 pm this measure will drop (this is an expected and totally normal pattern), and a standard deviation computed over a sliding window will fail (because it will surely trigger an alert). At the same time, if the measure drops at 4 pm (deviating from the usual baseline), this is a real outlier. – gianluca

What I do is group the measurements by hour and day of the week and compare standard deviations of that. It still does not correct for things like holidays and summer/winter seasonality, but it is correct most of the time. The drawback is that you really need to collect a year or so of data for there to be enough that the stddev starts making sense.

Spectral analysis detects periodicity in stationary time series. The frequency-domain approach based on spectral density estimation is the approach I would recommend as your first step. If for certain periods irregularity means a much higher peak than is typical for that period, then the series with such irregularities would not be stationary and spectral analysis would not be appropriate. But assuming you have identified the period that has the irregularities, you should be able to determine approximately what the normal peak height would be, and can then set a threshold at some level above that average to flag the irregular cases. answered Sep 3 at 14:59

I suggest the scheme below, which should be implementable in a day or so.

Training: collect as many samples as you can hold in memory; remove obvious outliers using the standard deviation for each attribute; compute and store the correlation matrix and also the mean of each attribute; compute and store the Mahalanobis distances of all your samples.

Computing outlierness: for the single sample whose outlierness you want to know, retrieve the means, the covariance matrix and the Mahalanobis distances from training; compute the Mahalanobis distance d for your sample; return the percentile in which d falls (using the Mahalanobis distances from the training set). That will be your outlier score: 100 is an extreme outlier.

PS: when computing the Mahalanobis distance, use the correlation matrix, not the covariance matrix. This is more robust when the sample measurements vary in unit and magnitude.
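A compact Python sketch of that recipe (illustrative only: the toy data are mine, and the pre-cleaning step that removes obvious outliers per attribute is omitted):

    import numpy as np

    def fit(train):
        """train: (n_samples, n_features). Store means, stds, the inverse
        correlation matrix, and the sorted training Mahalanobis distances."""
        mean, std = train.mean(axis=0), train.std(axis=0)
        z = (train - mean) / std          # standardising makes this use the correlation matrix
        inv_corr = np.linalg.pinv(np.corrcoef(train, rowvar=False))
        dists = np.sqrt(np.einsum('ij,jk,ik->i', z, inv_corr, z))
        return mean, std, inv_corr, np.sort(dists)

    def outlier_score(sample, model):
        """Percentile of the sample's Mahalanobis distance among training distances."""
        mean, std, inv_corr, train_dists = model
        z = (np.asarray(sample) - mean) / std
        d = np.sqrt(z @ inv_corr @ z)
        return 100.0 * np.searchsorted(train_dists, d) / len(train_dists)

    rng = np.random.default_rng(1)
    x = rng.normal(size=(500, 1))
    train = np.hstack([x, x + 0.1 * rng.normal(size=(500, 1))])
    model = fit(train)
    print(outlier_score([0.0, 0.1], model), outlier_score([3.0, -3.0], model))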
Graphite [1] does two pretty simple tasks: storing numbers that change over time and graphing them. A lot of software has been written over the years to do these same tasks. What makes Graphite unique is that it provides this functionality as a network service that is both easy to use and highly scalable. The protocol for feeding data into Graphite is simple enough that you could learn to do it by hand in a few minutes (not that you would actually want to, but it is a decent litmus test for simplicity). Rendering graphs and retrieving data points is as easy as fetching a URL. This makes it very natural to integrate Graphite with other software and enables users to build powerful applications on top of it.

One of the most common uses of Graphite is building web-based dashboards for monitoring and analysis. Graphite was born in a high-volume e-commerce environment and its design reflects this: scalability and real-time access to data are key goals. The components that allow Graphite to achieve these goals include a specialized database library and its storage format, a caching mechanism for optimizing I/O operations, and a simple yet effective way of clustering Graphite servers. Rather than simply describing how Graphite works today, I will explain how Graphite was initially implemented (quite naively), what problems I ran into, and how I devised solutions to them.

7.1. The Database Library: Storing Time-Series Data

Graphite is written entirely in Python and consists of three major components: a database library named Whisper, a back-end daemon named Carbon, and a front-end webapp that renders graphs and provides a basic UI. While Whisper was written specifically for Graphite, it can also be used independently. It is very similar in design to the round-robin database used by RRDtool, and it only stores time-series numeric data. Usually we think of databases as server processes that client applications talk to over sockets. However Whisper, much like RRDtool, is a database library used by applications to manipulate and retrieve data stored in specially formatted files. The most basic Whisper operations are create, to make a new Whisper file; update, to write new data points to a file; and fetch, to retrieve data points.

Figure 7.1: Basic anatomy of a Whisper file

As shown in Figure 7.1, Whisper files consist of a header section containing various metadata, followed by one or more archive sections. Each archive is a sequence of consecutive data points, which are (timestamp, value) pairs. When an update or fetch operation is performed, Whisper determines the offset in the file where the data should be written to or read from, based on the timestamp and the archive configuration.

7.2. The Back End: A Simple Storage Service

Graphite's back end is a daemon process called carbon-cache, usually simply referred to as Carbon. It is built on Twisted, a highly scalable event-driven I/O framework for Python. Twisted enables Carbon to talk to a large number of clients efficiently and to handle a large amount of traffic with low overhead. Figure 7.2 shows the flow of data among Carbon, Whisper and the webapp: client applications collect data and send it to the Graphite back end, Carbon, which stores the data using Whisper. This data can then be used by the Graphite webapp to generate graphs.

Figure 7.2: Data flow

The primary function of Carbon is to store data points for metrics provided by clients. In Graphite terminology, a metric is any measurable quantity that can vary over time (like the CPU utilization of a server or the number of sales of a product). A data point is simply a (timestamp, value) pair corresponding to the measured value of a particular metric at a point in time.
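As a small illustration of the Whisper create, update and fetch operations described above, here is a sketch using the standalone whisper library (the file name and the single 60-second-for-one-day archive are my own choices, not anything Graphite prescribes):

    import os
    import time
    import whisper                          # pip install whisper

    path = "cpuUsage.wsp"                   # hypothetical metric file
    if not os.path.exists(path):
        # one archive: 60-second resolution retained for one day (1440 points)
        whisper.create(path, [(60, 1440)])

    now = int(time.time())
    whisper.update(path, 42.0, timestamp=now)          # write a single data point

    (start, end, step), values = whisper.fetch(path, fromTime=now - 3600)
    print(start, end, step, [v for v in values if v is not None])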
Metrics are uniquely identified by their name, and the name of each metric, as well as its data points, are provided by client applications. A common type of client application is a monitoring agent that collects system or application metrics and sends its collected values to Carbon for easy storage and visualization. Metrics in Graphite have simple hierarchical names, similar to filesystem paths, except that a dot is used to delimit the hierarchy rather than a slash or backslash. Carbon will respect any legal name and creates a Whisper file for each metric to store its data points. The Whisper files are stored within Carbon's data directory in a filesystem hierarchy that mirrors the dot-delimited hierarchy in each metric's name, so that (for example) servers.www01.cpuUsage maps to .../servers/www01/cpuUsage.wsp.

When a client application wishes to send data points to Graphite, it must establish a TCP connection to Carbon, usually on port 2003 [2]. The client does all the talking; Carbon does not send anything over the connection. The client sends data points in a simple plain-text format, while the connection may be left open and reused as needed. The format is one line of text per data point, where each line contains the dotted metric name, the value, and a Unix epoch timestamp, separated by spaces. For example, a client might send a handful of such lines. At a high level, all Carbon does is listen for data in this format and try to store it on disk as quickly as possible using Whisper. Later on we will discuss the details of some tricks used to ensure scalability and get the best performance we can out of a typical hard drive.

7.3. The Front End: Graphs On Demand

The Graphite webapp allows users to request custom graphs with a simple URL-based API. Graphing parameters are specified in the query string of an HTTP GET request, and a PNG image is returned as the response. For example, a single URL can request a 500×300 graph for the metric servers.www01.cpuUsage and the last 24 hours of data. In fact, only the target parameter is required; all the others are optional and use default values if omitted. Graphite supports a wide variety of display options as well as data-manipulation functions that follow a simple functional syntax. For example, we could graph a 10-point moving average of the metric in our previous example. Functions can be nested, allowing for complex expressions and calculations. Here is another example that gives the running total of sales for the day, using per-product metrics of sales per minute: the sumSeries function computes a time series that is the sum of every metric matching the per-product salesPerMinute pattern, and integral then computes a running total rather than a per-minute count. From here it is not too hard to imagine how one could build a web UI for viewing and manipulating graphs.
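For illustration, here is a client-side sketch that feeds one data point to Carbon over the plaintext protocol and then asks the webapp for a rendered PNG via the URL API (the host name is an assumption; the metric name, port and graphing parameters are the ones mentioned above):

    import socket
    import time
    from urllib.request import urlopen

    CARBON_HOST, CARBON_PORT = "graphite.example.com", 2003    # assumed host

    # one line per data point: "<dotted metric name> <value> <unix timestamp>\n"
    line = "servers.www01.cpuUsage 42.5 %d\n" % int(time.time())
    with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
        sock.sendall(line.encode("ascii"))

    # later, request a 500x300 graph of a 10-point moving average for the last 24 hours
    url = ("http://graphite.example.com/render"
           "?target=movingAverage(servers.www01.cpuUsage,10)"
           "&from=-24hours&width=500&height=300")
    with open("cpuUsage.png", "wb") as out:
        out.write(urlopen(url).read())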
Graphite comes with its own Composer UI, shown in Figure 7.3, which does exactly this using JavaScript to modify the graph's URL parameters as the user clicks through menus of the available features.

Figure 7.3: Graphite's Composer interface

7.4. Dashboards

Since its inception, Graphite has been used as a tool for creating web-based dashboards. The URL API makes this a natural use case: making a dashboard is as simple as making an HTML page full of img tags whose sources are graph URLs. However, not everyone likes crafting URLs by hand, so Graphite's Composer UI provides a point-and-click method to create a graph from which you can simply copy and paste the URL. When coupled with another tool that allows rapid creation of web pages (like a wiki), this becomes easy enough that non-technical users can build their own dashboards quite easily.

7.5. An Obvious Bottleneck

Once my users started building dashboards, Graphite quickly began to have performance issues. I investigated the web server logs to see which requests were bogging it down. It was pretty obvious that the problem was the sheer number of graphing requests: the webapp was CPU-bound, rendering graphs constantly. I noticed that there were a lot of identical requests, and the dashboards were to blame. Imagine you have a dashboard with 10 graphs in it and the page refreshes once a minute. Each time a user opens the dashboard in their browser, Graphite has to handle 10 more requests per minute. This quickly becomes expensive.

A simple solution is to render each graph only once and then serve a copy of it to each user. The Django web framework (which Graphite is built on) provides an excellent caching mechanism that can use various back ends such as memcached. Memcached [3] is essentially a hash table provided as a network service. Client applications can get and set key-value pairs just like an ordinary hash table. The main benefit of using memcached is that the result of an expensive request (like rendering a graph) can be stored very quickly and retrieved later to handle subsequent requests. To avoid returning the same stale graphs forever, memcached can be configured to expire the cached graphs after a short period. Even if this is only a few seconds, the load it takes off Graphite is enormous because duplicate requests are so common.

Another common case that creates lots of rendering requests is when a user is tweaking display options and applying functions in the Composer UI. Each time the user changes something, Graphite must redraw the graph. The same data is involved in each request, so it makes sense to put the underlying data in the memcache as well. This keeps the UI responsive to the user, since the step of retrieving the data is skipped.
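In outline, the webapp-side caching looks something like the following sketch (this is my own illustration using the python-memcached client, not Graphite's actual code; expensive_render stands in for the real rendering step):

    import hashlib
    import memcache                         # pip install python-memcached

    cache = memcache.Client(["127.0.0.1:11211"])

    def expensive_render(graph_params):
        # stand-in for the CPU-bound graph rendering step
        return ("PNG for %r" % sorted(graph_params.items())).encode()

    def render_cached(graph_params, ttl=60):
        """Return a rendered graph, reusing a cached copy for identical requests."""
        key = "render:" + hashlib.sha1(
            repr(sorted(graph_params.items())).encode()).hexdigest()
        png = cache.get(key)
        if png is None:
            png = expensive_render(graph_params)
            cache.set(key, png, time=ttl)   # short expiry keeps graphs reasonably fresh
        return png

    print(render_cached({"target": "servers.www01.cpuUsage", "from": "-24hours"})[:12])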
7.6. Optimizing I/O

Imagine that you have 60,000 metrics that you send to your Graphite server, and each of these metrics has one data point per minute. Remember that each metric has its own Whisper file on the filesystem. This means Carbon must perform one write operation to 60,000 different files each minute. As long as Carbon can write to one file each millisecond, it should be able to keep up. This is not too far-fetched, but say you have 600,000 metrics updating each minute, or your metrics update every second, or perhaps you simply cannot afford fast enough storage. Whatever the case, assume that the rate of incoming data points exceeds the rate of write operations that your storage can keep up with. How should this situation be handled?

Most hard drives these days have slow seek time [4], that is, the delay between doing I/O operations at two different locations, compared to writing a contiguous sequence of data. This means the more contiguous writing we do, the more throughput we get. But if we have thousands of files that need to be written to frequently, and each write is very small (one Whisper data point is only 12 bytes), then our disks are definitely going to spend most of their time seeking.

Working under the assumption that the rate of write operations has a relatively low ceiling, the only way to increase our data point throughput beyond that rate is to write multiple data points in a single write operation. This is feasible because Whisper arranges consecutive data points contiguously on disk. So I added an update_many function to Whisper, which takes a list of data points for a single metric and compacts contiguous data points into a single write operation. Even though this made each write larger, the difference in time it takes to write ten data points (120 bytes) versus one data point (12 bytes) is negligible. It takes quite a few more data points before the size of each write starts to noticeably affect the latency.

Next I implemented a buffering mechanism in Carbon. Each incoming data point gets mapped to a queue based on its metric name and is then appended to that queue. Another thread repeatedly iterates through all of the queues and, for each one, pulls all of the data points out and writes them to the appropriate Whisper file with update_many. Going back to our example, if we have 600,000 metrics updating every minute and our storage can only keep up with 1 write per millisecond, then the queues will end up holding about 10 data points each on average. The only resource this costs us is memory, which is relatively plentiful since each data point is only a few bytes.

This strategy dynamically buffers as many data points as necessary to sustain a rate of incoming data points that may exceed the rate of I/O operations your storage can keep up with. A nice advantage of this approach is that it adds a degree of resiliency to handle temporary I/O slowdowns. If the system needs to do other I/O work outside of Graphite, then it is likely that the rate of write operations will decrease, in which case Carbon's queues will simply grow. The larger the queues, the larger the writes. Since the overall throughput of data points is equal to the rate of write operations times the average size of each write, Carbon is able to keep up as long as there is enough memory for the queues. Carbon's queuing mechanism is depicted in Figure 7.4.

Figure 7.4: Carbon's queuing mechanism
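A stripped-down sketch of that queuing strategy (my own illustration, not Carbon's code: the real Carbon maps dots in metric names to directories, handles shutdown, and creates missing files, none of which this does):

    import collections
    import os
    import threading
    import time
    import whisper                          # pip install whisper

    queues = collections.defaultdict(list)  # metric name -> buffered (timestamp, value) pairs
    lock = threading.Lock()

    def enqueue(metric, timestamp, value):
        """Called for every incoming data point; memory-only, so it is cheap."""
        with lock:
            queues[metric].append((timestamp, value))

    def writer_loop(data_dir="storage"):
        """Repeatedly drain the queues, writing each metric's points in one operation."""
        while True:
            with lock:
                batches = {m: pts for m, pts in queues.items() if pts}
                queues.clear()
            for metric, points in batches.items():
                path = os.path.join(data_dir, metric + ".wsp")
                if os.path.exists(path):    # assumes the Whisper file was created already
                    whisper.update_many(path, points)
            time.sleep(1)                   # pause briefly between passes

    threading.Thread(target=writer_loop, daemon=True).start()
    enqueue("servers.www01.cpuUsage", int(time.time()), 42.5)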
7.7. Keeping It Real-Time

Buffering data points was a nice way to optimize Carbon's I/O, but it did not take long for my users to notice a rather troubling side effect. Revisiting our example, we have 600,000 metrics that update every minute, and we are assuming our storage can only keep up with 60,000 write operations per minute. This means we will have approximately 10 minutes' worth of data sitting in Carbon's queues at any given time. To a user this means that the graphs they request from the Graphite webapp will be missing the most recent 10 minutes of data: not good!

Fortunately the solution is fairly straightforward. I simply added a socket listener to Carbon that provides a query interface for accessing the buffered data points, and then modified the Graphite webapp to use this interface each time it needs to retrieve data. The webapp then combines the data points it retrieves from Carbon with the data points it retrieved from disk, and voila, the graphs are real-time. Granted, in our example the data points are updated to the minute and thus not exactly real-time, but the fact that each data point is instantly accessible in a graph once it is received by Carbon is real-time.

7.8. Kernels, Caches, and Catastrophic Failures

As is probably obvious by now, a key characteristic of system performance that Graphite's own performance depends on is I/O latency. So far we have assumed our system has consistently low I/O latency averaging around 1 millisecond per write, but this is a big assumption that requires a little deeper analysis. Most hard drives simply are not that fast; even with dozens of disks in a RAID array there is very likely to be more than 1 millisecond of latency for random access. Yet if you were to test how fast even an old laptop could write a whole kilobyte to disk, you would find that the write system call returns in far less than 1 millisecond. Why?

Whenever software has inconsistent or unexpected performance characteristics, usually either buffering or caching is to blame. In this case, we are dealing with both. The write system call does not technically write your data to disk; it simply puts it in a buffer which the kernel then writes to disk later on. This is why the write call usually returns so quickly. Even after the buffer has been written to disk, it often remains cached for subsequent reads. Both of these behaviors, buffering and caching, require memory, of course. Kernel developers, being the smart folks they are, decided it would be a good idea to use whatever user-space memory is currently free instead of allocating memory outright.
This turns out to be a tremendously useful performance booster, and it also explains why, no matter how much memory you add to a system, it will usually end up having almost zero free memory after doing a modest amount of I/O. If your user-space applications are not using that memory, then your kernel probably is. The downside of this approach is that this free memory can be taken away from the kernel the moment a user-space application decides it needs to allocate more memory for itself. The kernel has no choice but to relinquish it, losing whatever buffers may have been there.

So what does all of this mean for Graphite? We just highlighted Carbon's reliance on consistently low I/O latency, and we also know that the write system call only returns quickly because the data is merely being copied into a buffer. What happens when there is not enough memory for the kernel to continue buffering writes? The writes become synchronous and thus terribly slow. This causes a dramatic drop in the rate of Carbon's write operations, which causes Carbon's queues to grow, which eats up even more memory, starving the kernel even further. In the end, this kind of situation usually results in Carbon running out of memory or being killed by an angry sysadmin.

To avoid this kind of catastrophe, I added several features to Carbon, including configurable limits on how many data points can be queued and rate limits on how quickly various Whisper operations can be performed. These features can protect Carbon from spiraling out of control and instead impose less harsh effects, like dropping some data points or refusing to accept more data points. However, proper values for those settings are system-specific and require a fair amount of testing to tune. They are useful, but they do not fundamentally solve the problem. For that, we need more hardware.

7.9. Clustering

Making multiple Graphite servers appear to be a single system from a user perspective is not terribly difficult, at least for a naïve implementation. The webapp's user interaction primarily consists of two operations: finding metrics and fetching data points (usually in the form of a graph). The find and fetch operations of the webapp are tucked away in a library that abstracts their implementation from the rest of the codebase, and they are also exposed through HTTP request handlers for easy remote calls. The find operation searches the local filesystem of Whisper data for things matching a user-specified pattern, just as a filesystem glob like *.txt matches files with that extension.

Being a tree structure, the result returned by find is a collection of Node objects, each deriving from either the Branch or Leaf subclasses of Node. Directories correspond to branch nodes, and Whisper files correspond to leaf nodes. This layer of abstraction makes it easy to support different types of underlying storage, including RRD files [5] and gzipped Whisper files. The Leaf interface defines a fetch method whose implementation depends on the type of leaf node.
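A rough sketch of what such a node abstraction can look like (illustrative class definitions of mine, not Graphite's actual code):

    class Node:
        """Anything found in the metric tree."""
        def __init__(self, metric_path):
            self.metric_path = metric_path          # e.g. "servers.www01.cpuUsage"

    class Branch(Node):
        """Corresponds to a directory; has children but no data points."""
        is_leaf = False

    class Leaf(Node):
        """Corresponds to something that stores data points."""
        is_leaf = True
        def fetch(self, from_time, until_time):
            raise NotImplementedError

    class WhisperLeaf(Leaf):
        """Local leaf backed by a .wsp file; fetch wraps the whisper library."""
        def __init__(self, metric_path, fs_path):
            super().__init__(metric_path)
            self.fs_path = fs_path
        def fetch(self, from_time, until_time):
            import whisper
            return whisper.fetch(self.fs_path, from_time, until_time)

    class RemoteLeaf(Leaf):
        """Leaf living on another Graphite server; fetch becomes an HTTP call."""
        def __init__(self, metric_path, server):
            super().__init__(metric_path)
            self.server = server
        def fetch(self, from_time, until_time):
            # the real webapp issues an HTTP request here, with a flag telling
            # the remote server not to redistribute the call across the cluster
            raise NotImplementedError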
In the case of a Whisper file, it is simply a thin wrapper around the whisper library's own fetch function. When clustering support was added, the find function was extended to be able to make remote find calls via HTTP to the other Graphite servers specified in the webapp's configuration. The node data contained in the results of these HTTP calls gets wrapped as RemoteNode objects, which conform to the usual Node, Branch, and Leaf interfaces. This makes the clustering transparent to the rest of the webapp's codebase. The fetch method for a remote leaf node is implemented as another HTTP call to retrieve the data points from the node's Graphite server. All of these calls are made between the webapps the same way a client would call them, except with one additional parameter specifying that the operation should only be performed locally and not be redistributed throughout the cluster. When the webapp is asked to render a graph, it performs the find operation to locate the requested metrics and calls fetch to retrieve their data points. This works whether the data is on the local server, on remote servers, or both. If a server goes down, the remote calls time out fairly quickly and the server is marked as being out of service for a short period, during which no further calls are made to it. From a user standpoint, whatever data was on the lost server will be missing from their graphs, unless that data is duplicated on another server in the cluster.

7.9.1. A Brief Analysis of Clustering Efficiency

The most expensive part of a graphing request is rendering the graph. Each rendering is performed by a single server, so adding more servers does effectively increase capacity for rendering graphs. However, the fact that many requests end up distributing find calls to every other server in the cluster means that our clustering scheme is sharing much of the front-end load rather than dispersing it. What we have achieved at this point, though, is an effective way to distribute back-end load, since each Carbon instance operates independently. This is a good first step, since most of the time the back end is a bottleneck long before the front end is, but clearly the front end will not scale horizontally with this approach.

To make the front end scale more effectively, the number of remote find calls made by the webapp must be reduced. Again, the easiest solution is caching. Just as memcached is already used to cache data points and rendered graphs, it can also be used to cache the results of find requests. Since the location of metrics is much less likely to change frequently, these results should typically be cached for longer. The trade-off of setting the cache timeout for find results too long, though, is that new metrics that have been added to the hierarchy may not appear to the user as quickly.

7.9.2. Distributing Metrics in a Cluster

The Graphite webapp is rather homogeneous throughout a cluster, in that it performs the exact same job on each server. Carbon's role, however, can vary from server to server depending on what data you choose to send to each instance.
Often there are many different clients sending data to Carbon, so it would be quite annoying to couple each client's configuration with your Graphite cluster's layout. Application metrics may go to one Carbon server, while business metrics may get sent to multiple Carbon servers for redundancy. To simplify the management of scenarios like this, Graphite comes with an additional tool called carbon-relay. Its job is quite simple: it receives metric data from clients exactly like the standard Carbon daemon (which is actually named carbon-cache), but instead of storing the data, it applies a set of rules to the metric names to determine which carbon-cache servers to relay the data to. Each rule consists of a regular expression and a list of destination servers. For each data point received, the rules are evaluated in order and the first rule whose regular expression matches the metric name is used. This way all the clients need to do is send their data to the carbon-relay and it will end up on the right servers.

In a sense, carbon-relay provides replication functionality, though it would more accurately be called input duplication, since it does not deal with synchronization issues. If a server goes down temporarily, it will be missing the data points for the period in which it was down but will otherwise function normally. There are administrative scripts that leave control of the re-synchronization process in the hands of the system administrator.
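The first-match-wins routing that carbon-relay performs can be pictured with a few lines of Python (the rule set and server names here are invented; the real relay reads its rules from a configuration file):

    import re

    # ordered rules: the first regular expression that matches a metric name wins
    RULES = [
        (re.compile(r"^business\."), ["carbon-a:2004", "carbon-b:2004"]),  # duplicated for redundancy
        (re.compile(r"^servers\."),  ["carbon-a:2004"]),
        (re.compile(r""),            ["carbon-default:2004"]),             # catch-all
    ]

    def destinations(metric_name):
        for pattern, servers in RULES:
            if pattern.search(metric_name):
                return servers
        return []

    print(destinations("business.sales.total"))    # -> both carbon-cache servers
    print(destinations("servers.www01.cpuUsage"))  # -> ['carbon-a:2004']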
7.10. Design Reflections

My experience in working on Graphite has reaffirmed a belief of mine that scalability has very little to do with low-level performance but instead is a product of overall design. I have run into many bottlenecks along the way, but each time I looked for improvements in design rather than speed-ups in performance. I have been asked many times why I wrote Graphite in Python rather than Java or C, and my response is always that I have yet to come across a true need for the performance that another language could offer. In [Knu74], Donald Knuth famously said that premature optimization is the root of all evil. As long as we assume that our code will continue to evolve in non-trivial ways, all optimization [6] is in some sense premature.

One of Graphite's greatest strengths and greatest weaknesses is the fact that very little of it was actually designed in the traditional sense. By and large, Graphite evolved gradually, hurdle by hurdle, as problems arose. Many times the hurdles were foreseeable and various pre-emptive solutions seemed natural. However, it can be useful to avoid solving problems you do not actually have yet, even if it seems likely that you soon will. The reason is that you can learn much more from closely studying actual failures than from theorizing about superior strategies. Problem solving is driven both by the empirical data we have at hand and by our own knowledge and intuition. I have found that doubting your own wisdom sufficiently can force you to look at your empirical data more thoroughly.

For example, when I first wrote Whisper I was convinced that it would have to be rewritten in C for speed and that my Python implementation would only serve as a prototype. If I were not under a time crunch I very well might have skipped the Python implementation entirely. It turns out, however, that I/O is a bottleneck so much earlier than CPU that the lesser efficiency of Python hardly matters at all in practice.

As I said, though, the evolutionary approach is also a great weakness of Graphite. Interfaces, it turns out, do not lend themselves well to gradual evolution. A good interface is consistent and employs conventions to maximize predictability. By this measure, Graphite's URL API is currently a sub-par interface. Options and functions have been tacked on over time, sometimes forming small islands of consistency, but overall lacking a global sense of consistency. The only way to solve such a problem is through versioning of interfaces, but this too has drawbacks. Once a new interface is designed, the old one is still hard to get rid of, lingering around as evolutionary baggage like the human appendix. It may seem harmless enough until one day your code gets appendicitis (i.e. a bug tied to the old interface) and you are forced to operate. If I were to change one thing about Graphite early on, it would have been to take much greater care in designing the external APIs, thinking ahead instead of evolving them bit by bit.

Another aspect of Graphite that causes some frustration is the limited flexibility of the hierarchical metric naming model. While it is quite simple and very convenient for most use cases, it makes some sophisticated queries very difficult, even impossible, to express. When I first thought of creating Graphite, I knew from the very beginning that I wanted a human-editable URL API for creating graphs [7]. While I am still glad that Graphite provides this today, I am afraid this requirement has burdened the API with excessively simple syntax that makes complex expressions unwieldy. A hierarchy makes the problem of determining the primary key for a metric quite simple, because a path is essentially a primary key for a node in the tree. The downside is that all of the descriptive data (i.e. column data) must be embedded directly in the path. A potential solution is to maintain the hierarchical model and add a separate metadata database to allow for more advanced selection of metrics with a special syntax.

7.11. Becoming Open Source

Looking back at the evolution of Graphite, I am still surprised both by how far it has come as a project and by how far it has taken me as a programmer. It started as a pet project that was only a few hundred lines of code. The rendering engine started as an experiment, simply to see if I could write one. Whisper was written over the course of a weekend out of desperation to solve a show-stopper problem before a critical launch date. Carbon has been rewritten more times than I care to remember. In 2008 I was finally allowed to release Graphite under an open source license.
After a few months it was mentioned in a CNET article that got picked up by Slashdot, and the project suddenly took off and has been active ever since. Today there are dozens of large and mid-sized companies using Graphite. The community is quite active and continues to grow. Far from being a finished product, there is a lot of cool experimental work being done on it, which keeps it fun to work on and full of potential.

Footnotes: [1] launchpad.net/graphite. [2] There is another port over which serialized objects can be sent, which is more efficient than the plain-text format. This is only needed for very high levels of traffic. [3] memcached.org. [4] Solid-state drives generally have extremely fast seek times compared to conventional hard drives. [5] RRD files are actually branch nodes because they can contain multiple data sources; an RRD data source is a leaf node. [6] Knuth specifically meant low-level code optimization, not macroscopic optimization such as design improvements. [7] This forces the graphs themselves to be open source. Anyone can simply look at a graph's URL to understand it or modify it.

February 24, 2017

The second release candidate of NetBSD 7.1 is now available for download. Those who prefer to build from source can continue to follow the netbsd-7 branch or use the netbsd-7-1-RC2 tag. Most of the changes made since 7.1_RC1 are security fixes. See src/doc/CHANGES-7.1 for the complete list. Please help us by testing 7.1_RC2. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is welcome at [email protected].

February 23, 2017

Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to keep your macOS system up-to-date and secure. This tutorial is specifically targeted at macOS and relies on the macOS-specific self-installer package. For a more general tutorial that uses the pkg_comp-cron package in pkgsrc, see Keeping NetBSD up-to-date with pkg_comp 2.0.

Getting started

First download and install the standalone macOS installer package. To find the right file, navigate to the releases page on GitHub, pick the most recent release, and download the file with a name of the form pkg_comp-<version>-macos.pkg. Then double-click on the file you downloaded and follow the installation instructions. You will be asked for your administrator password because the installer has to place files under /usr/local; note that pkg_comp requires root privileges anyway to run (because it uses chroot(8) internally), so you will have to grant permission at some point or another. The installer modifies the default PATH (by creating /etc/paths.d/pkg_comp) to include pkg_comp's own installation directory and pkg_comp's installation prefix. Restart your shell sessions to make this change effective, or update your own shell startup scripts accordingly if you do not use the standard ones. Finally, make sure that Xcode is installed in the default
/Applications/Xcode.app location and that all the components needed to build command-line apps are available. Tip: try running cc from the command line and see if it prints its usage message.

Adjusting the configuration

The macOS flavor of pkg_comp is configured with an installation prefix of /usr/local, which means that the executable is located at /usr/local/sbin/pkg_comp and the configuration files are in /usr/local/etc/pkg_comp. This is intentional, to keep the pkg_comp installation separate from your pkgsrc installation so that it can run no matter what state your pkgsrc installation is in. The configuration files are as follows:

/usr/local/etc/pkg_comp/default.conf: This is pkg_comp's own configuration file, and the defaults configured by the installer should be good to go for macOS. In particular, packages are configured to go into /opt/pkg instead of the traditional /usr/pkg. This is a necessity because the latter is not writable starting with OS X El Capitan thanks to System Integrity Protection (SIP).

/usr/local/etc/pkg_comp/sandbox.conf: This is the configuration file for sandboxctl, which is the support tool that pkg_comp uses to manage the compilation sandbox. The default settings configured by the installer should be good.

/usr/local/etc/pkg_comp/extra.mk.conf: This is pkgsrc's own configuration file. In here you should configure things like the licenses that are acceptable to you and the package-specific options you would like to set. You should not configure the layout of the installed files (e.g. LOCALBASE) because that is handled internally by pkg_comp as specified in default.conf.

/usr/local/etc/pkg_comp/list.txt: This determines the set of packages you want to build automatically (either via the auto command or your periodic cron job). The automated builds will fail unless you list at least one package. Make sure to list pkgin here to install a better binary package management tool; you will find it very handy for keeping your installation up-to-date.

Note that these configuration files use the /var/pkg_comp directory as the dumping ground for the pkgsrc tree, the downloaded distribution files, and the built binary packages. We will see references to this location later on.

The cron job

The installer configures a cron job that runs as root to invoke pkg_comp daily. The goal of this cron job is to keep your local package repository up-to-date so that you can do binary upgrades at any time. You can edit the cron job configuration interactively by running sudo crontab -e. This cron job will not have an effect until you have populated the list.txt file as described above, so it is safe to leave it enabled until you have configured pkg_comp. If you want to disable the periodic builds, just remove the pkg_comp entry from the crontab. On slow machines, or if you are building a lot of packages, you may want to consider decreasing the build frequency from daily to weekly.

Sample configuration

Here is what the configuration looks like on my Mac Mini as dumped by the config subcommand. Use this output to get an idea of what to expect. I will be using the values shown here in the rest of the tutorial:

Building your own packages by hand

Now that you are fully installed and configured, you will build some stuff by hand to ensure the setup works before the cron job kicks in. The simplest usage form, which involves full automation and assumes you have listed at least one package in list.txt,
is something like this:

This trivially-looking command will: clone or update your copy of pkgsrc; create the sandbox; bootstrap pkgsrc and pbulk; use pbulk to build the given packages; and destroy the sandbox. After a successful invocation, you will be left with a collection of packages in the /var/pkg_comp/packages directory.

If you would like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file). But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to:

Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details. Lastly, note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log.

Installing the resulting packages

Now that you have built your first set of packages, you will want to install them. This is easy on macOS because you did not use pkgsrc itself to install pkg_comp. First, unpack the pkgsrc installation. You only have to do this once:

That's it. You can now install any packages you like:

The commands above assume you have restarted your shell to pick up the correct path to the pkgsrc installation. If the call to pkg_add fails because of a missing binary, try restarting your shell or explicitly running the binary as /opt/pkg/sbin/pkg_add.

Keeping your system up-to-date

Thanks to the cron job that builds your packages, your local repository under /var/pkg_comp/packages will always be up-to-date; you can use it to quickly upgrade your system with minimal downtime. Assuming you are going to use pkgtools/pkgin as recommended above (and why not?), configure your local repository:

And, from now on, all it takes to upgrade your system is:

February 22, 2017

At the obvious risk of this post getting downvoted and eventually closed as too biased/opinionated, I will nevertheless ask this question. The NetBSD project's tagline is "Of course it runs NetBSD". I understand that one of the main goals is to run on every possible piece of hardware out there (pages on the internet are full of hyperbole, such as "anything with a computing chip in it, even a toaster, shall run NetBSD"). However, if you examine the web pages of IoT hardware from the mid-2010s, there is poor visibility of NetBSD as the first choice of OS. E.g. on the Raspberry Pi, Raspbian is regarded as the go-to starter OS. Arduino's Wikipedia page says that it runs on either Windows, macOS or Linux. Snappy Ubuntu Core and even Windows 10 IoT (gasp) are staking a claim as leading OSes in the IoT market. While I understand that the last two OSes mentioned above have corporate muscle behind them, even open-source job listings do not place much emphasis on NetBSD expertise. The question distills down to: why is NetBSD not considered a first-rate choice on this IoT hardware? This seems like an anti-pattern given the project's canonical goals.

All of a sudden (read: without changing any parameters) my NetBSD virtual machine started acting oddly. The symptoms concern ssh tunneling. From my laptop I launch:

Then, in another shell:

The ssh debug says:

I tried also with localhost:80 to connect to the (remote) web server, with identical results. The remote host runs NetBSD:

I am a bit lost.
I tried running tcpdump on the remote host, and I spotted these bad chksum: I tried restarting the ssh daemon to no avail. I haven't rebooted yet - perhaps somebody here can suggest other diagnostics. I think it might either be the virtual network card driver, or somebody rooted our ssh. February 20, 2017 Introduction I have been working on and off for almost a year trying to get reproducible builds (the same source tree always builds an identical cdrom) on NetBSD. I did not think at the time it would take as long or be so difficult, so I did not keep a log of all the changes I needed to make. I was also not the only one working on this. Other NetBSD developers have been making improvements for the past 6 years. I would like to acknowledge the NetBSD build system (aka build.sh), which is a fully portable cross-build system. This build system has given us a head start in the reproducible builds work. I would also like to acknowledge the work done by the Debian folks who have provided a platform to run, test and analyze reproducible builds. Special mention to the diffoscope tool, which gives an excellent overview of what's different between binary files, by finding out what they are (and if they are containers, what they contain) and then running the appropriate formatter and diff program to show what's different for each file. Finally, other developers who started, motivated and did a lot of the work getting us here, like Joerg Sonnenberger and Thomas Klausner for their work on reproducible builds, and Todd Vierling and Luke Mewburn for their work on build.sh. Sources of difference Here is what we found that we needed to fix, how we chose to fix it and why, and where we are now. There are many reasons why two separate builds from the same sources can be different. Here's an (incomplete) list: timestamps Many things like to keep track of timestamps, especially archive formats (tar(1), ar(1)), filesystems etc. The way to handle each is different, but the approach is to make them either produce files with a 0 timestamp (where it does not matter, like ar), or with a specific timestamp when using 0 does not make sense (it is not useful to the user). dates/times/authors etc. embedded in source files Some programs like to report the date/time they were built, the author, the system they were built on, etc. This can be done either by programmatically finding and creating source files containing that information during build time, or by using standard macros such as __DATE__, __TIME__ etc. Usually putting in a constant time or eliding the information (such as we do with kernels and bootblocks) solves the problem. timezone sensitive code Certain filesystem formats (iso 9660 etc.) don't store raw timestamps but formatted times; to achieve this they convert from a timestamp to localtime, so they are affected by the timezone. directory order/build order The build order is not constant, especially in the presence of parallel builds; neither is directory scan order. If those are used to create output files, the output files will need to be sorted so they become consistent. non-sanitized data stored into files Writing data structures into raw files can lead to problems. Running the same program on different operating systems or using ASLR makes those issues more obvious. symbolic links/paths Having paths embedded into binaries (especially for debugging information) can lead to binary differences. Propagation of the logical path can prove problematic.
general tool inconsistencies gcc(1) profiling uses a PROFILE_HOOK macro on RISC targets that utilizes the current function number to produce labels; processing order of functions is not guaranteed. gpt(8) creation involves uuid generation; these are generally random. block allocation on msdos filesystems had a random component. makefs(8) uses timezones with timestamps (iso9660), randomness for block selection (msdos), and stores stray pointers in the superblock (ffs). Every program that is used to generate other output needs to have consistent results. In NetBSD this is done with build.sh, which builds a set of tools from known sources before it can use those tools to build the rest of the system. There is a large number of tools. There are also internal issues with the tools that make their output non-reproducible, such as nondeterministic symbol creation or capturing parts of the environment in debugging information. build information/tunables/environment There are many environment settings, or build variable settings, that can affect the build. These need to be kept constant across builds, so we've changed the list of variables that are reported in Makefile.params. Making sure that the source tree has no local changes Variables controlling reproducible builds Reproducible builds are controlled on NetBSD with two variables: MKREPRO (which can be set to yes or no) and MKREPRO_TIMESTAMP, which is used to set the timestamp of the build's artifacts. This is usually set to the number of seconds from the epoch. The build.sh -P flag handles reproducible builds automatically: it sets the MKREPRO variable to yes, and then finds the latest source file timestamp in the tree and sets MKREPRO_TIMESTAMP to that. Handling timestamps The first thing that we needed to understand was how to deal with timestamps. Some of the timestamps are not very useful (for example inside random ar archives) so we chose to zero them out. Others, though, become annoying if they are all 0. What does it mean when you mount install media and all the dates on the files are Jan 1, 1970? We decided that a better timestamp would be the timestamp of the most recently modified file in the source tree. Unfortunately this was not easy to find on NetBSD, because we are still using CVS as the source control system, and CVS does not have a good way to provide that. For that we wrote a tool called cvslatest, which scans the CVS metadata files (CVS/Entries) and finds the latest commit. This works well for freshly checked out trees (since CVS uses the source timestamp when checking out), but not with updated trees (because CVS uses the current time when updating files, so that make(1) thinks they've been modified). To fix that, we've added a new flag to the cvs(1) update command, -t, that uses the source checkout time. The build system now needs to evaluate the tree for the latest file by running cvslatest(1) and find the latest timestamp in seconds from the Epoch, which is set in the MKREPRO_TIMESTAMP variable. This is the same as SOURCE_DATE_EPOCH. Various Makefiles are using this variable and MKREPRO to determine how to produce consistent build artifacts. For example, many commands (tar(1), makefs(8), gpt(8), ...) have been modified to take a --timestamp or -T command line switch to generate output files that use the given timestamp, instead of the current time. Other software (am-utils, acpica, bootblocks, kernel) used __DATE__ or __TIME__, or captured the user, machine, etc. from the environment, and had to be changed to a constant time, user, machine, etc.
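To put those two control knobs in context, here is an illustrative pair of invocations; the machine type, the -U unprivileged-build flag and the example timestamp are assumptions, not prescriptions:

    # Sketch: let build.sh -P enable MKREPRO and derive the timestamp from the newest source file.
    ./build.sh -U -m amd64 -P release
    # Equivalent by hand: pin the variables yourself (value is seconds since the epoch).
    ./build.sh -U -m amd64 -V MKREPRO=yes -V MKREPRO_TIMESTAMP=1487548800 release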
roff(7) documents that used the td macro to generate the formatting date in the document have been changed to conditionally use the macro based on register R, for example as in intro.me, and then the Makefile was changed to set that register for MKREPRO. Handling Order We don't control the build order of things and we also don't control the directory order, which can be filesystem dependent. The collation order also is environment specific, and sorting needs to be stable (we have not encountered that problem yet). Two different programs caused us problems here: file(1), with the generation of the compiled magic file using directory order (fixed by changing file(1)), and install-info(1): texinfo(5) files have no specific order. For that we developed another tool called sortinfo(1) that sorts those files as a post-processing step. Fortunately the filesystem builders and tar programs usually work with input directories that appear to have a consistent order so far, so we did not have to fix things there. Permissions NetBSD already keeps permissions for most things consistent in different ways: the build system uses install(8) and specifies ownership and mode, and the mtree(8) program creates build artifacts using consistent ownership and permissions. Nevertheless, the various architecture-specific distribution media installers used cp(1) and mkdir(1) and needed to be corrected. Most of the issues found had to do with capturing the environment in debugging information. The two biggest issues were: DW_AT_producer and DW_AT_comp_dir. Here you see two changes we made for reproducible builds: We chose to allow variable names (and have gcc(1) expand them) for the source of the prefix map, because the source tree location can vary. Others have chosen to skip -fdebug-prefix-map from the variables to be listed. We added -fdebug-regex-map so that we could handle the NetBSD-specific objdir build functionality. Object directories can have many flavors in NetBSD, so it was difficult to use -fdebug-prefix-map to capture that. DW_AT_comp_dir presented a different challenge. We got non-reproducibility when building on paths where either the source or the object directories contained symbolic links. Although gcc(1) does the right thing handling logical paths (it respects PWD), we found that there were problems both in the NetBSD sh(1) (fixed here) and in the NetBSD make(1) (fixed here). Unfortunately we can't depend on the shell to obey the logical path, so we decided to go with: This works because make(1) is a tool (part of the toolchain we provide) whereas sh(1) is not. Another weird issue popped up on sparc64, where a single file in the whole source tree does not build reproducibly. This file is asn1_krb5_asn1.c, which is generated in here. The problem is that when profiling on RISC machines gcc uses the PROFILE_HOOK macro, which in turn uses the function number to generate labels. This number is assigned to each function in a source file as it is being compiled. Unfortunately this number is not deterministic because of optimization (a bug), but fortunately turning optimization off fixes the problem. Status and future work As of 2017-02-20 we have fully reproducible builds on amd64 and sparc64. We are planning to work on the following areas: Vary more parameters on the system build (filesystem types, build OSes) Verify that cross building is reproducible Verify that unprivileged builds work Test on all the platforms
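If you want to check the result on a machine of your own, the verification loop is conceptually just "build twice, compare"; in the sketch below the machine type, directory names and choice of checksum tool are all illustrative:

    # Build the same tree twice into separate object/release directories with -P ...
    ./build.sh -U -P -m amd64 -O ../obj1 -R ../rel1 release
    ./build.sh -U -P -m amd64 -O ../obj2 -R ../rel2 release
    # ... then compare the generated sets; matching checksums mean the build reproduced.
    cksum ../rel1/amd64/binary/sets/*.tgz ../rel2/amd64/binary/sets/*.tgz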
February 19, 2017 At the second annual PillarCon, I facilitated a workshop called Fundamentals of C and Embedded using Mob Programming. On a Mac, we test-drove toggling a Raspberry Pi's onboard LED. Before and after Before: ACT LED off Here are the takeaways we wrote down: Could test return type of main() Why wasn't num_calls 0 to begin with? Maybe provide the mocks in advance (maybe use CMock) Fun idea: fake GPIO device Vim tricks Cool But maybe use an easier editor for target audience Appropriate amount of effort; need bigger payoff Mob programming supported the learning process/objective My own thoughts for next time I do this material: Try: providing the mocks in the starting state Keep: providing multi-target Makefile and prebuilt cross compiler Try: using a more discoverable (e.g. non-modal) text editor Keep: being prepared with a test list Try: providing already-written test cases to uncomment one at a time (one of the aspects of James Grenning's training course I especially loved) Keep: being prepared with corners to cut if time gets short Try: knowing more of the mistakes we might make when cutting corners Keep: mobbing Participants who already knew some of this stuff liked the mobbing (new to some of them) and appreciated how I structured the material to unfold. Participants who were new to C and/or embedded (my target audience) came away feeling that they needn't be intimidated by it, and that programming in this context can be as fun and feedbacky as they're accustomed to. Play along at home Then follow the steps outlined in the README. Further learning You're welcome to use the workshop materials for any purpose, including your own workshop. If you do, I'd love to hear about it. Or if you'd like me to come facilitate it for your company, meetup group, etc., let's talk. February 18, 2017 This is a tutorial to guide you through the shiny new pkg_comp 2.0 on NetBSD. Goals: to use pkg_comp 2.0 to build a binary repository of all the packages you are interested in; to keep the repository fresh on a daily basis; and to use that repository with pkgin to keep your NetBSD system up-to-date and secure. This tutorial is specifically targeted at NetBSD but should work on other platforms with some small changes. Expect, at the very least, a macOS-specific tutorial as soon as I create a pkg_comp standalone installer for that platform. Getting started First install the sysutils/sysbuild-user package and trigger a full build of NetBSD so that you get usable release sets for pkg_comp. See sysbuild(1) and pkg_info sysbuild-user for details on how to do so. Alternatively, download release sets from the FTP site and later tell pkg_comp where they are. Then install the pkgtools/pkg_comp-cron package. The rest of this tutorial assumes you have done so.
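For illustration, installing those two helper packages from a pkgsrc checkout looks roughly like this; building from source and the /usr/pkgsrc location are assumptions, and binary packages via pkgin work just as well:

    # Sketch: build and install the sysbuild and pkg_comp cron wrappers from pkgsrc.
    cd /usr/pkgsrc/sysutils/sysbuild-user && make install clean
    cd /usr/pkgsrc/pkgtools/pkg_comp-cron && make install clean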
Adjusting the configuration To use pkg_comp for periodic builds, you'll need to do some minimal edits to the default configuration files. The files can be found directly under /var/pkg_comp, which is pkg_comp-cron's home: /var/pkg_comp/pkg_comp.conf: This is pkg_comp's own configuration file and the defaults installed by pkg_comp-cron should be good to go. The contents here are divided in three major sections: a declaration of how to download pkgsrc, the definition of the file system layout on the host machine, and the definition of the file system layout for the built packages. You may want to customize the target system paths, such as LOCALBASE or SYSCONFDIR, but you should not have to customize the host system paths. /var/pkg_comp/sandbox.conf: This is the configuration file for sandboxctl. The default settings installed by pkg_comp-cron should suffice if you used the sysutils/sysbuild-user package as recommended; otherwise tweak the NETBSD_NATIVE_RELEASEDIR and NETBSD_SETS_RELEASEDIR variables to point to where the downloaded release sets are. /var/pkg_comp/extra.mk.conf: This is pkgsrc's own configuration file. In here, you should configure things like the licenses that are acceptable to you and the package-specific options you'd like to set. You should not configure the layout of the installed files (e.g. LOCALBASE) because that's handled internally by pkg_comp as specified in pkg_comp.conf. /var/pkg_comp/list.txt: This determines the set of packages you want to build in your periodic cron job. The builds will fail unless you list at least one package. WARNING: Make sure to include pkg_comp-cron and pkgin in this list so that your binary kit includes these essential package management tools. Otherwise you'll have to deal with some minor annoyances after rebootstrapping your system. Lastly, review root's crontab to ensure the job specification for pkg_comp is sane. On slow machines, or if you are building many packages, you will probably want to decrease the build frequency from daily to weekly. Sample configuration Here is what the configuration looks like on my NetBSD development machine as dumped by the config subcommand. Use this output to get an idea of what to expect. I'll be using the values shown here in the rest of the tutorial: Building your own packages by hand Now that you are fully installed and configured, you'll build some stuff by hand to ensure the setup works before the cron job comes in. The simplest usage form, which involves full automation, is something like this: This trivially-looking command will: check out or update your copy of pkgsrc; create the sandbox; bootstrap pkgsrc and pbulk; use pbulk to build the given packages; and destroy the sandbox. After a successful invocation, you'll be left with a collection of packages in the directory you set in PACKAGES, which in the default pkg_comp-cron installation is /var/pkg_comp/packages. If you'd like to restrict the set of packages to build during a manually-triggered build, provide those as arguments to auto. This will override the contents of AUTO_PACKAGES (which was derived from your list.txt file). But what if you wanted to invoke all stages separately, bypassing auto? The command above would be equivalent to: Go ahead and play with these. You can also use the sandbox-shell command to interactively enter the sandbox. See pkg_comp(8) for more details. Lastly, note that the root user will receive email messages if the periodic pkg_comp cron job fails, but only if it fails. That said, you can find the full logs for all builds, successful or not, under /var/pkg_comp/log. Installing the resulting packages Now that you have built your first set of packages, you will want to install them. On NetBSD, the default pkg_comp-cron configuration produces a set of packages for /usr/pkg, so you have to wipe your existing packages first to avoid build mismatches. WARNING: Yes, you really have to wipe your packages. pkg_comp currently does not recognize the package tools that ship with the NetBSD base system (i.e. it bootstraps pkgsrc unconditionally, including bmake), which means that the newly-built packages won't be compatible with the ones you already have. Avoid any trouble by starting afresh. To clean your system, do something like this:
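(The commands below are an illustrative sketch rather than a literal recipe: the saved package list name is an assumption, while the /root/etc.old backup matches the location referenced right after. Double-check every path against your own setup before deleting anything.)

    # Record the installed packages, keep a copy of /etc, then wipe the pkgsrc installation.
    pkg_info > /root/pkg_list.old
    cp -rp /etc /root/etc.old
    rm -rf /usr/pkg /var/db/pkg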
Now, rebootstrap pkgsrc and reinstall any packages you previously had: Finally, reconfigure any packages where you had previously made custom edits. Use the backup in /root/etc.old to properly update the corresponding files in /etc. I doubt you made a ton of edits, so this should be easy. IMPORTANT: Note that the last command in this example includes pkgin and pkg_comp-cron. You should install these first to ensure you can continue with the next steps in this tutorial. Keeping your system up-to-date If you paid attention when you installed the pkg_comp-cron package, you should have noticed that this configured a cron job to run pkg_comp daily. This means that your package repository under /var/pkg_comp/packages will always be up-to-date, so you can use that to quickly upgrade your system with minimal downtime. Assuming you are going to use pkgtools/pkgin (and why not?), configure your local repository: And, from now on, all it takes to upgrade your system is: Lots of storage this week. February 17, 2017 After many (many) years in the making, pkg_comp 2.0 and its companion sandboxctl 1.0 are finally here! Read below for more details on this launch. I will publish detailed step-by-step tutorials on setting up periodic package rebuilds in separate posts. What are these tools? pkg_comp is an automation tool to build pkgsrc binary packages inside a chroot-based sandbox. The main goal is to fully automate the process and to produce clean and reproducible packages. A secondary goal is to support building binary packages for a different system than the one doing the builds: e.g. building packages for NetBSD/i386 6.0 from a NetBSD/amd64 7.0 host. The highlights of pkg_comp 2.0, compared to the 1.x series, are: multi-platform support, including NetBSD, FreeBSD, Linux, and macOS; use of pbulk for efficient builds; management of the pkgsrc tree itself via CVS or Git; and a more robust and modern codebase. sandboxctl is an automation tool to create and manage chroot-based sandboxes on a variety of operating systems. sandboxctl is the backing tool behind pkg_comp. sandboxctl hides the details of creating a functional chroot sandbox on all supported operating systems: in some cases, like building a NetBSD sandbox using release sets, things are easy, but in others, like on macOS, they are horrifyingly difficult and brittle. Storytelling time pkg_comp's history is a long one. pkg_comp 1.0 first appeared in pkgsrc on September 6th, 2002 as the pkgtools/pkg_comp package. As of this writing, the 1.x series is at version 1.38 and has received contributions from a bunch of pkgsrc developers and external users; even more, the tool was featured in the BSD Hacks book back in 2004. This is a long time for a shell script to survive in its rudimentary original form: pkg_comp 1.x is now a teenager at 14 years of age and is possibly one of my longest-living pieces of software still in use. Motivation for the 2.x rewrite For many of these years, I have been wanting to rewrite pkg_comp to support other operating systems. This all started when I first got a Mac in 2005, at which time pkgsrc already supported Darwin but there was no easy mechanism to manage package updates. What would happen (and still happens to this day) is that, once in a while, I'd realize that my packages were out of date (read: insecure), so I'd wipe the whole pkgsrc installation and start from scratch. Very inconvenient. I had to automate that properly.
Thus the main motivation behind the rewrite was primarily to support macOS because this was, and still is, my primary development platform. The secondary motivation came after writing sysbuild in 2012, which trivially configured daily builds of the NetBSD base system from cron I wanted the exact same thing for my packages. One, two no, three rewrites The first rewrite attempt was sometime in 2006, soon after I learned Haskell in school. Why Haskell Just because that was the new hotness in my mind and it seemed like a robust language to drive a pretty tricky automation process. That rewrite did not go very far, and thats possibly for the better: relying on Haskell would have decreased the portability of the tool, made it hard to install it, and guaranteed to alienate contributors. The second rewrite attempt started sometime in 2010, about a year after I joined Google as an SRE. This was after I became quite familiar with Python at work, wanting to use the language to rewrite this tool. That experiment didnt go very far though, but I cant remember why probably because I was busy enough at work and creating Kyua. The third and final rewrite attempt started in 2013 while I had a summer intern and I had a little existential crisis. The year before I had written sysbuild and shtk. so I figured recreating pkgcomp using the foundations laid out by these tools would be easy. And it was to some extent. Getting the barebones of a functional tool took only a few weeks, but that code was far from being stable, portable, and publishable. Life and work happened, so this fell through the cracks until late last year, when I decided it was time to close this chapter so I could move on to some other project ideas. To create the focus and free time required to complete this project, I had to shift my schedule to start the day at 5am instead of 7amand, many weeks later, the code is finally here and Im still keeping up with this schedule. Granted: this third rewrite is not a fancy one, but it wasnt meant to be. pkgcomp 2.0 is still written in shell, just as 1.x was, but this is a good thing because bootstrapping on all supported platforms is easy. I have to confess that I also considered Go recently after playing with it last year but I quickly let go of that thought: at some point I had to ship the 2.0 release, and 10 years since the inception of this rewrite was about time. The launch of 2.0 On February 12th, 2017, the authoritative sources of pkgcomp 1.x were moved from pkgtoolspkgcomp to pkgtoolspkgcomp1 to make room for the import of 2.0. Yes, the 1.x series only existed in pkgsrc and the 2.x series exist as a standalone project on GitHub . And here we are. Today, February 17th, 2017, pkgcomp 2.0 saw the light Why sandboxctl as a separate tool sandboxctl is the supporting tool behind pkgcomp, taking care of all the logic involved in creating chroot-based sandboxes on a variety of operating systems. Some are easy, like building a NetBSD sandbox using release sets, and others are horrifyingly difficult like macOS. In pkgcomp 1.x, this logic used to be bundled right into the pkgcomp code, which made it pretty much impossible to generalize for portability. With pkgcomp 2.x, I decided to split this out into a separate tool to keep responsibilities isolated. Yes, the integration between the two tools is a bit tricky, but allows for better testability and understandability. 
Lastly, having sandboxctl as a standalone tool, instead of just a separate code module, gives you the option of using it for your own sandboxing needs. I know, I know: the world has moved on to containerization and virtual machines, leaving chroot-based sandboxes as a very rudimentary thing, but that's all we've got in NetBSD, and pkg_comp targets primarily NetBSD. Note, though, that because pkg_comp is separate from sandboxctl, there is nothing preventing adding different sandboxing backends to pkg_comp. Installation Installation is still a bit convoluted unless you are on one of the tier 1 NetBSD platforms or you already have pkgsrc up and running. For macOS in particular, I plan on creating and shipping an installer image that includes all of pkg_comp's dependencies, but I did not want to block the first launch on this. For now, though, you need to download and install the latest source releases of shtk, sandboxctl, and pkg_comp, in this order; pass the --with-atf=no flag to the configure scripts to cut down the required dependencies. On macOS, you will also need OSXFUSE and the bindfs file system. If you are already using pkgsrc, you can install the pkgtools/pkg_comp package to get the basic tool and its dependencies in place, or you can install the wrapper pkgtools/pkg_comp-cron package to create a pre-configured environment with a daily cron job to run your builds. See the package's MESSAGE (with pkg_info pkg_comp-cron) for more details. Documentation Both pkg_comp and sandboxctl are fully documented in manual pages. See pkg_comp(8), sandboxctl(8), pkg_comp.conf(5) and sandbox.conf(5) for plenty of additional details. As mentioned at the beginning of the post, I plan on publishing one or more tutorials explaining how to bootstrap your pkgsrc installation using pkg_comp on, at least, NetBSD and macOS. Stay tuned. And, if you need support or find anything wrong, please let me know by filing bugs in the corresponding GitHub projects: jmmv/pkg_comp and jmmv/sandboxctl. February 16, 2017 I claim an IPv6 address using ifconfig in a script. This address is then immediately used to listen on a TCP port. When I write the script like this, it fails because the service is unable to listen: However, it succeeds when I do it like this: I tried writing the output of ifconfig directly after running the add operation. It appears that ifconfig reports the IP address as being tentative, which apparently prevents a service from listening on it. Naturally, waiting exactly one second and hoping that the address has become available is not a very good way to handle this. How can I wait for a tentative address to become available, or make ifconfig return later so that the address is all set up? I finally registered, have been reading the forum for years. I'll simply copy this from LQ. Already have written to a couple of lists (including netbsd-users) but without results. Running 7.0.2 with the out-of-the-box kernel. All my GTK2 apps segfault on keyboard input. lxappearance for example: when looking for a theme you can start pressing keys and it will search. But in my case it dumps core in /usr/lib/libpthread.so.1, /usr/lib/libc.so.12 and /usr/pkg/lib/libXcursor.so.1. The same thing happens when typing something into a GTK2 text editor, leafpad, or looking for something in the Ctrl-O window in firefox or gimp or any other programme. gimp can't even run inside gdb because of:
Program received signal SIGTRAP, Trace/breakpoint trap.
0x00007f7fea49f6aa in ___lwp_park60 () from /usr/lib/libc.so.12
(gdb) bt
#0  0x00007f7fea49f6aa in ___lwp_park60 () from /usr/lib/libc.so.12
#1  0x00007f7fec808f2b in pthread_cond_timedwait () from /usr/lib/libpthread.so.1
#2  0x00007f7feb880b80 in g_cond_wait () from /usr/pkg/lib/libglib-2.0.so.0
#3  0x00007f7feb81d7cd in g_async_queue_pop_intern_unlocked () from /usr/pkg/lib/libglib-2.0.so.0
#4  0x00007f7feb86742f in g_thread_pool_thread_proxy () from /usr/pkg/lib/libglib-2.0.so.0
#5  0x00007f7feb866a7d in g_thread_proxy () from /usr/pkg/lib/libglib-2.0.so.0
#6  0x00007f7fec80a9cc in ?? () from /usr/lib/libpthread.so.1
#7  0x00007f7fea483de0 in ?? () from /usr/lib/libc.so.12
#8  0x0000000000000000 in ?? ()
Firefox also has problems in libc.so.12 and libpthread.so.1 but doesn't mention ___lwp_park60. It also can't run inside gdb. lxappearance also dumps core when clicking Apply after changing something (themes, cursor or icon themes, fonts etc.) with another output:
#0  0x00007f7fefcb27ba in ?? () from /usr/lib/libc.so.12
#1  0x00007f7fefcb2bc7 in malloc () from /usr/lib/libc.so.12
#2  0x00007f7ff1849782 in g_malloc () from /usr/pkg/lib/libglib-2.0.so.0
#3  0x00007f7ff185ef1c in g_memdup () from /usr/pkg/lib/libglib-2.0.so.0
#4  0x00007f7ff18356b8 in g_hash_table_insert_node () from /usr/pkg/lib/libglib-2.0.so.0
#5  0x00007f7ff1835823 in g_hash_table_insert_internal () from /usr/pkg/lib/libglib-2.0.so.0
#6  0x00007f7ff183ccb1 in g_key_file_flush_parse_buffer () from /usr/pkg/lib/libglib-2.0.so.0
#7  0x00007f7ff183cf62 in g_key_file_parse_data () from /usr/pkg/lib/libglib-2.0.so.0
#8  0x00007f7ff183d0e1 in g_key_file_load_from_fd () from /usr/pkg/lib/libglib-2.0.so.0
#9  0x00007f7ff183d99e in g_key_file_load_from_file () from /usr/pkg/lib/libglib-2.0.so.0
#10 0x0000000000405532 in _start ()
Apart from these programmes I receive SIGILL in mplayer when trying to play videos. The backtrace doesn't tell anything useful. sxiv, an image viewer, segfaults with this:
#0  0x00007f7ff64b209f in ?? () from /usr/lib/libc.so.12
#1  0x00007f7ff64b3983 in free () from /usr/lib/libc.so.12
#2  0x000000000040729c in remove_file ()
#3  0x0000000000409a92 in main ()
Previously it worked if built from the local pkgsrc tree, but now it has stopped working at all. mpg321 dumps core and says Memory fault, with this backtrace:
#0  0x00007f7ff78068b1 in sem_post () from /usr/lib/libpthread.so.1
#1  0x000000000040afe0 in ?? ()
#2  0x0000000000403695 in ?? ()
#3  0x00007f7ff7ffa000 in ?? ()
#4  0x0000000000000002 in ?? ()
#5  0x00007f7ffffffdb0 in ?? ()
#6  0x00007f7ffffffdb7 in ?? ()
#7  0x0000000000000000 in ?? ()
I did memtests, once for four hours (two passes) and once for eight hours (eight passes). I did Dell's ePSA tests (the diagnostic utility accessed from the BIOS); it has its own memtest, apart from monitoring the hard drive, the power supply, the keyboard, the fans, the CPU; all of them returned no errors. I rebuilt gtk2 with debug symbols but it changed nothing. On LQ it was suggested that I have hardware problems, but I am not convinced. Every programme described above worked inside an Ubuntu LiveUSB and a Void Linux LiveUSB on the same machine (picked because they have different libcs). Before, when I had NetBSD with X11 a couple of months ago (and earlier), I didn't have these errors. On the Interwebs I found similar messages on an Arch forum and Launchpad. Is there a need for a 24 hour memtest? Should I just remove each of the two memory modules and try? Is it hardware related after all? Thanks everyone for any kind of help. February 14, 2017 The LLVM project is a quickly moving target, and this also applies to the LLVM debugger -- LLDB.
It's actively used in several first-class operating systems, while - thanks to my spare-time dedication - NetBSD joined the LLDB club in 2014; only lately has the native support been substantially improved, and the feature set is quickly approaching the support level of Linux and FreeBSD. During this work 12 patches were committed upstream, 12 patches were submitted for review, 11 new ATF tests were added, 2 NetBSD bugs were filed and several dozen commits were introduced in pkgsrc-wip, reducing the local patch set to mostly the Native Process Plugin for NetBSD. What has been done in NetBSD 1. Triaged issues of ptrace(2) in the DTrace/NetBSD support Chuck Silvers works on improving DTrace in NetBSD and he has detected an issue where tracer signals are being ignored in libproc. The libproc library is a compatibility layer for DTrace simulating /proc capabilities on the SunOS family of systems. I've verified that the current behavior of signal routing is incorrect. The NetBSD kernel correctly masks signals emitted by a tracee, not routing them to its tracer. On the other hand, the masking rules in the inferior process blacklist signals generated by the kernel, which is incorrect and turns a debugger into a deaf listener. This was the case for libproc, as signals were masked and software breakpoints triggering INT3 on i386/amd64 CPUs and SIGTRAP with the TRAP_BRKPT si_code weren't passed to the tracer. This isn't limited to turning a debugger into a deaf listener: a regular execution of software breakpoints also requires rewinding the program counter register by a single instruction, removing the trap instruction and restoring the original instruction. When an instruction isn't restored, further code execution is pretty randomly affected, which resulted in execution anomalies and breakage of the tracee. A workaround for this is to disable signal masking in the tracee. Another improvement inspired by the DTrace code is to enhance PT_SYSCALL handling by introducing a way to distinguish syscall entry and syscall exit events. I'm planning to add dedicated si_codes for these scenarios. While there, there are users requesting PT_STEP and PT_SYSCALL tracing at the same time in an efficient way without involving heuristics. I've filed the mentioned bug: I've added new ATF tests:
Verify that masking a single unrelated signal does not stop the tracer from catching other signals
Verify that masking SIGTRAP in the tracee stops the tracer from catching this raised signal
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching software breakpoints
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching a single step trap
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching an exec() breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching a PTRACE_FORK breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching a PTRACE_VFORK breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching a PTRACE_VFORK_DONE breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching a PTRACE_LWP_CREATE breakpoint
Verify that masking SIGTRAP in the tracee does not stop the tracer from catching a PTRACE_LWP_EXIT breakpoint
2. ELF Auxiliary Vectors The ELF file format permits transferring additional information for a process in a dedicated container of properties named the ELF Auxiliary Vector. Every system has its own dedicated way to read this information in a debugger from a tracee.
The NetBSD approach is to transfer this vector with a ptrace(2) API, PIOD_READ_AUXV. Our interface shares the API with OpenBSD. I filed a bug that our interface returns a vector size of 8496 bytes, while OpenBSD has a constant 64 bytes. It was diagnosed and fixed by Christos Zoulas: we were incorrectly counting bits and bytes, and this enlarged the streamed data. The bug was harmless and had no known side effects besides a large chunk of zeroed data. There is also a prepared local patch extending NetBSD platform support to read information from this vector; it's primarily required for correct handling of PIE binaries. At the moment there is no LLDB interface similar to the info auxv one from GDB. Unfortunately, at the current stage, this code is still unused by NetBSD. I will return to it once the Native Process Plugin is enhanced. I've filed the mentioned bug: I've added a new ATF test: Verify PIOD_READ_AUXV called for the tracee. What has been done in LLDB 1. Resolving the executable's name with sysctl(7) In the past, the way to retrieve a specified process' executable path name was to use a Linux-compatible feature in procfs (/proc). The canonical solution on Linux is to resolve the path of /proc/PID/exe. Christos Zoulas added, in the DTrace port enhancements, a solution similar to FreeBSD's to retrieve this property with sysctl(7). This new approach removes the dependency on /proc being mounted and on Linux compatibility functionality. Support for this has been submitted to LLDB and merged upstream: 2. Real-Time Signals A key feature of the POSIX standard, along with Asynchronous I/O, is support for Real-Time Signals. One of their use-cases is in debugging facilities. Support for this set of signals was developed during Google Summer of Code 2016 by Charles Cui and reviewed and committed by Christos Zoulas. I've extended the LLDB capabilities for NetBSD to recognize these signals in the NetBSDSignals class. Support for this has been submitted to LLDB and merged upstream: 3. Conflict removal with the system-wide six.py The transition from Python 2.x to 3.x is still ongoing and will take a while. The current support deadline for the 2.x generation has been extended to 2020. One of the ways to keep both generations supported in the same source code is to use the six.py library (py2 x py3 = 6.py). It abstracts commonly used constructs to support both language families. The issue for packaging LLDB in NetBSD was that it installs this tiny library unconditionally to a system-wide location. There were several possible solutions: drop Python 2.x support, install six.py into a subdirectory, or make the installation of six.py conditional. The first solution would turn the discussion into a flamewar, and the second one happened to be too difficult to implement properly, as the changes were invasive and Python is used in several places of the code-base (tests, bindings, ...). The final solution was to introduce a new CMake option, LLDB_USE_SYSTEM_SIX, disabled by default to retain the current behavior. To properly implement LLDB_USE_SYSTEM_SIX, I had to dig into installation scripts spread across CMake and Python files. It wasn't helping that the Python scripts were reinventing getopt(3) functionality, and I had to alter them in order to introduce a new option, --useSystemSix. Support for this has been submitted to LLDB and merged upstream:
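To illustrate how a packager picks this up (the generator and source path below are assumptions, not part of the change itself), the option is simply passed at configure time:

    # Sketch: build LLDB against the system-wide six.py instead of installing a bundled copy.
    cmake -G Ninja -DLLDB_USE_SYSTEM_SIX=ON /path/to/lldb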
4. Do not pass non-POD type variables through variadic functions There was a long-standing local patch in pkgsrc, added by Tobias Nygren and detected with Clang. According to the C++11 standard, 5.2.2/7: Passing a potentially-evaluated argument of class type having a non-trivial copy constructor, a non-trivial move constructor, or a non-trivial destructor, with no corresponding parameter, is conditionally-supported with implementation-defined semantics. A short example to trigger a similar warning was presented by Joerg Sonnenberger: This code compiled against libc++ gives: Support for this has been submitted to LLDB and merged upstream: 5. Add NetBSD support in Host::GetCurrentThreadID Linux has a very specific thread model, where a process is mostly equivalent to a native thread and a POSIX thread - it's completely different on other mainstream general-purpose systems. That said, the fallback support translating pthread_t on NetBSD to retrieve the native integer identifier was incorrect. The proper NetBSD function to retrieve the light-weight process identifier is _lwp_self(2). Support for this has been submitted to LLDB and merged upstream: 6. Synchronize PlatformNetBSD with Linux The old PlatformNetBSD code was based on the FreeBSD version. While the current FreeBSD one is still similar to the one from a year ago, it's inappropriate for the remote process plugin approach. This forced me to base the refreshed code on Linux. After realizing that the PlatformPlugin on POSIX platforms suffers from code duplication, Pavel Labath helped out to eliminate common functions shared by other systems. This resulted in a shorter patch synchronizing PlatformNetBSD with Linux, and this step opened room for FreeBSD to catch up. Support for this has been submitted to LLDB and merged upstream: 7. Transform ProcessLauncherLinux to ProcessLauncherPosixFork It is UNIX-specific that signal handlers are global per application. This introduces issues with wait(2)-like functions called in tracers, as these functions tend to conflict with real-life libraries, notably GUI toolkits (where SIGCHLD events are handled). The current best approach to this limitation is to spawn a forkee and establish a remote connection over the GDB protocol with a debugger frontend. ProcessLauncherLinux was prepared with this design in mind and I have added support for NetBSD. Once FreeBSD catches up, they might reuse the same code. Support for this has been submitted to LLDB and merged upstream: reviews.llvm.org/D29347 - Add ProcessLauncherNetBSD to spawn a tracee, renamed to Transform ProcessLauncherLinux to ProcessLauncherPosixFork, committed r293768 8. Document that LaunchProcessPosixSpawn is used on NetBSD Host::GetPosixspawnFlags was built for most POSIX platforms - however only Apple, Linux, FreeBSD and other GLIBC ones (I assume Debian/kFreeBSD to be GLIBC-like) were documented. I've included NetBSD in this list. Support for this has been submitted to LLDB and merged upstream: Document that LaunchProcessPosixSpawn is used on NetBSD, committed r293770 9. Switch std::call_once to llvm::call_once There is a long-standing bug in libstdc++ on several platforms whereby std::call_once is broken for cryptic reasons. This motivated me to follow the approach from LLVM and replace it with the homegrown fallback implementation llvm::call_once. This change wasn't that simple at first sight, as the original LLVM version used different semantics that disallowed straight definition of a non-static once_flag. Thanks to cooperation with upstream, the proper solution was coined and LLDB now works without known regressions with libstdc++ out of the box. Support for this has been submitted to LLVM, LLDB and merged upstream:
10. Other enhancements I had a plan to push more code in this milestone besides the tasks mentioned above. Unfortunately not everything was testable at this stage. Among the rescheduled projects: In the NetBSD platform code, conflict removal in GetThreadName/SetThreadName between pthread_t and lwpid_t. It looks like another bite from the Linux thread model. A proper solution to this requires pushing forward the Process Plugin for NetBSD. In Host::LaunchProcessPosixSpawn, properly setting ::posix_spawnattr_setsigdefault on NetBSD - currently untestable. Fix false positives - premature before adding more functions to the NetBSD Native Process Plugin. On the other hand, I've fixed a build issue of one test on NetBSD: Plan for the next milestone I've listed the following goals for the next milestone: mark exect(3) obsolete in libc; remove libpthread_dbg(3) from the base distribution; add new API in ptrace(2): PT_SET_SIGMASK and PT_GET_SIGMASK; add new API in ptrace(2) to resume and suspend a specific thread; finish the switch of the PT_WATCHPOINT API in ptrace(2) to PT_GETDBREGS & PT_SETDBREGS; validate proper support of the new interfaces on i386, amd64 and Xen; upstream to LLDB accessors for debug registers on NetBSD/amd64; validate PT_SYSCALL and add functionality to detect and distinguish syscall-entry and syscall-exit events; validate accessors for general purpose and floating point registers. Post mortem FreeBSD is catching up after the NetBSD changes, e.g. with the following commit: This move allows further reduction of code duplication. There is still a lot of room for improvement. Another benefit for other software distributions is that they can now appropriately resolve the six.py conflict without local patches. These examples clearly show that streamlining the NetBSD code results in improved support for other systems and creates a cleaner environment for introducing new platforms. A purely NetBSD-oriented gain is the improvement of system interfaces in terms of quality and functionality, especially since DTrace/NetBSD is a quick adopter of new interfaces, and indirectly a sandbox to sort out bugs in ptrace(2). The tasks in the next milestone will bring NetBSD's ptrace(2) on par with Linux and FreeBSD, this time with marginal differences. To render it more clearly: NetBSD will have more interfaces in read/write mode than FreeBSD has (and be closer to Linux here); on the other hand, not as many properties will be available in a thread-specific field under the PT_LWPINFO operation that caused suspension of the process. Another difference is that FreeBSD allows tracing only one type of syscall event: on entry or on exit. At the moment this is not needed by existing software, although it's on the long-term wishlist in the GDB project for Linux. It turned out that I was overly optimistic about the feature set in ptrace(2): while the basic calls from the first milestone were enough to implement basic support in LLDB, they would have required major work on heuristics, as modern tracers no longer want to guess what might have happened in the code and what was the source of a signal interruption. This was the final motivation to streamline the interfaces for monitoring capabilities, and now I'm adding the remaining interfaces as they are also needed - if not immediately in LLDB, then in DTrace and other software that is waiting for them now. Somehow I suspect that I will need them in LLDB sooner than expected. This work was sponsored by The NetBSD Foundation.
The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue to fund projects and services to the open-source community. Please consider visiting the following URL, and chip in what you can: February 09, 2017 We became tired of waiting. File Info: 7Min, 3MB. Ogg Link: archive. orgdownloadbsdtalk266bsdtalk266.ogg February 08, 2017 Background I am using a sparc64 Sun Blade 2500 (silver) as a desktop machine - for my pretty light desktop needs. Besides the usual developer tools (editors, compilers, subversion, hg, git) and admin stuff (all text based) I need mpg123 and mserv for music queues, Gimp for image manipulation and of course Firefox. Recently I updated all my installed pkgs to pkgsrc-current and as usual the new Firefox version failed to build. Fortunately the issues were minor, as they all had been handled upstream for Firefox 52 already, all I needed to do was back-porting a few fixes. This made the pkg build, but after a few minutes of test browsing, it crashed. Not surprisingly this was reproducible, any web site trying to play audio triggered it. A bit surprising though: the same happened on an amd64 machine I tried next. After a bit digging the bug was easy to fix, and upstream already took the fix and committed it to the libcubeb repository. So I am now happily editing this post using Firefox 51 on the Blade 2500. I saw one crash in two days of browsing, but unfortunately could not (yet) reproduce it (I have gdb attached now). There will be future pkg updates certainly. Future Obstacles You may have read elsewhere that Firefox will start to require a working Rust compiler to build. This is a bit unfortunate, as Rust (while academically interesting) is right now not a very good implementation language if you care about portability. The only available compiler requires a working LLVM back end, which we are still debugging. Our auto-builds produce sparc sets with LLVM, but the result is not fully working (due to what we believe being code gen bugs in LLVM). It seems we need to fix this soon (which would be good anyway, independent of the Rust issue). Besides the back end, only very recently traces of sparc64 support popped up in Rust. However, we still have a few firefox versions time to get it all going. I am optimistic. Another upcoming change is that Cairo (currently used as 2D graphics back end, at least on sparc64) will be phased out and Skia will be the only supported software rendering target. Unfortunately Skia does (as of now) not support any big endian machine at all. I am looking for help getting Skia to work on big endian hardware in general, and sparc64 in particular. Alternatives Just in case, I tested a few other browsers and (so far) they all failed: NetSurf Nice, small, has a few tweaks and does not yet support JavaScript good enough for many sites MidoriThey call it lightweight but it is based on WebKit, which alone is a few times more heavy than all of Firefox. It crashes immediately at startup on sparc64 (I am investigating, but with low priority - actually I had to replace the hard disk in my machine to make enough room for the debug object files for WebKit - it takes So, while it is a bit of a struggle to keep a modern browser working on my favorite odd-ball architecture, it seems we will get at least to the Firefox 52 ESR release, and that should give us enough time to get Rust working and hopefully continue with Firefox. February 07, 2017 So finally Ive moved all services from my old server to my Christmas Xen box. 
This was not without problems due to the fact it had to run NetBSD - current gcc toolchain is broken for some packages which affected running any PHP build clang toolchain was broken for my config (USESSP yes and . February 04, 2017 Note the end this week of pc98, the most focused of niche platforms. January 31, 2017 What has been done in NetBSD What has been done in LLDB Plan for the next milestone Accidental theme this week: books. What are the techniques generally people follow to dump full core dump if the size of core dump is more than the RAM and flash. Say, kernel core is of 2GB size but we have exactly 2GB of RAM and 1GB of disk space. I am aware external USB and tftp options. But, reliability and stability matters when we choose these options. How do embedded people handle these type of issues and what are the techniques available Platform: NetBSD, ARM7 January 18, 2017 Previously This is the sixth in a series of Nifty and Minimally Invasive qmail Tricks, following Losing services (and eventually restoring them) When my Mac mini s hard drive died in the Great Crash of Fall 2008. taking this website and my email offline with it, I was already going through a rough time, and my mental bandwidth was extremely limited. I expended some of it explaining to friends what they could do about their hosted domains until such time as my brain became available again (as I assumed andor hoped it eventually would). I expended a bit more asking a friend to do a small thing to keep my email flowing somewhere I could get it. And then I was spent. The years where I used Gmail and had no website felt like years in the wilderness. That feeling could mostly have been about how I missed the habit of reflecting about my life now and again, writing about it, and sharing. But when the website returned four years ago (in order to remember Aaron Swartz ), the feeling didnt go away. All I got was a small sense of relief that my writings and recordings were available and that I could safely revive my old habit. After a year and half of reflecting, writing, and sharing, the feels-needle hadnt rebounded much further. It was only after painstakingly restoring all my old email (from Mail. apps cache, using emlx2maildir ), moving it up to my IMAP server, carefully merging six years worth of Gmail into that, accepting SMTP deliveries for schmonz. and not needing Gmail at all for several weeks that I noticed my long, strange sojourn had ended. Hypothetically speaking If it so happened that Id instead fixed email first, Id also have felt a tiny bit weird till my website was back. But only a tiny bit. When my web servers down, you might not hear from me when my mail servers down, I cant hear from you or, as happened in 2008, from my professors during finals week. So while web hosting can be interesting. mail hosting keeps me attached to what it feels like to be responsible for a production service. Keeping it real I value this firsthand understanding very, very highly. I started as a sysadmin, Im often still a developer, and thats part of why Im sometimes helpful to others. But since Im always in danger of forgetting lessons I learned by doing it, Im always in danger of being harmful when I try to help others do it . As a coach, one of my meta-jobs is to remind myself what it takes to know the risks, decide to ship it, live with the consequences, tighten the shipping-it loop until its tight enough, and notice when that stops being true. And thats why I run my own mail server. 
What's new this week My 2014 mail server was configured just about identically to my 2008 one, for which it was handy to consult the earlier articles in this series. Then, recently, my weekly build broke on the software I've been using to send mail. It was a trivial breakage, easy to fix, but it reminded me about a non-trivial future risk that I didn't want hanging over my head anymore. (For more details, see my previous post.) Now I'm sending mail another way. Clients are unchanged, the server no longer needs TMDA or its dependencies, and I no longer have a specific expectation for how this aspect of my mail service will certainly break in the future. (Just some vague guesses, like a newly discovered compromise in the TLS protocol or OpenSSL's implementation thereof, or STARTTLS or Stunnel's implementation thereof.) A couple iterations First, I tried the smallest change that might work: Replacing tmda-ofmipd with the original ofmipd from mess822 (by the author of qmail, the software around which my mail service is built), Wrapped in SMTP AUTH by spamdyke (a new use of an existing tool), Wrapped in STARTTLS by stunnel (as before). It worked! TMDA no longer needed. I committed an update to my qmail-run package with a new shell script to manage this ofmipd service, uninstalled TMDA, and removed its configuration files. Next, I tried a change that might shorten the chain of executables: It worked! Second instance of spamdyke no longer needed. To start a mail submission service on localhost port 26, these are the lines I added to /etc/rc.conf: To make the service available on the network, this is the config from /etc/stunnel/stunnel.conf: (It already had this stanza, but with 8025 where tmda-ofmipd was listening. I simply changed the port number and restarted stunnel.) I'm still relying on spamdyke for other purposes, but I'm comfortable with those. I'm still relying on stunnel for STARTTLS, but I'm relatively comfortable keeping OpenSSL contained in its own address space and user account. Refactoring for mail hosting The present configuration is a refactoring: no externally visible change to email clients, yes internally visible change for the email administrator (moi). I believe this refactoring was one of the best kind, able to be performed safely and reducing the risk I was worried about. The current configuration is much more likely to meet my future need to not have a production outage that interrupts my work for an arbitrary duration while I scramble to understand and fix it. I don't have any more cheap ideas for lowering my risk, and it feels low enough anyway. So I'm comfortable that this is the right place to stop. Conclusion Want to learn to see the consequences of your choices and/or help other people do the same? Consider productionizing something important to you. January 14, 2017 I'm trying to compile a program with clang and libc++ on NetBSD. The Clang version is 3.9.0, and the NetBSD version is 7.0.2. The compile is failing with: <cstddef> is present, but it appears to be GCC's: If I am parsing Index of /pub/NetBSD/NetBSD-release-7/src/external/bsd/libc++ correctly, the library is available. When I attempt to install libc++ or libcxx: Is Clang with libc++ a supported configuration on NetBSD? How do we use Clang and libc++ on NetBSD?
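For what it's worth, the usual way to ask Clang for libc++ explicitly is the -stdlib flag; the include and library paths below are assumptions for a libc++ installed under /usr/pkg and may not match this particular system at all:

    # Sketch: compile and link against libc++ instead of GCC's libstdc++.
    clang++ -stdlib=libc++ -I/usr/pkg/include/c++/v1 \
        -L/usr/pkg/lib -Wl,-R/usr/pkg/lib -o hello hello.cpp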
January 11, 2017 I'll install NetBSD on an old computer, but I am sure I'll have a hard time getting wireless internet working one way or another. I figured I could do that easily if I managed to install things for this computer on another one, the one I am using now, by cross-compiling. And it would be good training, wouldn't it? For now, even though pkg_add and so on are recognized, I still can't pkg_add pkgin or any other software: it says it doesn't know that package. How come? I see it, it's there. Thank you very much. Here's my PATH variable: PATH=/usr/pkg/sbin:/usr/pkg/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games PS: some might remember me. Indeed, I failed using this system many times, but I am a romantic, and I can't stop feeling something in my heart anytime I read pkgsrc or NetBSD, I just don't know why. So here I am again :D January 09, 2017 NetBSD's scheduler was recently changed to better distribute the load of long-running processes on multiple CPUs. So far, the associated sysctl tweaks were not documented; this has now been changed, documenting the kern.sched sysctls. For reference, here is the text that was added to the sysctl(7) manpage. Well, the subject says it all. To quote from Soren Jacobsen's email: The first release candidate of NetBSD 7.1 is now available for download at: Those of you who prefer to build from source can continue to follow the netbsd-7 branch or use the netbsd-7-1-RC1 tag. There have been quite a lot of changes since 7.0. See src/doc/CHANGES-7.1 for the full list. Please help us out by testing 7.1_RC1. We love any and all feedback. Report problems through the usual channels (submit a PR or write to the appropriate list). More general feedback is welcome at [email protected] I've installed NetBSD 7.0.1 in a KVM virtual machine under libvirt on a Fedora 25 Linux host. I want to use spice, so I specified the requisite qxl graphics device in the virtual machine, then installed xf86-video-qxl-0.1.4nb1 with pkgin in the NetBSD guest. But both /var/log/xdm.log and /var/log/Xorg.0.log complained that they couldn't find the qxl module. Then I realized they were looking in /usr/X11R7/lib/modules but the qxl package put it in /usr/pkg/lib/xorg/modules. To solve that, I manually added a symbolic link. And indeed, that solved the "not found" problem. (But why the two directories?) Now they complain that it's the wrong driver. Both xdm.log and Xorg.0.log gripe:
(EE) module ABI major version (20) doesn't match the server's version (10)
(EE) Failed to load module qxl (module requirement mismatch, 0)
Why are things out of sync in the NetBSD code base? How can anyone get X to work? What can I do to solve this? January 08, 2017 I'm trying to install nzbget. I think it was in pkgsrc way back but it's not there anymore. So I tried this: (1) I downloaded the source from the nzbget website; (2) then ./configure said "A compiler with support for C++14 language features is required", so I installed gcc6 using pkgin in gcc6; (3) so then I tried PATH=/usr/pkg/gcc6/bin:$PATH ./configure and it said the compiler is OK, but now I got configure: error: ncurses library not found; (4) I have the ncurses library in /usr/pkg/include/ncurses, so how do I let ./configure know the location of the ncurses lib? Is it normal, when I use zlib from pkgsrc or base as a reference via its buildlink3 include for a project (like the current supertuxkart version 0.9.2), that within the .buildlink/include directory no symlinks exist for zlib.h and zconf.h? I never saw this behaviour before and it breaks the compilation. January 05, 2017 Last night, mere moments from letting me commit a new package of Test::Continuous (continuous testing for Perl), my computer acted as though it knew its replacement was on the way and didn't care to meet it.
Is it normal that, when I use zlib from pkgsrc or base as a reference via including its buildlink3.mk for a project (like the current SuperTuxKart version 0.9.2), no symlinks for zlib.h and zconf.h exist within the .buildlink/include directory? I never saw this behaviour before, and it breaks the compilation. January 05, 2017: Last night, mere moments from letting me commit a new package of Test::Continuous (continuous testing for Perl), my computer acted as though it knew its replacement was on the way and didn't care to meet it. This tiny mid-2013 11-inch MacBook Air made it relatively ergonomic to work from planes, buses, and anywhere else when I lived in New York and flew regularly to see someone important in Indiana, and it continued to serve me well when that changed, and changed again. The next thing I was planning to do with it was write this post. Instead I rebooted into DiskWarrior and crossed my fingers. Things get in your way, or threaten to. That's life. But when you have slack time, you can cope better when stuff happens, invest in reducing obstacles, and feel more prepared for the next time stuff happens. Having enough slack is as virtuous a cycle as insufficient slack is a vicious one. Paying down non-tech debts: Last year I decided to spend more time and energy improving my health. Having recently spent a few weeks deliberately not paying attention to any of that, I'm quite sure that I prefer paying attention to it, and I am once again doing so. Learning to make my health a priority required that I make other things non-priorities, notably Agile in 3 Minutes. It no longer requires that. I've recently invested in making the site easier for me to publish, and you may notice that it's easier for you to browse. I didn't have enough slack to do these things when I was writing and recording a new episode every week. Now that enough of them have been taken care of, I feel prepared to take new steps with the podcast. And tech debts: Earlier this week I noticed a broken link in a comment on Refactorings for web hosting, so I took a moment to check for other broken links on this site (ikiwiki makes it easy). Before that, I inspected and minimized the differences between dev (my laptop) and prod (my server, where you're reading this), updated prod with the latest ikiwiki settings, and (because it's all in Git) rebased dev from prod. In so doing, I observed that more config differences could be easily harmonized by adjusting some server paths to match those on my laptop. (When Apple introduced System Integrity Protection, pkgsrc on Mac OS X could no longer install under /usr and moved to /opt. With my automated NetBSD package build, I can easily build the next batch for /opt/pkg as well, retaining /usr/pkg as a symlink for a while. So I have.) I've been running lots of these builds in the past week anyway, because a family of packages I maintain in pkgsrc had been outdated for quite a while and I finally got around to catching them up to upstream. Once they built on OS X, I committed the updates to the cross-platform package system, only to notice that at least one of them didn't build on NetBSD. So I fixed it, ran another build, saw what else I broke, and repeated until green. And taking on patience debt (telling you about more of this crud): Due to another update that temporarily broke the build of TMDA, I was freshly reminded that that's a relatively biggish liability in my server setup. I use TMDA to send mail, which is not mainly what it's for; I never got around to using it for what it's for (protecting against spam with automated challenge-response), it hasn't been maintained for years, and it is stuck needing an old version of Python. On the plus side, running a weekly build means that when TMDA breaks more permanently, I'll notice pretty quickly. On the minus side, when that happens, I'll feel pressure to fix or replace it so I can (1) continue to send email like a normal person and (2) restart the weekly build like a me-person. If I can reduce the liability now, maybe I can avoid feeling that pressure later.
Investigating alternatives, I remembered that Spamdyke, which I already use for delaying the SMTP greeting, blacklisting from a DNSBL as well as To: addresses that only get spam anymore, and greylisting from unknown senders, can provide SMTP AUTH. So I'll try keeping stunnel and replacing tmda-ofmipd with a second instance of spamdyke. If that's good, I'll remove mail/tmda from the list of packages I build every week, then build spamdyke with OpenSSL support and try letting it handle the TLS encryption directly. If that's good, I'll remove security/stunnel from the list of packages too, leaving me at the mercy of fewer pieces of software breaking. Leaning more heavily on Spamdyke isn't a clear net reduction of risk. When a bad bug is found, it'll impact several aspects of my mail service. And if and when NetBSD moves from GCC to Clang, I'll have to add lang/gcc to my list of packages and instruct pkgsrc to use it when building Spamdyke, or else come up with a patch to remove Spamdyke's use of anonymous inner functions in C. (That could be fun. I recently started learning C.) I could go on, but I'm a nice person who cares about you. That's enough of that. So what? All these builds pushing my soon-to-be-replaced laptop through its final paces as a development machine might have had something to do with triggering its misbehavior last night. And all this work seems like, well, a lot of work. Is there some way I could do less of it? Yes, of course. But given my interests and goals, it might not be a clear net improvement. For instance, when Tim Ottinger drew my attention to that Test::Continuous Perl module, being a pkgsrc developer gave me an easy way to uninstall it if I wound up not liking it, which meant it was easy to try, which meant I tried it. I want conditions in my life to favor trying things. So I'm invested in preserving and extending those conditions. In Gary Bernhardt's formulation, I'm aiming to maximize the area under the curve. No new resolutions, yes new resolvings: I'm not looking to add new goals for myself for 2017. I'm not even trying to make existing things good enough (there are too many things, and as a recovering perfectionist I have trouble setting a reasonable bar). I'm just trying to make them good enough, enough that I can expect small slices of time and attention to permit small improvements. Jessica Kerr has a thoughtful side blog named "True in software, true in life". Here's something that'd qualify: When conditions are expected to change, smaller batch size helps us adjust. Reducing batch size takes time and effort. Paying down my self-debts (technical and otherwise) feels like resolving. I have, at times, felt quite out of position at managing myself. Lately I'm feeling much more in position, and much more like I can expect to continue to make small improvements to my positioning. When you want the option to change your body's direction, you take smaller steps, lower your center, concentrate on balance. That's Agile. Moi? My current best understanding is that a balanced life is a small-batch-size life. If that's the case, I'm getting there. Further repositioning: This coming Monday, I'll be switching to one of these weird new MacBook Pros with the row of non-clicky touchscreen keys. If my current computer survives till then, that'll be one smooth step in a series of transitions. (In other news, Bekki defends her dissertation that day.)
The following Monday, I'll be starting my next project, a mostly-remote gig pairing in Python to deliver software for a client while encouraging and supporting growth in my Pillar teammates. I'll be in Des Moines every so often; if you're there and/or have recommendations for me, I'd love to hear from you. The Monday after that, we'll pack up a few things the movers haven't already taken away, and our time in Indiana will come to an end. We're headed back to the New York area to live near family and friends. No resolutions, yes intentions: For 2017, I declare my intentions to continue to improve my health and otherwise attend to my own needs, help more people understand what software development work is like, and help more people feel heard. I hope to see and hear you along the way. January 04, 2017: So over the holidays, I managed to get in some good quality family time and find some time to work on some open source stuff. I meant to work mainly on dhcpcd, but it turned out I spent most of my time working on the NetBSD curses library so that Python curses now works with it. Now, most people r... Adding and removing hardware components in operation is common in today's commoditized computing environments. This was not always the case: in the past century, one had to power down a machine in order to change network cards, hard disks or RAM. A major step towards changing a system's configuration at runtime came with USB, but that's not where it ends; other systems like PCI support hotplugging as well. Another area where the system's configuration can change is the amount of Random Access Memory (RAM) of a system. Usually this is fixed, determined at system start time, and then managed by the operating system's memory management subsystem. But especially with today's virtualized hardware, even the amount of RAM assigned to a system can easily be changed. For example, a VM can be assigned more RAM when needed, without even rebooting the system, leading to increased system performance without introducing swapping/paging overhead. Of course this requires support from the operating system and its memory management subsystem. For NetBSD, the UVM virtual memory system was now changed to support this via the uvm_hotplug(9) API, and a first user of this is the Xen balloon(4) driver. Quoting from the balloon(4) manpage: The balloon driver supports the memory ballooning operations offered in Xen environments. It allows shrinking or extending a domain's available memory by passing pages between different domains. The uvm_hotplug(9) manpage gives us more information on the UVM hotplug functionality: When the kernel is compiled with options UVM_HOTPLUG, memory segments are handled in a dynamic data structure (rbtree(3)) compared to a static array when not. This enables kernel code to add or remove information about memory segments at any point after boot - thus hotplug. To answer more questions for portmasters who want to change their ports, Cherry G. Mathew has now posted a uvm_hotplug(9) port masters' FAQ. It covers questions on the background, affected files, and needed changes. For more information on UVM, see Charles "Chuck" Cranor's PhD dissertation on the Design and Implementation of UVM (PDF) as well as his Usenix talk on the UVM Virtual Memory System (PS). There is also plenty of information available on Xen ballooning - check it out and share your experiences on NetBSD's port-xen mailing list.
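As a rough illustration of how ballooning is driven in practice (assuming a Xen host using the xl toolstack and a domU kernel built with the option quoted above; the domain name and size are made up):

    # domU kernel config fragment, as described in uvm_hotplug(9)
    options         UVM_HOTPLUG

    # from the dom0: shrink or grow the guest's memory at runtime
    xl mem-set netbsd-domu 1536        # target size in MiB

    # inside the guest: watch the visible memory change
    sysctl hw.physmem64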
December 29, 2016: My brother got me some very tasty presents for Christmas (and my upcoming birthday), namely the GIGABYTE BRIX J1900 and a Samsung EVO 750 250G. Santa also brought me 8G of Crucial memory. Putting them all together makes a nice new machine on which to install NetBSD/Xen. The key part is that this is a low... December 22, 2016: After my last blog postings on the NetBSD scheduler, some time went by. What has happened is that the code to handle process migration was rewritten to give more knobs for tuning, and some testing was done. The initial problem stated in PR kern/51615 is solved by the code. To reach a wider audience and get more testing, the code was committed to NetBSD-current today. Now, two things remain to be seen. More testing: this best involves situations that compare the system's behaviour without and with the patch. Situations to test include pure computation jobs that involve multiple parallel processes, a mix of CPU-crunching and input/output, again with a number of concurrent processes, and full build.sh examples. If you have time and an interesting set of numbers, please feel free to let us know on tech-kern. Documentation: there is already a number of undocumented sysctls under kern.sched, which was now extended by one more, average_weight. While it's obvious to add the knob from the formula, testing it under various real-life conditions and seeing how things change is left to be determined by a PhD thesis or two; be sure to drop us your patches for src/share/man/man7/sysctl.7 if you can come up with a comprehensible description of all the scheduler sysctls. So just now, when you thought there was no more research to be done in scheduling algorithms, here is your chance for fame and glory. :-) December 17, 2016: How can I activate a Latin American keyboard layout on NetBSD? When I was installing, I never saw the Latin American keyboard, only Spanish. December 09, 2016: Where can I find and install an AR9271 driver for the latest NetBSD? The target machine does not have Internet access and I need to set up the WiFi dongle first. UPDATE: wpa_supplicant was already written, but I didn't see my device. When I plug in the dongle it's shown as: ifconfig shows only the re0 and lo0 interfaces. UPDATE: I saw on some Linux forums that the dongle uses an Atheros chip, but I checked in Windows and see Ralink. The ral driver is also integrated in NetBSD, but the situation doesn't change; I see no ra device in dmesg.boot. December 08, 2016: So, I've installed NetBSD 7 and the device shows up again as ugen (ugein, lol). Then I installed FreeBSD 10.2, and ugen again. usbconfig gives me: ugen4.3: <product 0x7601 vendor 0x148f> at usbus4, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (90mA). So, what's next? Buying a new dongle is the last thing I'll do. UPD: the NDIS driver does not work. December 07, 2016: At Agile Testing Days, I facilitated a workshop called DevOps Dojo. We role-played Dev and Ops developing and operating a production system, then figured out how to do it better together. You're welcome to use the workshop materials for any purpose, including your own workshop. If you do, I'd love to hear about it. Some firsts: I've spoken at several instances of pkgsrcCon (including twice in nearby Berlin), but that's more like a hackathon with some talks. Agile Testing Days was a proper conference, with hundreds of people and plenty of conferring. If someone asks whether I'm an international speaker, or claims I am one, I now won't feel terribly uncomfortable going along with it. What I expected from many previous Lean Coffees: I'd have to control myself to not say all the ideas and suggestions that come to mind.
What happened at this Lean Coffee: It was very easy to listen, because I didn't have many ideas or suggestions, because the topics came from people who were mostly testers. Conclusions I immediately drew: Come to think of it, I have not played every role on a team. I don't know what it's like to be a tester. Maybe my guesses about what it's like are less wrong than some others, but they're still gonna be wrong. This is evidently my first conference that's more testing than Agile. Cool, I bet I can learn a lot here. Thanks to Troy Magennis, Markus Gärtner, and Cat Swetel, I decided to try a new idea and spend a few slides drawing attention to the existence and purpose of the Agile Testing Days Code of Conduct. I can't tell yet how much good this did, but it took so little time that I'll keep trying it in future conference presentations and workshops. Some nexts: My next gig will be remote coaching, centered around what we notice as we're pair programming and delivering working software. I've done plenty of coaching and plenty of remote work, but not usually at the same time. Thanks to Lean Coffee with folks like Janet and Alex Schladebeck, I got some good advice on being a more effective influencer when it takes more intention and effort to have face-to-face interactions. Alex: For a personal connection, start meetings by unloading your baggage (whatever's on your mind today that might be dividing your attention) and inviting others to unload theirs. (Ideally, establish this practice in person first.) Janet: Ask questions that help people recognize their own situation. (Helping people orient themselves in their problem spaces is one of my go-to strengths. I'm ready to be leaning harder on it.) As I learn about remote coaching, I expect to write things down at Shape My Work, a wiki about distributed Agile that Alex Harms and I created. You'll notice it has a Code of Conduct. If it makes good sense to you, we'd love to learn what you've learned as a remote Agilist. I found Agile Testing Days to be a lovingly organized and carefully tuned mix of coffee breaks, efficiency, flexibility, and whimsy. The love and whimsy shone through. I'm honored to have been part of it, and I sure as heck hope to be back next year. We'd be back next year anyway; we visit family in Germany every December. Someday we might choose to live near them for a while. It occurs to me that having participated in Agile Testing Days might well have been an early investment in that option, and the thought pleases me. (As does the thought of hopping on a train to participate again.) I'm in Europe through Christmas. I consult, coach, and train. Do you know of anyone who could use a day or three of my services? One aspect of being a tester I do identify with is being frequently challenged to explain their discipline or justify their decisions to people who don't know what the work is like (and might not recognize the impact of their not knowing). In that regard, I wonder how helpful Agile in 3 Minutes is for testers. Let's say I could be so lucky as to have a few guest episodes about testing. Who would be the first few people you'd want to hear from? Who has a way with words and ideas, knows the work, and can speak to it in their unique voice to help the rest of us understand a bit better? December 01, 2016. November 24, 2016: Interesting news comes in via Slashdot: Apple Releases macOS 10.12 Sierra Open Source Darwin Code. Apple has released the open source Darwin code for macOS 10.12 Sierra.
The code, located on Apple's open source website, can be accessed via direct link now, although it doesn't yet appear on the site's home page. The release builds on a long-standing library of open source code that dates all the way back to OS X 10.0. There, you'll also find the Open Source Reference Library, developer tools, along with iOS and OS X Server resources. The lowest layers of macOS, including the kernel, BSD portions, and drivers, are based mainly on open source technologies, collectively called Darwin. As such, Apple provides download links to the latest versions of these technologies for the open source community to learn from and to use. This may not only be of interest to the OpenDarwin folks (or rather their successors in PureDarwin); more investigation, not only of the code itself but also of the license it is released under, is necessary to learn whether anything can be gained back for NetBSD. Why "back"? As you may or may not remember, macOS includes some parts of NetBSD (besides lots of FreeBSD, probably some OpenBSD, much other open source software, and surely a big lot of Apple's own code). My first job was in Operations. When I got to be a Developer, I promised myself I'd remember how to be good to Ops. I've sometimes succeeded. And when I've been effective, it's been in part due to my firsthand knowledge of both roles. DevOps is two things (hint: they're not Dev and Ops). Part of what people mean when they say DevOps is automation. Once a system or service is in operation, it becomes more important to engineer its tendencies toward staying in operation. Applying disciplines from software development can help. These words are brought to you by a Unix server I operate. I rely on it to serve this website, those of a few friends, and a tiny podcast of some repute. Oh yeah, and my email. It has become rather important to me that these services tend to stay operational. One way I improve my chances is to simplify what's already there. If it hurts, do it more often: Another way is to update my installed third-party software once a week. This introduces two pleasant tendencies: it is much less likely, at any given time, that I'm running something dangerously outdated, and more likely, when an urgent fix is needed, that I'll have my wits about me to do it right. Updating software every week also makes two strong assumptions about safety (see Modern Agile's "Make Safety a Prerequisite"): that I can quickly and easily roll back to the previous versions, and that I can quickly and easily build and install new versions. Since I've been leaning hard on these assumptions, I've invested in making them more true. The initial investment was to figure out how to configure pkgsrc to build a complete set of binary packages that could be installed at the same time as another complete set. My hypothesis was that then, with predictable and few side effects, I could select the active software set by moving a symbolic link. It worked. On my PowerPC Mac mini, the best-case upgrade scenario went from half an hour's downtime (bring down services, uninstall old packages, install new packages, bring up services) to less than a minute (install new packages, bring down services, move symlink, bring up services, delete old packages after a while). The worst case went from over an hour to maybe a couple of minutes.
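A minimal sketch of that symlink switch (the prefixes and the service name here are assumptions for illustration, not the actual setup):

    # two complete package sets live side by side, e.g.
    #   /usr/pkg-prev   the set currently in use
    #   /usr/pkg-next   this week's freshly built and installed set
    # /usr/pkg itself is only a symlink to the active set.

    /etc/rc.d/stunnel stop                  # bring down services that use the packages
    rm -f /usr/pkg && ln -s /usr/pkg-next /usr/pkg
    /etc/rc.d/stunnel start                 # bring them back up
    # to roll back, point the symlink at /usr/pkg-prev again;
    # the old set is deleted only after the new one has proven itself.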
Until it hurts enough less: I liked the payoff on that investment a lot, and I've been adding incremental enhancements ever since. I used to do builds directly on the server: in place for low-risk leaf packages, as a separate full batch otherwise. It was straightforward to do, and I was happy to accept an occasional reduction in responsiveness in exchange for the results. After the Mac mini died, I moved to a hosted Virtual Private Server that was much easier to mimic. So I took the job offline to a local VirtualBox running the same release and architecture of NetBSD (32-bit i386 to begin with, 64-bit amd64 now, both under Xen). The local job ran faster by some hours (I forget how many), during which the server continued devoting all its I/O and CPU bandwidth to its full-time responsibilities. The last time I went and improved something was to fully automate the building and uploading, leaving myself a documented sequence of manual installation steps. Yesterday I extended that shell script to generate another shell script that's uploaded along with the packages. When the upload is done, there's one manual step: run the install script. If you can read these words, it works. DevOps is still two things: Applying Dev concepts to the Ops domain is one aspect. When I'm acting alone as both Dev and Ops, as in the above example, I've demonstrated only that one aspect. The other, bigger half is collaboration across disciplines and roles. I find it takes some not-tremendously-useful effort to distinguish this aspect of DevOps from BDD or from anything else that looks like healthy cross-functional teamwork. It's the healthy cross-functional teamwork I'm after. There are lots of places to start having more of that. If your team's context suggests to you that DevOps would be a fine place to start, go after it. Find ways for Dev and Ops to be learning together and delivering together. That's the whole deal. Here's another deal: Two weeks from today, at Agile Testing Days in Potsdam, Germany, I'm running a hands-on DevOps collaboration workshop. Can you join us? It's not too late, and you can save 10 off the price of the conference ticket. Just provide my discount code when you register. I'd love to see you there. November 22, 2016: According to NetBSD's wiki I can use pkg_add -uu to upgrade packages. However, when I attempt to use pkg_add -uu it results in an error. I've tried to parse the pkg_add man page, but I can't tell what the command is to update everything. I can't use pkg_chk because it's not installed, and I can't get the package system to install it: What is the secret command to get the OS to update everything? Please forgive my ignorance with this question. I only have NetBSD systems for testing software. They get used a few times a year, and I don't know much about them otherwise. October 27, 2016: A LAN has been set up with IP/subnet mask 192.48.1.0/255.255.255.224. What is the maximum number of machines that can be set up in this LAN, and why? (This comes under a class C network, so the maximum would be 255 or less; correct me if I'm wrong.) Suresh ([email protected]) sends a mail to my friend Rahul ([email protected]) with these three files as separate attachments: march-reports.ppt, a PowerPoint file of size 256 KB; locations.rar, a RAR archive file of size 460 KB; and me-snap.tiff, a TIFF picture file of size 2970 KB. a) What is the size of the outgoing mail, including mail headers? b) What is the outgoing mail size if all three files are archived as one single .rar file and sent out as one single attachment? c) Show the MIME-based mail structure of the outgoing mail. Show the NetBSD-based C code for sending a text message "Hello. This works" to a remote server running on IP 122.250.110.14 on port 5050 and getting back an acknowledgement.
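For the subnetting part of the question above, a quick worked check (not part of the original post): a mask of 255.255.255.224 is a /27, which leaves $32 - 27 = 5$ host bits, so the subnet holds $2^{5} - 2 = 30$ usable host addresses once the network and broadcast addresses are excluded. The class C figure of 254 hosts no longer applies once the /27 mask is in force.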
October 10, 2016: The FreeBSD Release Engineering Team is pleased to announce the availability of FreeBSD 11.0-RELEASE. This is the first release of the stable/11 branch. Some of the highlights: OpenSSH DSA key generation has been disabled by default. It is important to update OpenSSH keys prior to upgrading. Additionally, Protocol 1 support has been removed. OpenSSH has been updated to 7.2p2. Wireless support for 802.11n has been added. By default, the ifconfig(8) utility will set the default regulatory domain to FCC on wireless interfaces. As a result, newly created wireless interfaces with default settings will have less chance of violating country-specific regulations. The svnlite(1) utility has been updated to version 1.9.4. The libblacklist(3) library and applications have been ported from the NetBSD Project. Support for the AArch64 (arm64) architecture has been added. Native graphics support has been added to the bhyve(8) hypervisor. Broader wireless network driver support has been added. The release notes provide an in-depth look at the new release, and you can get it from the download page. September 14, 2016: Many programming guides recommend beginning scripts with the /usr/bin/env shebang in order to automatically locate the necessary interpreter. For example, for a Python script you would use #!/usr/bin/env python, and then, the saying goes, the script would just work on any machine with Python installed. The reason for this recommendation is that /usr/bin/env python will search the PATH for a program called python and execute the first one found, and that usually works fine on one's own machine. Unfortunately, this advice is plagued with problems, and assuming it will work is wishful thinking. Let me elaborate. I'll use Python below for illustration purposes, but the following applies equally to any other interpreted language. i) The first problem is that using /usr/bin/env lets you find an interpreter, but not necessarily the correct interpreter. In our example above, we told the system to look for an interpreter called python, but we did not say anything about compatible versions. Did you want Python 2.x or 3.x? Or maybe exactly 2.7? Or at least 3.2? You can't tell, right? So the computer can't tell either; regardless, the script will probably run with whichever version happens to be called python, which could be any of them thanks to the alternatives system. The danger is that, if the version is mismatched, the script will fail, and the failure can manifest itself at a much later stage (e.g. a syntax error in an infrequent code path) under obscure circumstances. ii) The second problem, assuming you ignore the version problem above because your script is compatible with all possible versions (hah), is that you may pick up an interpreter that does not have all prerequisite dependencies installed. Say your script decides to import a bunch of third-party modules: where are those modules located? Typically, the modules exist in a centralized repository that is specific to the interpreter installation (e.g. a lib/python2.7/site-packages directory that lives alongside the interpreter binary). So maybe your program found a Python 2.7 under /usr/local/bin, but in reality you needed it to find the one in /usr/bin, because that's where all your Python modules are. If that happens, you'll receive an obscure error that doesn't properly describe the exact cause of the problem you got.
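A small illustration of points i) and ii) (the paths below are invented for the example; any given system will differ):

    # more than one interpreter answers to the name "python"
    $ which -a python
    /usr/local/bin/python          # say, a 3.x build with its own site-packages
    /usr/pkg/bin/python            # say, the 2.7 that actually has your modules

    # a script that begins with
    #   #!/usr/bin/env python
    # runs with whichever of the two is found first in the caller's PATH,
    # so both the language version and the module search path are a gamble.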
iii) The third problem, assuming your script is portable to all versions (hah again) and that you don't need any modules (really?), is that you are assuming the interpreter is available via a specific name. Unfortunately, the name of the interpreter can vary. For example: pkgsrc installs all Python binaries with explicitly versioned names (e.g. python2.7 and python3.0) to avoid ambiguity, and no python symlink is created by default, which means your script won't run at all even when Python is seemingly installed. iv) The fourth problem is that you cannot pass flags to the interpreter. The shebang line is intended to contain the name of the interpreter plus a single argument to it. Using /usr/bin/env as the interpreter name consumes the first slot, and the name of the interpreter consumes the second, so there is no room to pass additional flags to the program. What happens with the rest of the arguments is platform-dependent: they may all be passed as a single string to env, or they may be tokenized as individual arguments. This is not a huge deal, though: one argument for flags is too restrictive anyway, and you can usually set up the interpreter later from within the script. v) The fifth and worst problem is that your script is at the mercy of the user's environment configuration. If the user has a "misconfigured" PATH, your script will mysteriously fail at run time in ways that you cannot expect and in ways that may be very difficult to troubleshoot later on. I quote "misconfigured" because the problem here is very subtle. For example: I have a shell configuration that I carry across many different machines and various operating systems; that configuration has complex logic to determine a sane PATH regardless of the system I'm on, but this, in turn, means that the PATH can end up containing more than one version of the same program. This is fine for interactive shell use, but it's not OK for any program to assume that my PATH will match its expectations. vi) The sixth and last problem is that a script prefixed with /usr/bin/env is not suitable for being installed. This is justified by all the other points illustrated above: once a program is installed on the system, it must behave deterministically no matter how it is invoked. More importantly, when you install a program, you do so under a set of assumptions gathered by a configure-like script or prespecified by a package manager. To ensure things work, the installed script must see the exact same environment that was specified at installation time. In particular, the script must point at the correct interpreter version and at the interpreter that has access to all package dependencies. So what to do? All this considered, you may still use /usr/bin/env for the convenience of your own throwaway scripts (those that don't leave your machine), and also for documentation purposes and as a placeholder for a better default. For anything else, here are some possible alternatives to using this harmful shebang: Patch up the scripts during the build of your software to point to the specific chosen interpreter, based on a setting the user provided at configure time or one that you detected automatically (a small sketch of this follows below). Yes, this means you need make or similar for a simple script, but these are the realities of the environment they'll run under. Or rely on the packaging system to do the patching, which is pretty much what pkgsrc does automatically (and, I suppose, pretty much any other packaging system out there).
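As a rough sketch of that first alternative (the variable, file names, and interpreter path are placeholders, not from the original article):

    # at build/install time, pin whichever interpreter was chosen or detected
    PYTHON=/usr/pkg/bin/python2.7          # e.g. the value a configure step found
    sed -e "1s|^#!.*$|#!${PYTHON}|" myscript.py.in > myscript.py
    chmod +x myscript.py

The installed myscript.py then behaves the same no matter what the invoking user's PATH looks like, which is exactly the determinism that point vi) asks for.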
Just don't assume that the magic /usr/bin/env foo is sufficient, or even correct, for the final installed program. Bonus chatter: There is a myth that the original shebang prefix was chosen so that the kernel could look for it as a 32-bit magic cookie at the beginning of an executable file. I actually believed this myth for a long time, until today, as a couple of readers pointed me at "The #! magic, details about the shebang/hash-bang mechanism on various Unix flavours", which has interesting background that contradicts it. August 24, 2016: I'm running NetBSD in a virtual machine. Documentation and explanations on how to use pkgsrc are scarce. Let's say I want to install vim for NetBSD. What would I type? Do I need a URL? Do I need a specific version? Do I need to set up a directory for building the source of vim? July 08, 2016: Here are some notes on installing and running NetBSD/evbarm on the AllWinner A20-powered CubieBoard2. I bought this board a few weeks ago for its SATA capabilities, despite the fact that there are now cheaper boards with more powerful CPUs. The required steps for creating a bootable micro SD card are detailed on the NetBSD wiki, and a NetBSD installation is required to run mkubootimage. I used a USB to TTL serial cable to connect to the board and create user accounts. Do not be afraid of serial, as it has in fact only advantages: there is no need to connect a USB keyboard or an HDMI display, and it also brings back nice memories. Connecting using cu (from my OpenBSD machine): Device name might be different when using cu on other operating systems. Adding a regular user in the wheel group: Adding a password to the newly created user and changing the default shell to ksh: Installing and configuring pkgin: Finally, here is a dmesg for reference purposes:
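(The command listings from that post did not survive the conversion to plain text. Purely as an illustrative sketch, not the author's original commands, the steps might look roughly like this; the device name, user name, and package repository URL are assumptions:)

    # connect over the USB serial adapter from an OpenBSD host
    cu -l cuaU0 -s 115200

    # on the NetBSD/evbarm system: create a user in the wheel group,
    # set a password, and switch the login shell to ksh
    useradd -m -G wheel jdoe
    passwd jdoe
    usermod -s /bin/ksh jdoe

    # bootstrap pkgin from a binary package repository
    export PKG_PATH=http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/earmv7hf/7.0/All
    pkg_add pkgin
    pkgin update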
June 30, 2016: I've been itching to go wireless on my office desk for some time. The final wires to eradicate run from my Mac into a USB hub connected to two hard discs for backups. Years ago I had an Apple Time Capsule. The Time Capsule is an AirPort Wi-Fi base station with a hard disc for Macs to back up to using the Time Machine backup software. It was pretty solid kit for a couple of years. Under the hood, it runs NetBSD, and as an aside, I have had a few beers with the guy who ported the operating system. The power supply decided to give up, a very common fault apparently. I will clean the cables up. I promise. When I was on my travels and living in two places, I had hard discs in both locations. The Mac supports multiple discs for backups, and I encrypted the backups in case the discs were stolen. But now that I'm in one home, I want to be able to move around the house with the Mac but still back up without having to go to the office. We are a two-Mac house, so we need something more convenient. I already have a base station, and I don't really want to shell out loads of money for an Apple one. There are several options for setting up a Time Capsule equivalent. If you have a spare Mac, get a copy of Mac OS X Server. It will support Time Machine backups for multiple Macs and also supports quotas so that the size of the backups can be controlled. I don't have a spare stationary Mac. Anything that speaks the Apple file sharing protocol reasonably well will do. Enter the Raspberry Pi. I have a Raspberry Pi 3, and within minutes one can install the Netatalk software. This has been available for years on Linux and implements the Apple file sharing protocols really well. With an external drive added, I was able to get a Time Machine backup working using this article. I could not use my existing backup drive as is. Linux will read and write Mac OS drives, but there is a bit of to-ing and fro-ing, so it is best to start with a fresh native Linux filesystem. Even if you can get it to work with the Mac OS drive, it will not be able to use a Time Machine backup from a drive previously directly connected. I've been using this setup for the last couple of weeks. I have not had to do a serious restore yet, and I should caveat that I still have a hard drive I plug directly into the machine, just in case. The first rule of backups: a file doesn't exist unless there are three copies on different physical media. (The Raspberry Pi is set up to be a MiniDLNA server. It will stream media to Xboxes and other media players.) June 12, 2016: I installed sudo on NetBSD 7.0 using pkg. I copied /usr/pkg/etc/sudoers to /etc/sudoers because the docs say /etc/sudoers and possibly /etc/sudoers.local are used. I uncommented the line %wheel ALL=(ALL) ALL. I then added myself to the wheel group. I verified I am in wheel with groups. I then logged off and back on. When I attempt to run sudo <command>, I get the standard: What is wrong with my sudo installation, and how can I fix it? May 31, 2016: A brief description of playing around with SunOS 4.1.4, which was the last version of SunOS to be based on BSD. File info: 17 min, 8 MB. Ogg link: archive.org/download/bsdtalk265/bsdtalk265.ogg April 30, 2016: Playing around with the gopher protocol. A description of gopher from the 1995 book Student's Guide to the Internet by David Clark. Also, at the end of the episode is audio from an interview with Mark McCahill and Farhad Anklesaria that can be found at youtube.com/watch?v=oR76UI7aTvs. Check out gopher.floodgap.com/gopher. File info: 27 min, 13 MB. Ogg link: archive.org/download/bsdtalk264/bsdtalk264.ogg March 23, 2016: This episode is brought to you by ftp, the Internet file transfer program, which first appeared in 4.2BSD. An interview with the hosts of the Garbage Podcast, joshua stein and Brandon Mercer. You can find their podcast at garbage.fm. File info: 17 min, 8 MB. Ogg link: archive.org/download/bsdtalk263/bsdtalk263.ogg via these fine people and places: This planet is operated by Kimmo Suominen. Hosting provided by Global Wire Oy.
