A protobuf ".proto" file can be processed by the protobuf compiler (protoc)
using a Prolog-specific plugin. You can do this either by adding /usr/lib/swi-prolog/library/protobufs
to your PATH or by specifying the option
--plugin=protoc-gen-swipl=/usr/lib/swi-prolog/library/protobufs/protoc-gen-swipl.
You specify where the generated files go with the --swipl_out 
option, which must be an existing directory.
When using protoc, it's important to specify the --proto_path
(or
-I) option and the input files correctly. The idea of --proto_path
is that it gives a list of source "roots", and the input files are specified
relative to those roots. If you want to include the current directory, you must
also specify it (e.g., protoc -I. --swipl_out=. foo.proto).
For example, when bootstrapping the "swipl" plugin, these are used:
protoc -I/usr/include --swipl_out=gen_pb google/protobuf/descriptor.proto google/protobuf/compiler/plugin.proto
which creates these files:
gen_pb/google/protobuf/descriptor_pb.pl
gen_pb/google/protobuf/compiler/plugin_pb.pl
The plugin_pb module is used by:
:- use_module(gen_pb/google/protobuf/compiler/plugin_pb).
which contains this (the import is resolved relative to the directory of the file containing the directive):
:- use_module('../descriptor_pb').
Each X.proto file generates an X_pb.pl file
in the directory specified by --swipl_out. The file
contains a module named X, some debugging information, and
meta-data facts that go into the
protobufs module (all of these facts start with "proto_meta_").
protobuf_parse_from_codes/3
uses these facts to parse the wire form of a message into a Prolog
term, and protobuf_serialize_to_codes/3
uses them to serialize the data to wire form.
The generated code does not rely on any Google-supplied code.
You must compile all the ".proto" files separately, but you only need
to load the top-level generated file -- it contains the necessary load
directives for the generated files it depends on. You can find out the dependencies
for a .proto file by running
PATH="$PATH:/usr/lib/swipl/library/protobufs" protoc -I... --dependency_out=FILE --swipl_out=. SRC.proto
The Prolog term corresponding to a protobuf message is a
dict, with the keys 
corresponding to the field names in the message (the dict 
tag is treated as a comment). Repeated fields are represented as lists; 
enums are looked up and converted to atoms; bools are represented by
false and true; strings are represented by 
Prolog strings or atoms; bytes are represented by lists of codes.
TODO: Add an option to omit default values (this is the proto3 
behavior).
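As an illustration (a sketch; the message and its field names here are hypothetical, not taken from any shipped ".proto" file), a message with an optional string field name, a bool field active, a repeated int32 field counts, an enum field colour, and a bytes field payload might correspond to a term such as:

_{name:"example", active:true, counts:[1,2,3], colour:'RED', payload:[0'h, 0'i]}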
When serializing, the dict tag is treated as a comment and is ignored. So, you can use any dict tags when creating data for output. For example, both of these will generate the same output:
protobuf_serialize_to_codes(_{people:[_{id:1234,name:"John Doe"}]}, 'tutorial.AddressBook', WireCodes).
protobuf_serialize_to_codes('tutorial.AddressBook'{people:['tutorial.Person'{name:"John Doe",id:1234}]}, 'tutorial.AddressBook', WireCodes).
NOTE: if the wire codes can't be parsed, protobuf_parse_from_codes/3 fails. One common cause is giving an incorrect field name; typically, this shows up as a failure of protobufs:proto_meta_field_name/4 inside a call to protobufs:field_segment/3.
protobuf_parse_from_codes/3 is the inverse of protobuf_serialize_to_codes/3
-- it takes a wire stream (a list of codes) and creates a
dict. The dict tags
are the fully qualified names of the messages. Repeated fields that 
aren't in the wire stream get set to the value []; other 
fields that aren't in the wire stream get their default value (typically 
the empty string or zero, depending on type). Embedded messages and 
groups are omitted if not in the wire stream; you can test for their 
presence using
get_dict/3. Enums are looked up and 
converted to atoms; bools are represented by
false and true; strings are represented by 
Prolog strings (not atoms); bytes are represented by lists of codes.
There is no mechanism for determining whether a field was in the wire 
stream or not (that is, there is no equivalent of the Python 
implementation's HasField).
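For embedded messages and groups, however, absence is represented by omitting the key, so you can test for presence with get_dict/3 as noted above. For example (a sketch assuming a hypothetical message 'tutorial.Person' with an embedded message field last_updated):

protobuf_parse_from_codes(WireCodes, 'tutorial.Person', Person),
(   get_dict(last_updated, Person, LastUpdated)
->  format("last_updated: ~p~n", [LastUpdated])
;   format("last_updated was not in the wire stream~n")
).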
The "oneof" feature causes a slightly different behavior. Only the field that's in the wire stream gets set; the other fields are omitted. And if none of the fields in the "oneof" are set, then none of the fields appears. You can check which field is set by using get_dict/3.
Currently, there is no special support for the protobuf "map" feature; it is treated as an ordinary message field. The convenience predicates protobuf_field_is_map/3 and protobuf_map_pairs/3 can be used to convert between a "map" field and a key-value list, which gives you the freedom to use any kind of association list for the map. See also Issue #12. For example:
message MapMessage {
  map<string, sint64> number_ints = 5;
}
is treated as if it is
message MapMessage {
  message KeyValue {
    optional string  Key = 1;
    optional sint64  Value = 2;
  }
  repeated KeyValue number_ints = 5;
}
You can handle this on input by
protobuf_parse_from_codes(WireCodes, 'MapMessage', Term), protobuf_map_pairs(Term.number_ints, _, Pairs).
and on output by
protobuf_map_pairs(TermNumberInts, _, Pairs),
protobuf_serialize_to_codes(_{number_ints:TermNumberInts}, 'MapMessage', WireCodes).
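Putting the output side together (a sketch, assuming the MapMessage definition above and that Pairs is a list of Key-Value pairs), this might look like:

Pairs = ["one"-1, "two"-2, "three"-3],
protobuf_map_pairs(TermNumberInts, _, Pairs),
protobuf_serialize_to_codes(_{number_ints:TermNumberInts}, 'MapMessage', WireCodes).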
The Google documentation has a tutorial example of a simple 
addressbook:
https://developers.google.com/protocol-buffers/docs/tutorials 
The Prolog equivalent is in
/usr/lib/swi-prolog/doc/packages/examples/protobufs/interop/addressbook.pl
and you can run it with make run_addressbook, which runs protoc
to generate the _pb.pl files and then runs the example. The
resulting wire-format file is addressbook.wire.