Optimization

Kyle McIndoe, 2023-04-11 09:04:51 -04:00
parent 0a5ae19f85, commit 4204d51b31
464 changed files with 64683 additions and 0 deletions

.gitignore (vendored, new file)
@@ -0,0 +1,2 @@
.DS_Store

COPYING (new file)
@@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

IMPORTANT.txt (new file)
@@ -0,0 +1,15 @@
⢀⡴⠑⡄⠀⠀⠀⠀⠀⠀⠀⣀⣀⣤⣤⣤⣀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠸⡇⠀⠿⡀⠀⠀⠀⣀⡴⢿⣿⣿⣿⣿⣿⣿⣿⣷⣦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠑⢄⣠⠾⠁⣀⣄⡈⠙⣿⣿⣿⣿⣿⣿⣿⣿⣆⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⢀⡀⠁⠀⠀⠈⠙⠛⠂⠈⣿⣿⣿⣿⣿⠿⡿⢿⣆⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⢀⡾⣁⣀⠀⠴⠂⠙⣗⡀⠀⢻⣿⣿⠭⢤⣴⣦⣤⣹⠀⠀⠀⢀⢴⣶⣆
⠀⠀⢀⣾⣿⣿⣿⣷⣮⣽⣾⣿⣥⣴⣿⣿⡿⢂⠔⢚⡿⢿⣿⣦⣴⣾⠁⠸⣼⡿
⠀⢀⡞⠁⠙⠻⠿⠟⠉⠀⠛⢹⣿⣿⣿⣿⣿⣌⢤⣼⣿⣾⣿⡟⠉⠀⠀⠀⠀⠀
⠀⣾⣷⣶⠇⠀⠀⣤⣄⣀⡀⠈⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀
⠀⠉⠈⠉⠀⠀⢦⡈⢻⣿⣿⣿⣶⣶⣶⣶⣤⣽⡹⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠉⠲⣽⡻⢿⣿⣿⣿⣿⣿⣿⣷⣜⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣷⣶⣮⣭⣽⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⣀⣀⣈⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠇⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠹⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠛⠻⠿⠿⠿⠿⠛⠉

README.md (new file)
@@ -0,0 +1,39 @@
# Twitter Recommendation Algorithm
The Twitter Recommendation Algorithm is a set of services and jobs that are responsible for constructing and serving the
Home Timeline. For an introduction to how the algorithm works, please refer to our [engineering blog](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). The
diagram below illustrates how major services and jobs interconnect.
![](docs/system-diagram.png)
These are the main components of the Recommendation Algorithm included in this repository:
| Type | Component | Description |
|------------|------------|------------|
| Feature | [SimClusters](src/scala/com/twitter/simclusters_v2/README.md) | Community detection and sparse embeddings into those communities. |
| | [TwHIN](https://github.com/twitter/the-algorithm-ml/blob/main/projects/twhin/README.md) | Dense knowledge graph embeddings for Users and Tweets. |
| | [trust-and-safety-models](trust_and_safety_models/README.md) | Models for detecting NSFW or abusive content. |
| | [real-graph](src/scala/com/twitter/interaction_graph/README.md) | Model to predict likelihood of a Twitter User interacting with another User. |
| | [tweepcred](src/scala/com/twitter/graph/batch/job/tweepcred/README) | PageRank algorithm for calculating Twitter User reputation. |
| | [recos-injector](recos-injector/README.md) | Streaming event processor for building input streams for [GraphJet](https://github.com/twitter/GraphJet) based services. |
| | [graph-feature-service](graph-feature-service/README.md) | Serves graph features for a directed pair of Users (e.g. how many of User A's following liked Tweets from User B). |
| Candidate Source | [search-index](src/java/com/twitter/search/README.md) | Find and rank In-Network Tweets. ~50% of Tweets come from this candidate source. |
| | [cr-mixer](cr-mixer/README.md) | Coordination layer for fetching Out-of-Network tweet candidates from underlying compute services. |
| | [user-tweet-entity-graph](src/scala/com/twitter/recos/user_tweet_entity_graph/README.md) (UTEG) | Maintains an in-memory User-to-Tweet interaction graph, and finds candidates based on traversals of this graph. This is built on the [GraphJet](https://github.com/twitter/GraphJet) framework. Several other GraphJet-based features and candidate sources are located [here](src/scala/com/twitter/recos). |
| | [follow-recommendation-service](follow-recommendations-service/README.md) (FRS)| Provides Users with recommendations for accounts to follow, and Tweets from those accounts. |
| Ranking | [light-ranker](src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/README.md) | Light ranker model used by search index (Earlybird) to rank Tweets. |
| | [heavy-ranker](https://github.com/twitter/the-algorithm-ml/blob/main/projects/home/recap/README.md) | Neural network for ranking candidate Tweets. One of the main signals used to select timeline Tweets after candidate sourcing. |
| Tweet mixing & filtering | [home-mixer](home-mixer/README.md) | Main service used to construct and serve the Home Timeline. Built on [product-mixer](product-mixer/README.md). |
| | [visibility-filters](visibilitylib/README.md) | Responsible for filtering Twitter content to support legal compliance, improve product quality, increase user trust, and protect revenue, using hard-filtering, visible product treatments, and coarse-grained downranking. |
| | [timelineranker](timelineranker/README.md) | Legacy service which provides relevance-scored tweets from the Earlybird Search Index and UTEG service. |
| Software framework | [navi](navi/navi/README.md) | High-performance machine learning model serving, written in Rust. |
| | [product-mixer](product-mixer/README.md) | Software framework for building feeds of content. |
| | [twml](twml/README.md) | Legacy machine learning framework built on TensorFlow v1. |
We include Bazel BUILD files for most components, but not a top-level BUILD or WORKSPACE file.
## Contributing
We invite the community to submit GitHub issues and pull requests for suggestions on improving the recommendation algorithm. We are working on tools to manage these suggestions and sync changes to our internal repository. Any security concerns or issues should be routed to our official [bug bounty program](https://hackerone.com/twitter) through HackerOne. We hope to benefit from the collective intelligence and expertise of the global community in helping us identify issues and suggest improvements, ultimately leading to a better Twitter.
Read our blog on the open source initiative [here](https://blog.twitter.com/en_us/topics/company/2023/a-new-era-of-transparency-for-twitter).

twml/BUILD (new file)
@@ -0,0 +1,186 @@
twml_sources = [
"twml/**/*.py",
]
twml_deps = [
"3rdparty/python/cherrypy:default",
"3rdparty/python/pyyaml:default",
"3rdparty/python/absl-py:default",
"3rdparty/python/joblib:default",
"3rdparty/python/kazoo:default",
"3rdparty/python/python-dateutil:default",
"3rdparty/python/pytz:default",
"cortex/ml-metastore/src/main/python/com/twitter/mlmetastore/modelrepo/client",
"src/python/twitter/common/app",
"src/python/twitter/common/app/modules:vars",
"src/python/twitter/common/metrics",
"src/python/twitter/deepbird/compat/v1/optimizers",
"src/python/twitter/deepbird/compat/v1/rnn",
"src/python/twitter/deepbird/hparam",
"src/python/twitter/deepbird/io",
"src/python/twitter/deepbird/io/legacy",
"src/python/twitter/deepbird/logging",
"src/python/twitter/deepbird/sparse",
"src/python/twitter/deepbird/stats_server",
"src/python/twitter/deepbird/util:simple-data-record-handler",
"src/python/twitter/deepbird/util/hashing",
"src/python/twitter/ml/api/dal",
"src/python/twitter/ml/common:metrics",
"src/python/twitter/ml/common/kubernetes",
"src/python/twitter/ml/common:resources",
"src/python/twitter/ml/twml/kubernetes",
"src/python/twitter/ml/twml:status",
"src/thrift/com/twitter/dal:dal_no_constants-python",
"src/thrift/com/twitter/statebird:compiled-v2-python",
]
python3_library(
name = "twml-test-common-deps",
tags = ["no-mypy"],
dependencies = [
"src/python/twitter/deepbird/util:inference",
"src/python/twitter/deepbird/util/data",
"src/thrift/com/twitter/ml/api:data-python",
"twml/tests/data:resources",
],
)
python3_library(
name = "twml_packer_deps_no_tf",
tags = [
"bazel-compatible",
"no-mypy",
],
dependencies = [
"3rdparty/python/numpy:default",
"3rdparty/python/pandas:default",
"3rdparty/python/pyyaml:default",
"3rdparty/python/requests:default",
"3rdparty/python/scikit-learn:default",
"3rdparty/python/scipy:default",
"3rdparty/python/tensorflow-hub:default",
"3rdparty/python/thriftpy2:default",
],
)
python3_library(
name = "twml_packer_deps_no_tf_py3",
tags = [
"known-to-fail-jira:CX-20246",
"no-mypy",
],
dependencies = [
":twml_packer_deps_no_tf",
"3rdparty/python/tensorflow-model-analysis",
],
)
alias(
name = "twml-test-shared",
target = ":twml_common",
)
python3_library(
name = "twml_common",
sources = ["twml_common/**/*.py"],
tags = [
"bazel-compatible",
"no-mypy",
],
)
# Alias twml-dev to twml to avoid breaking user targets.
alias(
name = "twml-dev",
target = "twml",
)
python3_library(
name = "twml-test-dev-deps",
tags = [
"bazel-compatible",
"no-mypy",
],
dependencies = [
":twml",
":twml-test-common-deps",
":twml-test-shared",
"3rdparty/python/freezegun:default",
"src/python/twitter/deepbird/keras/layers",
"src/thrift/com/twitter/ml/api:data-python",
"src/thrift/com/twitter/ml/prediction_service:prediction_service-python",
],
)
python3_library(
name = "twml-dev-python",
sources = twml_sources,
tags = [
"bazel-compatible",
"no-mypy",
],
dependencies = twml_deps + [
":twml_packer_deps_no_tf",
"3rdparty/python/tensorflow",
"3rdparty/python/twml:libtwml-universal",
"twml/libtwml:libtwml-python",
],
)
# Build a smaller .pex file that models can depend on.
# Tensorflow and other dependencies are downloaded from Packer on Aurora.
# Note: This gets the C++ ops through 3rdparty artifacts.
python3_library(
name = "twml-nodeps",
sources = twml_sources,
tags = [
"bazel-compatible",
"no-mypy",
],
dependencies = twml_deps + [
"3rdparty/python/twml:libtwml-universal",
],
)
python3_library(
name = "twml",
tags = [
"bazel-compatible",
"no-mypy",
],
dependencies = [
":twml-nodeps",
":twml_packer_deps_no_tf",
"3rdparty/python/tensorflow",
],
)
python37_binary(
name = "tensorboard",
source = "twml/tensorboard/__main__.py",
dependencies = [
"3rdparty/python/_closures/twml:tensorboard",
"3rdparty/python/tensorflow",
],
)
python37_binary(
name = "saved_model_cli",
source = "twml/saved_model_cli/__main__.py",
dependencies = [
"3rdparty/python/_closures/twml:saved_model_cli",
"3rdparty/python/tensorflow",
],
)
# This target is added so twml can be used regardless of the Tensorflow version:
# This target does not pull in TensorFlow 1.x or the related libtwml compiled using TF 1.x.
python3_library(
name = "twml-py-source-only",
sources = twml_sources,
tags = [
"known-to-fail-jira:CX-23416",
"no-mypy",
],
dependencies = twml_deps,
)

twml/README.md (new file)
@@ -0,0 +1,13 @@
# TWML
---
Note: `twml` is no longer under development. Much of the code here is out of date and unused.
It is included here for completeness, because `twml` is still used to train the light ranker models
(see `src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/README.md`).
---
TWML is one of Twitter's machine learning frameworks, built on TensorFlow. While it is mostly
deprecated, it is still used to train the Earlybird light ranking models
(see `src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/train.py`).
The most relevant part is the `DataRecordTrainer` class, which is where the core training logic resides.

twml/libtwml/BUILD (new file)
@@ -0,0 +1,8 @@
python3_library(
name = "libtwml-python",
sources = ["libtwml/**/*.py"],
tags = [
"no-mypy",
"bazel-compatible",
],
)

twml/libtwml/include/twml.h (new file)
@@ -0,0 +1,21 @@
#include <twml/defines.h>
#include <twml/Error.h>
#include <twml/functions.h>
#include <twml/Hashmap.h>
#include <twml/optim.h>
#include <twml/hashing_discretizer_impl.h>
#include <twml/discretizer_impl.h>
#include <twml/Tensor.h>
#include <twml/HashedDataRecord.h>
#include <twml/BatchPredictionRequest.h>
#include <twml/BatchPredictionResponse.h>
#include <twml/BlockFormatReader.h>
#include <twml/BlockFormatWriter.h>
#include <twml/ThriftReader.h>
#include <twml/ThriftWriter.h>
#include <twml/HashedDataRecordReader.h>
#include <twml/DataRecordReader.h>
#include <twml/DataRecordWriter.h>
#include <twml/TensorRecordWriter.h>
#include <twml/DataRecord.h>
#include <twml/io/IOError.h>

twml/libtwml/include/twml/BatchPredictionRequest.h (new file)
@@ -0,0 +1,45 @@
#pragma once
#ifdef __cplusplus
#include <twml/DataRecord.h>
#include <twml/HashedDataRecord.h>
#include <twml/Tensor.h>
namespace twml {
template<class RecordType>
class GenericBatchPredictionRequest {
static_assert(std::is_same<RecordType, HashedDataRecord>::value ||
std::is_same<RecordType, DataRecord>::value,
"RecordType has to be HashedDatarecord or DataRecord");
public:
typedef typename RecordType::Reader Reader;
GenericBatchPredictionRequest(int numOfLabels=0, int numOfWeights=0):
m_common_features(), m_requests(),
num_labels(numOfLabels), num_weights(numOfWeights)
{}
void decode(Reader &reader);
std::vector<RecordType>& requests() {
return m_requests;
}
RecordType& common() {
return m_common_features;
}
private:
RecordType m_common_features;
std::vector<RecordType> m_requests;
int num_labels;
int num_weights;
};
using HashedBatchPredictionRequest = GenericBatchPredictionRequest<HashedDataRecord>;
using BatchPredictionRequest = GenericBatchPredictionRequest<DataRecord>;
}
#endif
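
A hedged usage sketch for this header: only `decode`, `common`, and `requests` come from the declarations above. The reader is assumed to be already bound to a serialized request buffer, and the per-record work is a placeholder.

```cpp
#include <twml/BatchPredictionRequest.h>
#include <twml/DataRecordReader.h>

// Hypothetical caller: decode a batch request, then walk its records.
void handleBatch(twml::DataRecordReader &reader) {
  twml::BatchPredictionRequest request;  // numOfLabels/numOfWeights default to 0
  request.decode(reader);                // fills common features + per-record requests

  twml::DataRecord &shared = request.common();  // features shared across the batch
  for (twml::DataRecord &record : request.requests()) {
    // combine `shared` with `record` and run the model on each example
    (void)shared; (void)record;
  }
}
```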

twml/libtwml/include/twml/BatchPredictionResponse.h (new file)
@@ -0,0 +1,58 @@
#pragma once
#include <twml/Tensor.h>
#include <twml/RawTensor.h>
#include <twml/ThriftWriter.h>
namespace twml {
// Encodes a batch of model predictions as a list of Thrift DataRecord
// objects inside a Thrift BatchPredictionResponse object. Prediction
// values are continuousFeatures inside each DataRecord.
//
// The BatchPredictionResponseWriter TensorFlow operator uses this class
// to determine the size of the output tensor to allocate. The operator
// then allocates memory for the output tensor and uses this class to
// write binary Thrift to the output tensor.
//
class BatchPredictionResponse {
private:
uint64_t batch_size_;
const Tensor &keys_;
const Tensor &values_; // prediction values (batch_size * num_keys)
const Tensor &dense_keys_;
const std::vector<RawTensor> &dense_values_;
inline uint64_t getBatchSize() { return batch_size_; }
inline bool hasContinuous() { return keys_.getNumDims() > 0; }
inline bool hasDenseTensors() { return dense_keys_.getNumDims() > 0; }
inline uint64_t getPredictionSize() {
return values_.getNumDims() > 1 ? values_.getDim(1) : 1;
};
void encode(twml::ThriftWriter &thrift_writer);
template <typename T>
void serializePredictions(twml::ThriftWriter &thrift_writer);
public:
// keys: 'continuousFeatures' prediction keys
// values: 'continuousFeatures' prediction values (batch_size * num_keys)
// dense_keys: 'tensors' prediction keys
// dense_values: 'tensors' prediction values (batch_size * num_keys)
BatchPredictionResponse(
const Tensor &keys, const Tensor &values,
const Tensor &dense_keys, const std::vector<RawTensor> &dense_values);
// Calculate the size of the Thrift encoded output (but do not encode).
// The BatchPredictionResponseWriter TensorFlow operator uses this value
// to allocate the output tensor.
uint64_t encodedSize();
// Write the BatchPredictionResponse as binary Thrift. The
// BatchPredictionResponseWriter operator uses this method to populate
// the output tensor.
void write(Tensor &result);
};
}
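
The comments above describe a two-phase protocol: measure the encoded size, allocate an output tensor of that length, then write into it. A minimal sketch of that flow; `allocateOutput` is a hypothetical stand-in for the TensorFlow operator's output allocation.

```cpp
#include <twml/BatchPredictionResponse.h>
#include <vector>

twml::Tensor &allocateOutput(uint64_t num_bytes);  // hypothetical allocator

void emitResponse(const twml::Tensor &keys, const twml::Tensor &values,
                  const twml::Tensor &dense_keys,
                  const std::vector<twml::RawTensor> &dense_values) {
  twml::BatchPredictionResponse response(keys, values, dense_keys, dense_values);

  // Phase 1: measure the binary Thrift size so the caller can allocate.
  uint64_t num_bytes = response.encodedSize();

  // Phase 2: serialize the response as binary Thrift into the tensor.
  twml::Tensor &result = allocateOutput(num_bytes);
  response.write(result);
}
```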

twml/libtwml/include/twml/BlockFormatReader.h (new file)
@@ -0,0 +1,32 @@
#pragma once
#include <string>
#include <cstdlib>
#include <unistd.h>
#include <stdexcept>
#include <inttypes.h>
#include <stdint.h>
namespace twml {
class BlockFormatReader {
private:
int record_size_;
long block_pos_;
long block_end_;
char classname_[1024];
int read_one_record_size();
int read_int();
int consume_marker(int scan);
int unpack_varint_i32();
int unpack_tag_and_wiretype(uint32_t *tag, uint32_t *wiretype);
int unpack_string(char *out, uint64_t max_out_len);
public:
BlockFormatReader();
bool next();
uint64_t current_size() const { return record_size_; }
virtual uint64_t read_bytes(void *dest, int size, int count) = 0;
};
}
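
Since `read_bytes` is the only pure virtual member, a concrete reader just needs to supply a byte source. A minimal sketch, assuming a `FILE*` backend (this subclass is illustrative, not part of the library):

```cpp
#include <twml/BlockFormatReader.h>
#include <cstdio>

// Concrete reader backed by a FILE*. next() and current_size() come from
// the base class; only the raw byte source is supplied here.
class FileBlockFormatReader : public twml::BlockFormatReader {
 public:
  explicit FileBlockFormatReader(FILE *file) : file_(file) {}
  uint64_t read_bytes(void *dest, int size, int count) override {
    return fread(dest, size, count, file_);
  }
 private:
  FILE *file_;
};

// Usage: while (reader.next()) { /* read current_size() bytes of payload */ }
```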

twml/libtwml/include/twml/BlockFormatWriter.h (new file)
@@ -0,0 +1,61 @@
#pragma once
#include <twml/defines.h>
#include <cstdlib>
#include <cstdio>
#include <unistd.h>
#include <cinttypes>
#include <cstdint>
#ifndef PATH_MAX
#define PATH_MAX (8096)
#endif
#ifdef __cplusplus
extern "C" {
#endif
struct block_format_writer__;
typedef block_format_writer__ * block_format_writer;
#ifdef __cplusplus
}
#endif
#ifdef __cplusplus
namespace twml {
class BlockFormatWriter {
private:
const char *file_name_;
FILE *outputfile_;
char temp_file_name_[PATH_MAX];
int record_index_;
int records_per_block_;
int pack_tag_and_wiretype(FILE *file, uint32_t tag, uint32_t wiretype);
int pack_varint_i32(FILE *file, int value);
int pack_string(FILE *file, const char *in, size_t in_len);
int write_int(FILE *file, int value);
public:
BlockFormatWriter(const char *file_name, int record_per_block);
~BlockFormatWriter();
int write(const char *class_name, const char *record, int record_len) ;
int flush();
block_format_writer getHandle();
};
BlockFormatWriter *getBlockFormatWriter(block_format_writer w);
} //twml namespace
#endif
#ifdef __cplusplus
extern "C" {
#endif
twml_err block_format_writer_create(block_format_writer *w, const char *file_name, int records_per_block);
twml_err block_format_write(block_format_writer w, const char *class_name, const char *record, int record_len);
twml_err block_format_flush(block_format_writer w);
twml_err block_format_writer_delete(const block_format_writer w);
#ifdef __cplusplus
}
#endif
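
A sketch of the C API declared above: create a writer, append a record, flush, and tear down. The `TWML_ERR_NONE` success value is assumed to come from `twml/defines.h`, and the record class name is a made-up example.

```cpp
#include <twml/BlockFormatWriter.h>

int writeOneRecord(const char *path, const char *payload, int payload_len) {
  block_format_writer writer;
  // TWML_ERR_NONE is assumed to be the success code from twml/defines.h.
  if (block_format_writer_create(&writer, path, /*records_per_block=*/100) != TWML_ERR_NONE)
    return -1;
  // "com.example.Record" is a hypothetical Thrift class name.
  block_format_write(writer, "com.example.Record", payload, payload_len);
  block_format_flush(writer);
  block_format_writer_delete(writer);
  return 0;
}
```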

twml/libtwml/include/twml/DataRecord.h (new file)
@@ -0,0 +1,108 @@
#pragma once
#ifdef __cplusplus
#include <twml/common.h>
#include <twml/defines.h>
#include <twml/TensorRecord.h>
#include <cstdint>
#include <cmath>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>
namespace twml {
class DataRecordReader;
class TWMLAPI DataRecord : public TensorRecord {
public:
typedef std::vector<std::pair<std::string, double>> SparseContinuousValueType;
typedef std::vector<std::string> SparseBinaryValueType;
typedef Set<int64_t> BinaryFeatures;
typedef Map<int64_t, double> ContinuousFeatures;
typedef Map<int64_t, int64_t> DiscreteFeatures;
typedef Map<int64_t, std::string> StringFeatures;
typedef Map<int64_t, SparseBinaryValueType> SparseBinaryFeatures;
typedef Map<int64_t, SparseContinuousValueType> SparseContinuousFeatures;
typedef Map<int64_t, std::vector<uint8_t>> BlobFeatures;
private:
BinaryFeatures m_binary;
ContinuousFeatures m_continuous;
DiscreteFeatures m_discrete;
StringFeatures m_string;
SparseBinaryFeatures m_sparsebinary;
SparseContinuousFeatures m_sparsecontinuous;
BlobFeatures m_blob;
std::vector<float> m_labels;
std::vector<float> m_weights;
void addLabel(int64_t id, double label = 1);
void addWeight(int64_t id, double value);
public:
typedef DataRecordReader Reader;
DataRecord(int num_labels=0, int num_weights=0):
m_binary(),
m_continuous(),
m_discrete(),
m_string(),
m_sparsebinary(),
m_sparsecontinuous(),
m_blob(),
m_labels(num_labels, std::nanf("")),
m_weights(num_weights) {
#ifdef USE_DENSE_HASH
m_binary.set_empty_key(0);
m_continuous.set_empty_key(0);
m_discrete.set_empty_key(0);
m_string.set_empty_key(0);
m_sparsebinary.set_empty_key(0);
m_sparsecontinuous.set_empty_key(0);
#endif
m_binary.max_load_factor(0.5);
m_continuous.max_load_factor(0.5);
m_discrete.max_load_factor(0.5);
m_string.max_load_factor(0.5);
m_sparsebinary.max_load_factor(0.5);
m_sparsecontinuous.max_load_factor(0.5);
}
const BinaryFeatures &getBinary() const { return m_binary; }
const ContinuousFeatures &getContinuous() const { return m_continuous; }
const DiscreteFeatures &getDiscrete() const { return m_discrete; }
const StringFeatures &getString() const { return m_string; }
const SparseBinaryFeatures &getSparseBinary() const { return m_sparsebinary; }
const SparseContinuousFeatures &getSparseContinuous() const { return m_sparsecontinuous; }
const BlobFeatures &getBlob() const { return m_blob; }
const std::vector<float> &labels() const { return m_labels; }
const std::vector<float> &weights() const { return m_weights; }
// used by DataRecordWriter
template <typename T>
void addContinuous(std::vector<int64_t> feature_ids, std::vector<T> values) {
for (size_t i = 0; i < feature_ids.size(); ++i){
m_continuous[feature_ids[i]] = values[i];
}
}
template <typename T>
void addContinuous(const int64_t *keys, uint64_t num_keys, T *values) {
for (size_t i = 0; i < num_keys; ++i){
m_continuous[keys[i]] = values[i];
}
}
void decode(DataRecordReader &reader);
void clear();
friend class DataRecordReader;
};
}
#endif
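
A small sketch of the public surface: populate continuous features through `addContinuous` and read one back through `getContinuous`. The feature IDs are arbitrary example values, not real Twitter feature hashes.

```cpp
#include <twml/DataRecord.h>
#include <vector>

void fillRecord() {
  twml::DataRecord record(/*num_labels=*/1, /*num_weights=*/0);

  std::vector<int64_t> ids = {1001, 1002};
  std::vector<double> values = {0.25, 3.5};
  record.addContinuous(ids, values);  // inserts id -> value pairs

  const auto &continuous = record.getContinuous();
  auto it = continuous.find(1001);
  if (it != continuous.end()) {
    // it->second == 0.25
  }
}
```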

twml/libtwml/include/twml/DataRecordReader.h (new file)
@@ -0,0 +1,61 @@
#pragma once
#ifdef __cplusplus
#include <twml/common.h>
#include <twml/defines.h>
#include <twml/DataRecord.h>
#include <twml/TensorRecordReader.h>
#include <cstdint>
#include <vector>
#include <string>
#include <unordered_map>
namespace twml {
class TWMLAPI DataRecordReader : public TensorRecordReader {
private:
typedef Map<int64_t, int64_t> KeyMap_t;
KeyMap_t *m_keep_map;
KeyMap_t *m_labels_map;
KeyMap_t *m_weights_map;
public:
bool keepKey (const int64_t &key, int64_t &code);
bool isLabel (const int64_t &key, int64_t &code);
bool isWeight (const int64_t &key, int64_t &code);
void readBinary (const int feature_type , DataRecord *record);
void readContinuous (const int feature_type , DataRecord *record);
void readDiscrete (const int feature_type , DataRecord *record);
void readString (const int feature_type , DataRecord *record);
void readSparseBinary (const int feature_type , DataRecord *record);
void readSparseContinuous (const int feature_type , DataRecord *record);
void readBlob (const int feature_type , DataRecord *record);
DataRecordReader() :
TensorRecordReader(nullptr),
m_keep_map(nullptr),
m_labels_map(nullptr),
m_weights_map(nullptr)
{}
// Using a template instead of int64_t because tensorflow implements int64 based on compiler.
void setKeepMap(KeyMap_t *keep_map) {
m_keep_map = keep_map;
}
void setLabelsMap(KeyMap_t *labels_map) {
m_labels_map = labels_map;
}
void setWeightsMap(KeyMap_t *weights_map) {
m_weights_map = weights_map;
}
void setDecodeMode(int64_t mode) {}
};
}
#endif
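
A hedged configuration sketch: the reader filters features through a keep map and routes label keys into the record's label slots. `twml::Map` is the hash-map alias from `twml/common.h`; if the dense-hash build is in use it would also need `set_empty_key()`, elided here, and the map contents are illustrative only.

```cpp
#include <twml/DataRecordReader.h>
#include <twml/DataRecord.h>

void decodeOne(twml::DataRecordReader &reader) {
  static twml::Map<int64_t, int64_t> keep;    // feature key -> output code
  static twml::Map<int64_t, int64_t> labels;  // label key -> label index
  keep[1001] = 0;
  keep[1002] = 1;
  labels[2001] = 0;

  reader.setKeepMap(&keep);
  reader.setLabelsMap(&labels);

  twml::DataRecord record(/*num_labels=*/1);
  record.decode(reader);  // dispatches into the type-specific read* methods
}
```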

twml/libtwml/include/twml/DataRecordWriter.h (new file)
@@ -0,0 +1,39 @@
#pragma once
#ifdef __cplusplus
#include <twml/defines.h>
#include <twml/DataRecord.h>
#include <twml/TensorRecordWriter.h>
namespace twml {
// Encodes DataRecords as binary Thrift. BatchPredictionResponse
// uses this class to encode prediction responses through our
// TensorFlow response writer operator.
class TWMLAPI DataRecordWriter {
private:
uint32_t m_records_written;
twml::ThriftWriter &m_thrift_writer;
twml::TensorRecordWriter m_tensor_writer;
void writeBinary(twml::DataRecord &record);
void writeContinuous(twml::DataRecord &record);
void writeDiscrete(twml::DataRecord &record);
void writeString(twml::DataRecord &record);
void writeSparseBinaryFeatures(twml::DataRecord &record);
void writeSparseContinuousFeatures(twml::DataRecord &record);
void writeBlobFeatures(twml::DataRecord &record);
void writeDenseTensors(twml::DataRecord &record);
public:
DataRecordWriter(twml::ThriftWriter &thrift_writer):
m_records_written(0),
m_thrift_writer(thrift_writer),
m_tensor_writer(twml::TensorRecordWriter(thrift_writer)) { }
uint32_t getRecordsWritten();
uint64_t write(twml::DataRecord &record);
};
}
#endif
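
A sketch of streaming a batch of DataRecords out as binary Thrift. `ThriftWriter` construction is elided (its setup lives in `ThriftWriter.h`), and `write()` is assumed to return bytes written, per its `uint64_t` return type.

```cpp
#include <twml/DataRecordWriter.h>
#include <twml/ThriftWriter.h>
#include <vector>

void writeAll(twml::ThriftWriter &thrift_writer,
              std::vector<twml::DataRecord> &records) {
  twml::DataRecordWriter writer(thrift_writer);
  for (twml::DataRecord &record : records) {
    writer.write(record);  // encodes one record as binary Thrift
  }
  uint32_t written = writer.getRecordsWritten();  // should equal records.size()
  (void)written;
}
```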

twml/libtwml/include/twml/Error.h (new file)
@@ -0,0 +1,48 @@
#pragma once
#include <twml/defines.h>
#ifdef __cplusplus
#include <stddef.h>
#include <stdexcept>
#include <stdint.h>
#include <string>
namespace twml {
class Error : public std::runtime_error {
private:
twml_err m_err;
public:
Error(twml_err err, const std::string &msg) :
std::runtime_error(msg), m_err(err)
{
}
twml_err err() const
{
return m_err;
}
};
class ThriftInvalidField: public twml::Error {
public:
ThriftInvalidField(int16_t field_id, const std::string& func) :
Error(TWML_ERR_THRIFT,
"Found invalid field (" + std::to_string(field_id)
+ ") while reading thrift [" + func + "]")
{
}
};
class ThriftInvalidType: public twml::Error {
public:
ThriftInvalidType(uint8_t type_id, const std::string& func, const std::string type) :
Error(TWML_ERR_THRIFT,
"Found invalid type (" + std::to_string(type_id) +
") while reading thrift [" + func + "::" + type + "]")
{
}
};
}
#endif
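
Both exception types funnel into twml::Error, so call sites can branch on the error code; a small sketch:

#include <twml/Error.h>

void example() {
  try {
    throw twml::ThriftInvalidField(7, "DataRecord::decode");
  } catch (const twml::Error &e) {
    // e.err() == TWML_ERR_THRIFT; e.what() names the offending field and function
  }
}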

View File

@ -0,0 +1,70 @@
#pragma once
#ifdef __cplusplus
#include <twml/defines.h>
#include <twml/TensorRecord.h>
#include <cstdint>
#include <cmath>
#include <vector>
namespace twml {
class HashedDataRecordReader;
class TWMLAPI HashedDataRecord : public TensorRecord {
public:
typedef HashedDataRecordReader Reader;
HashedDataRecord(int num_labels=0, int num_weights=0):
m_keys(),
m_transformed_keys(),
m_values(),
m_codes(),
m_types(),
m_labels(num_labels, std::nanf("")),
m_weights(num_weights) {}
void decode(HashedDataRecordReader &reader);
const std::vector<int64_t> &keys() const { return m_keys; }
const std::vector<int64_t> &transformed_keys() const { return m_transformed_keys; }
const std::vector<double> &values() const { return m_values; }
const std::vector<int64_t> &codes() const { return m_codes; }
const std::vector<uint8_t> &types() const { return m_types; }
const std::vector<float> &labels() const { return m_labels; }
const std::vector<float> &weights() const { return m_weights; }
void clear();
uint64_t totalSize() const { return m_keys.size(); }
void extendSize(int delta_size) {
int count = m_keys.size() + delta_size;
m_keys.reserve(count);
m_transformed_keys.reserve(count);
m_values.reserve(count);
m_codes.reserve(count);
m_types.reserve(count);
}
private:
std::vector<int64_t> m_keys;
std::vector<int64_t> m_transformed_keys;
std::vector<double> m_values;
std::vector<int64_t> m_codes;
std::vector<uint8_t> m_types;
std::vector<float> m_labels;
std::vector<float> m_weights;
void addKey(int64_t key, int64_t transformed_key, int64_t code, uint8_t type, double value=1);
void addLabel(int64_t id, double value = 1);
void addWeight(int64_t id, double value);
friend class HashedDataRecordReader;
};
}
#endif

View File

@ -0,0 +1,70 @@
#pragma once
#ifdef __cplusplus
#include <twml/common.h>
#include <twml/defines.h>
#include <twml/HashedDataRecord.h>
#include <twml/TensorRecordReader.h>
#include <cstdint>
#include <vector>
#include <string>
#include <unordered_map>
namespace twml {
enum class DecodeMode: int64_t
{
hash_valname = 0,
hash_fname_and_valname = 1,
};
class TWMLAPI HashedDataRecordReader : public TensorRecordReader {
private:
typedef Map<int64_t, int64_t> KeyMap_t;
KeyMap_t *m_keep_map;
KeyMap_t *m_labels_map;
KeyMap_t *m_weights_map;
DecodeMode m_decode_mode;
public:
bool keepId (const int64_t &key, int64_t &code);
bool isLabel (const int64_t &key, int64_t &code);
bool isWeight (const int64_t &key, int64_t &code);
void readBinary (const int feature_type , HashedDataRecord *record);
void readContinuous (const int feature_type , HashedDataRecord *record);
void readDiscrete (const int feature_type , HashedDataRecord *record);
void readString (const int feature_type , HashedDataRecord *record);
void readSparseBinary (const int feature_type , HashedDataRecord *record);
void readSparseContinuous (const int feature_type , HashedDataRecord *record);
void readBlob (const int feature_type , HashedDataRecord *record);
HashedDataRecordReader() :
TensorRecordReader(nullptr),
m_keep_map(nullptr),
m_labels_map(nullptr),
m_weights_map(nullptr),
m_decode_mode(DecodeMode::hash_valname)
{}
// Using a template instead of int64_t because TensorFlow defines int64 differently depending on the compiler.
void setKeepMap(KeyMap_t *keep_map) {
m_keep_map = keep_map;
}
void setLabelsMap(KeyMap_t *labels_map) {
m_labels_map = labels_map;
}
void setWeightsMap(KeyMap_t *weights_map) {
m_weights_map = weights_map;
}
void setDecodeMode(int64_t mode) {
m_decode_mode = static_cast<DecodeMode>(mode);
}
};
}
#endif
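
The two decode modes only differ in how sparse-continuous feature names are hashed (see readSparseContinuous in the matching .cpp). Selecting the non-default mode looks like the sketch below; the buffer and keep map are assumed to come from the caller:

void configure(twml::HashedDataRecordReader &reader,
               const uint8_t *buffer,
               twml::Map<int64_t, int64_t> *keep_map) {
  reader.setBuffer(buffer);  // Thrift-encoded HashedDataRecord (assumed)
  reader.setKeepMap(keep_map);
  reader.setDecodeMode(
      static_cast<int64_t>(twml::DecodeMode::hash_fname_and_valname));
}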

View File

@ -0,0 +1,110 @@
#pragma once
#include <twml/defines.h>
#include <twml/Tensor.h>
#include <twml/Type.h>
#include <stddef.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef void * twml_hashmap;
typedef int64_t tw_hash_key_t;
typedef int64_t tw_hash_val_t;
#ifdef __cplusplus
}
#endif
#ifdef __cplusplus
namespace twml {
typedef tw_hash_key_t HashKey_t;
typedef tw_hash_val_t HashVal_t;
class HashMap {
private:
twml_hashmap m_hashmap;
public:
HashMap();
~HashMap();
// Disable copy constructor and assignment
// TODO: Fix this after retain and release are added to twml_hashmap
HashMap(const HashMap &other) = delete;
HashMap& operator=(const HashMap &other) = delete;
void clear();
uint64_t size() const;
int8_t insert(const HashKey_t key);
int8_t insert(const HashKey_t key, const HashVal_t val);
void remove(const HashKey_t key);
int8_t get(HashVal_t &val, const HashKey_t key) const;
void insert(Tensor &mask, const Tensor keys);
void insert(Tensor &mask, const Tensor keys, const Tensor vals);
void remove(const Tensor keys);
void get(Tensor &mask, Tensor &vals, const Tensor keys) const;
void getInplace(Tensor &mask, Tensor &keys_vals) const;
void toTensors(Tensor &keys, Tensor &vals) const;
};
}
#endif
#ifdef __cplusplus
extern "C" {
#endif
TWMLAPI twml_err twml_hashmap_create(twml_hashmap *hashmap);
TWMLAPI twml_err twml_hashmap_clear(const twml_hashmap hashmap);
TWMLAPI twml_err twml_hashmap_get_size(uint64_t *size, const twml_hashmap hashmap);
TWMLAPI twml_err twml_hashmap_delete(const twml_hashmap hashmap);
// insert, get, remove single key / value
TWMLAPI twml_err twml_hashmap_insert_key(int8_t *mask,
const twml_hashmap hashmap,
const tw_hash_key_t key);
TWMLAPI twml_err twml_hashmap_insert_key_and_value(int8_t *mask, twml_hashmap hashmap,
const tw_hash_key_t key,
const tw_hash_val_t val);
TWMLAPI twml_err twml_hashmap_remove_key(const twml_hashmap hashmap,
const tw_hash_key_t key);
TWMLAPI twml_err twml_hashmap_get_value(int8_t *mask, tw_hash_val_t *val,
const twml_hashmap hashmap,
const tw_hash_key_t key);
TWMLAPI twml_err twml_hashmap_insert_keys(twml_tensor masks,
const twml_hashmap hashmap,
const twml_tensor keys);
// insert, get, remove tensors of keys / values
TWMLAPI twml_err twml_hashmap_insert_keys_and_values(twml_tensor masks,
twml_hashmap hashmap,
const twml_tensor keys,
const twml_tensor vals);
TWMLAPI twml_err twml_hashmap_remove_keys(const twml_hashmap hashmap,
const twml_tensor keys);
TWMLAPI twml_err twml_hashmap_get_values(twml_tensor masks,
twml_tensor vals,
const twml_hashmap hashmap,
const twml_tensor keys);
TWMLAPI twml_err twml_hashmap_get_values_inplace(twml_tensor masks,
twml_tensor keys_vals,
const twml_hashmap hashmap);
TWMLAPI twml_err twml_hashmap_to_tensors(twml_tensor keys,
twml_tensor vals,
const twml_hashmap hashmap);
#ifdef __cplusplus
}
#endif
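
A minimal single-key sketch of the C++ wrapper; the mask semantics (non-zero when the key exists) are an assumption inferred from the C API shape:

#include <twml/Hashmap.h>

void example() {
  twml::HashMap map;
  map.insert(42, 7);               // key 42 -> value 7
  twml::HashVal_t val = 0;
  int8_t found = map.get(val, 42); // assumed: non-zero mask on hit, val filled in
  (void)found;
}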

View File

@ -0,0 +1,92 @@
#pragma once
#include <twml/Tensor.h>
#include <type_traits>
#ifdef __cplusplus
namespace twml {
// This class contains the raw pointers to tensors coming from thrift object.
class TWMLAPI RawTensor : public Tensor
{
private:
bool m_is_big_endian;
uint64_t m_raw_length;
public:
RawTensor() {}
RawTensor(void *data, const std::vector<uint64_t> &dims,
const std::vector<uint64_t> &strides, twml_type type, bool is_big_endian, uint64_t length)
: Tensor(data, dims, strides, type), m_is_big_endian(is_big_endian), m_raw_length(length) {}
bool is_big_endian() const {
return m_is_big_endian;
}
uint64_t getRawLength() const {
return m_raw_length;
}
// Extracts a slice from a tensor at idx0 along dimension 0
// Used in BatchPredictionResponse to write each slice in separate records
RawTensor getSlice(uint64_t idx0) const {
void *slice = nullptr;
uint64_t raw_length = 0;
if (getType() == TWML_TYPE_STRING) {
raw_length = getStride(0);
std::string *data = const_cast<std::string *>(static_cast<const std::string*>(getData<void>()));
slice = static_cast<void *>(data + raw_length * idx0);
} else {
raw_length = getStride(0) * getSizeOf(getType());
char *data = const_cast<char *>(static_cast<const char*>(getData<void>()));
slice = static_cast<void *>(data + raw_length * idx0);
}
std::vector<uint64_t> dims, strides;
for (int i = 1; i < getNumDims(); i++) {
dims.push_back(getDim(i));
strides.push_back(getStride(i));
}
return RawTensor(slice, dims, strides, getType(), m_is_big_endian, raw_length);
}
};
// Wrapper class around RawTensor to hold sparse tensors.
class TWMLAPI RawSparseTensor
{
private:
RawTensor m_indices;
RawTensor m_values;
std::vector<uint64_t> m_dense_shape;
public:
RawSparseTensor() {
}
RawSparseTensor(const RawTensor &indices_, const RawTensor &values_,
const std::vector<uint64_t> &dense_shape_) :
m_indices(indices_), m_values(values_), m_dense_shape(dense_shape_)
{
if (m_indices.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "Indices of Sparse Tensor must be of type int64");
}
}
const RawTensor &indices() const {
return m_indices;
}
const RawTensor &values() const {
return m_values;
}
const std::vector<uint64_t>& denseShape() const {
return m_dense_shape;
}
};
}
#endif
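
A worked example of getSlice using the stride convention visible above (stride(0) counts elements and is scaled by the type size for non-string tensors); the tensor shape is hypothetical:

#include <twml/RawTensor.h>

void sliceExample() {
  float data[12] = {0};  // a hypothetical [4, 3] float tensor, row-major
  twml::RawTensor batch(data, {4, 3}, {3, 1}, TWML_TYPE_FLOAT,
                        /*is_big_endian=*/false, /*length=*/12);
  // raw_length = 3 (stride) * 4 (sizeof float) = 12 bytes, so slice 1 starts at data + 3
  twml::RawTensor row = batch.getSlice(1);  // dims {3}, strides {1}
  (void)row;
}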

View File

@ -0,0 +1,82 @@
#pragma once
#include <twml/defines.h>
#include <cstddef>
#include <vector>
#include <string>
#ifdef __cplusplus
extern "C" {
#endif
struct twml_tensor__;
typedef twml_tensor__ * twml_tensor;
#ifdef __cplusplus
}
#endif
#ifdef __cplusplus
namespace twml {
class TWMLAPI Tensor
{
private:
twml_type m_type;
void *m_data;
std::vector<uint64_t> m_dims;
std::vector<uint64_t> m_strides;
public:
Tensor() {}
Tensor(void *data, int ndims, const uint64_t *dims, const uint64_t *strides, twml_type type);
Tensor(void *data, const std::vector<uint64_t> &dims, const std::vector<uint64_t> &strides, twml_type type);
const std::vector<uint64_t>& getDims() const {
return m_dims;
}
int getNumDims() const;
uint64_t getDim(int dim) const;
uint64_t getStride(int dim) const;
uint64_t getNumElements() const;
twml_type getType() const;
twml_tensor getHandle();
const twml_tensor getHandle() const;
template<typename T> T *getData();
template<typename T> const T *getData() const;
};
TWMLAPI std::string getTypeName(twml_type type);
TWMLAPI const Tensor *getConstTensor(const twml_tensor t);
TWMLAPI Tensor *getTensor(twml_tensor t);
TWMLAPI uint64_t getSizeOf(twml_type type);
}
#endif
#ifdef __cplusplus
extern "C" {
#endif
TWMLAPI twml_err twml_tensor_create(twml_tensor *tensor, void *data,
int ndims, uint64_t *dims,
uint64_t *strides, twml_type type);
TWMLAPI twml_err twml_tensor_delete(const twml_tensor tensor);
TWMLAPI twml_err twml_tensor_get_type(twml_type *type, const twml_tensor tensor);
TWMLAPI twml_err twml_tensor_get_data(void **data, const twml_tensor tensor);
TWMLAPI twml_err twml_tensor_get_dim(uint64_t *dim, const twml_tensor tensor, int id);
TWMLAPI twml_err twml_tensor_get_num_dims(int *ndims, const twml_tensor tensor);
TWMLAPI twml_err twml_tensor_get_num_elements(uint64_t *nelements, const twml_tensor tensor);
TWMLAPI twml_err twml_tensor_get_stride(uint64_t *stride, const twml_tensor tensor, int id);
#ifdef __cplusplus
}
#endif
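
A construction sketch for the C++ wrapper; the row-major element-stride convention is an assumption consistent with RawTensor::getSlice above:

#include <twml/Tensor.h>
#include <vector>

void tensorExample() {
  float data[6] = {0, 1, 2, 3, 4, 5};
  std::vector<uint64_t> dims = {2, 3};
  std::vector<uint64_t> strides = {3, 1};  // element strides, row-major (assumed)
  twml::Tensor t(data, dims, strides, TWML_TYPE_FLOAT);
  const float *p = t.getData<float>();     // typed view over the same buffer
  (void)p;
}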

View File

@ -0,0 +1,47 @@
#pragma once
#ifdef __cplusplus
#include <twml/defines.h>
#include <twml/RawTensor.h>
#include <cstdint>
#include <unordered_map>
namespace twml {
class TensorRecordReader;
// A class containing the data from TensorRecord.
// - This serves as the base class from which DataRecord and HashedDataRecord derive.
class TWMLAPI TensorRecord {
public:
typedef std::unordered_map<int64_t, const RawTensor> RawTensors;
typedef std::unordered_map<int64_t, const RawSparseTensor> RawSparseTensors;
private:
RawTensors m_tensors;
RawSparseTensors m_sparse_tensors;
public:
const RawTensors &getRawTensors() {
return m_tensors;
}
const RawTensor& getRawTensor(int64_t id) const {
return m_tensors.at(id);
}
const RawSparseTensor& getRawSparseTensor(int64_t id) const {
return m_sparse_tensors.at(id);
}
void addRawTensor(int64_t id, const RawTensor &tensor) {
m_tensors.emplace(id, tensor);
}
friend class TensorRecordReader;
};
}
#endif

View File

@ -0,0 +1,34 @@
#pragma once
#ifdef __cplusplus
#include <twml/defines.h>
#include <twml/TensorRecord.h>
#include <twml/ThriftReader.h>
#include <cstdint>
#include <vector>
#include <string>
#include <unordered_map>
namespace twml {
// Class that parses the thrift objects as defined in tensor.thrift
class TWMLAPI TensorRecordReader : public ThriftReader {
std::vector<uint64_t> readShape();
template<typename T> RawTensor readTypedTensor();
RawTensor readRawTypedTensor();
RawTensor readStringTensor();
RawTensor readGeneralTensor();
RawSparseTensor readCOOSparseTensor();
public:
void readTensor(const int feature_type, TensorRecord *record);
void readSparseTensor(const int feature_type, TensorRecord *record);
TensorRecordReader(const uint8_t *buffer) : ThriftReader(buffer) {}
};
}
#endif

View File

@ -0,0 +1,35 @@
#pragma once
#ifdef __cplusplus
#include <twml/defines.h>
#include <twml/TensorRecord.h>
namespace twml {
// Encodes tensors as DataRecord/TensorRecord-compatible Thrift.
// DataRecordWriter relies on this class to encode the tensor fields.
class TWMLAPI TensorRecordWriter {
private:
uint32_t m_records_written;
twml::ThriftWriter &m_thrift_writer;
void writeTensor(const RawTensor &tensor);
void writeRawTensor(const RawTensor &tensor);
public:
TensorRecordWriter(twml::ThriftWriter &thrift_writer):
m_records_written(0),
m_thrift_writer(thrift_writer) { }
uint32_t getRecordsWritten();
// The caller (usually DataRecordWriter) must first write a struct field header,
// e.g. thrift_writer.writeStructFieldHeader(TTYPE_MAP, DR_GENERAL_TENSOR).
//
// All tensors are written as RawTensors except for StringTensors.
uint64_t write(twml::TensorRecord &record);
};
}
#endif

View File

@ -0,0 +1,56 @@
#pragma once
#ifdef __cplusplus
#include <twml/defines.h>
#include <cstdint>
#include <cstddef>
#include <cstring>
namespace twml {
class ThriftReader {
protected:
const uint8_t *m_buffer;
public:
ThriftReader(const uint8_t *buffer): m_buffer(buffer) {}
const uint8_t *getBuffer() { return m_buffer; }
void setBuffer(const uint8_t *buffer) { m_buffer = buffer; }
template<typename T> T readDirect() {
T val;
memcpy(&val, m_buffer, sizeof(T));
m_buffer += sizeof(T);
return val;
}
template<typename T> void skip() {
m_buffer += sizeof(T);
}
void skipLength(size_t length) {
m_buffer += length;
}
uint8_t readByte();
int16_t readInt16();
int32_t readInt32();
int64_t readInt64();
double readDouble();
template<typename T> inline
int32_t getRawBuffer(const uint8_t **begin) {
int32_t length = readInt32();
*begin = m_buffer;
skipLength(length * sizeof(T));
return length;
}
};
}
#endif
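
The readers above consume length-prefixed fields through getRawBuffer, which reads the i32 length, exposes the payload, and advances past it; a small sketch with an assumed buffer:

void readBlob(const uint8_t *buffer) {
  twml::ThriftReader reader(buffer);
  const uint8_t *bytes = nullptr;
  int32_t n = reader.getRawBuffer<uint8_t>(&bytes);
  // bytes now points at n raw bytes inside the original buffer (zero-copy)
  (void)n;
}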

View File

@ -0,0 +1,59 @@
#pragma once
#ifdef __cplusplus
#include <twml/defines.h>
#include <cstdint>
#include <cstddef>
#include <cstring>
namespace twml {
// A low-level binary Thrift writer that can also compute output size
// in dry run mode without copying memory. See also https://git.io/vNPiv
//
// WARNING: Users of this class are responsible for generating valid Thrift
// by following the Thrift binary protocol (https://git.io/vNPiv).
class TWMLAPI ThriftWriter {
protected:
bool m_dry_run;
uint8_t *m_buffer;
size_t m_buffer_size;
size_t m_bytes_written;
template <typename T> inline uint64_t write(T val);
public:
// buffer: Memory to write the binary Thrift to.
// buffer_size: Length of the buffer.
// dry_run: If true, just count bytes 'written' but do not copy memory.
// If false, write binary Thrift to the buffer normally.
// Useful to determine output size for TensorFlow allocations.
ThriftWriter(uint8_t *buffer, size_t buffer_size, bool dry_run = false) :
m_dry_run(dry_run),
m_buffer(buffer),
m_buffer_size(buffer_size),
m_bytes_written(0) {}
// total bytes written to the buffer since object creation
uint64_t getBytesWritten();
// encode headers and values into the buffer
uint64_t writeStructFieldHeader(int8_t field_type, int16_t field_id);
uint64_t writeStructStop();
uint64_t writeListHeader(int8_t element_type, int32_t num_elems);
uint64_t writeMapHeader(int8_t key_type, int8_t val_type, int32_t num_elems);
uint64_t writeDouble(double val);
uint64_t writeInt8(int8_t val);
uint64_t writeInt16(int16_t val);
uint64_t writeInt32(int32_t val);
uint64_t writeInt64(int64_t val);
uint64_t writeBinary(const uint8_t *bytes, int32_t num_bytes);
// Clients expect UTF-8-encoded strings per the Thrift protocol
// (often this is used to send raw bytes rather than real strings).
uint64_t writeString(std::string str);
uint64_t writeBool(bool val);
};
}
#endif
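
The dry-run flag enables the size-then-write pattern the comment above describes; a sketch, with TTYPE_DOUBLE assumed to come from the internal thrift header used by the .cpp files:

#include <twml/ThriftWriter.h>
#include <vector>

void sizeThenWrite() {
  // Pass 1: dry run counts bytes without touching memory.
  twml::ThriftWriter sizer(nullptr, 0, /*dry_run=*/true);
  sizer.writeStructFieldHeader(TTYPE_DOUBLE, 1);  // TTYPE_DOUBLE: assumed constant
  sizer.writeDouble(0.5);
  sizer.writeStructStop();
  std::vector<uint8_t> buf(sizer.getBytesWritten());

  // Pass 2: identical calls against a real buffer of exactly that size.
  twml::ThriftWriter writer(buf.data(), buf.size());
  writer.writeStructFieldHeader(TTYPE_DOUBLE, 1);
  writer.writeDouble(0.5);
  writer.writeStructStop();
}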

View File

@ -0,0 +1,69 @@
#pragma once
#include <twml/defines.h>
#include <stddef.h>
#include <stdint.h>
#ifdef __cplusplus
#include <string>  // for the Type<std::string> specialization below
namespace twml {
template<typename T> struct Type;
template<> struct Type<float>
{
enum {
type = TWML_TYPE_FLOAT,
};
};
template<> struct Type<std::string>
{
enum {
type = TWML_TYPE_STRING,
};
};
template<> struct Type<double>
{
enum {
type = TWML_TYPE_DOUBLE,
};
};
template<> struct Type<int64_t>
{
enum {
type = TWML_TYPE_INT64,
};
};
template<> struct Type<int32_t>
{
enum {
type = TWML_TYPE_INT32,
};
};
template<> struct Type<int8_t>
{
enum {
type = TWML_TYPE_INT8,
};
};
template<> struct Type<uint8_t>
{
enum {
type = TWML_TYPE_UINT8,
};
};
template<> struct Type<bool>
{
enum {
type = TWML_TYPE_BOOL,
};
};
}
#endif
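
The trait maps C++ types to twml_type values at compile time; for instance:

#include <twml/Type.h>

static_assert(twml::Type<float>::type == TWML_TYPE_FLOAT, "float maps to TWML_TYPE_FLOAT");
static_assert(twml::Type<int64_t>::type == TWML_TYPE_INT64, "int64_t maps to TWML_TYPE_INT64");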

View File

@ -0,0 +1,42 @@
#ifndef TWML_LIBTWML_INCLUDE_TWML_COMMON_H_
#define TWML_LIBTWML_INCLUDE_TWML_COMMON_H_
#define USE_ABSEIL_HASH 1
#if defined(USE_ABSEIL_HASH)
#include "absl/container/flat_hash_map.h"
#include "absl/container/flat_hash_set.h"
#elif defined(USE_DENSE_HASH)
#include <sparsehash/dense_hash_map>
#include <sparsehash/dense_hash_set>
#else
#include <unordered_map>
#include <unordered_set>
#endif // USE_ABSEIL_HASH
namespace twml {
#if defined(USE_ABSEIL_HASH)
template<typename KeyType, typename ValueType>
using Map = absl::flat_hash_map<KeyType, ValueType>;
template<typename KeyType>
using Set = absl::flat_hash_set<KeyType>;
#elif defined(USE_DENSE_HASH)
// Do not use this unless a proper empty key can be found.
template<typename KeyType, typename ValueType>
using Map = google::dense_hash_map<KeyType, ValueType>;
template<typename KeyType>
using Set = google::dense_hash_set<KeyType>;
#else
template<typename KeyType, typename ValueType>
using Map = std::unordered_map<KeyType, ValueType>;
template<typename KeyType>
using Set = std::unordered_set<KeyType>;
#endif // USE_ABSEIL_HASH
} // namespace twml
#endif // TWML_LIBTWML_INCLUDE_TWML_COMMON_H_

View File

@ -0,0 +1,36 @@
#pragma once
#include <stdbool.h>
#ifdef __cplusplus
extern "C" {
#endif
typedef enum {
TWML_TYPE_FLOAT32 = 1,
TWML_TYPE_FLOAT64 = 2,
TWML_TYPE_INT32 = 3,
TWML_TYPE_INT64 = 4,
TWML_TYPE_INT8 = 5,
TWML_TYPE_UINT8 = 6,
TWML_TYPE_BOOL = 7,
TWML_TYPE_STRING = 8,
TWML_TYPE_FLOAT = TWML_TYPE_FLOAT32,
TWML_TYPE_DOUBLE = TWML_TYPE_FLOAT64,
TWML_TYPE_UNKNOWN = -1,
} twml_type;
typedef enum {
TWML_ERR_NONE = 1000,
TWML_ERR_SIZE = 1001,
TWML_ERR_TYPE = 1002,
TWML_ERR_THRIFT = 1100,
TWML_ERR_IO = 1200,
TWML_ERR_UNKNOWN = 1999,
} twml_err;
#ifdef __cplusplus
}
#endif
#define TWMLAPI __attribute__((visibility("default")))
#ifndef TWML_INDEX_BASE
#define TWML_INDEX_BASE 0
#endif

View File

@ -0,0 +1,22 @@
#pragma once
#include <twml/common.h>
#include <twml/defines.h>
#include <twml/Tensor.h>
#ifdef __cplusplus
namespace twml {
TWMLAPI void discretizerInfer(
Tensor &output_keys,
Tensor &output_vals,
const Tensor &input_ids,
const Tensor &input_vals,
const Tensor &bin_ids,
const Tensor &bin_vals,
const Tensor &feature_offsets,
int output_bits,
const Map<int64_t, int64_t> &ID_to_index,
int start_compute,
int end_compute,
int output_start);
} // namespace twml
#endif

View File

@ -0,0 +1,26 @@
#pragma once
#include <twml/defines.h>
#include <twml/Tensor.h>
#ifdef __cplusplus
namespace twml {
// Adding these as an easy way to test the wrappers
TWMLAPI void add1(Tensor &output, const Tensor input);
TWMLAPI void copy(Tensor &output, const Tensor input);
TWMLAPI int64_t featureId(const std::string &feature);
}
#endif
#ifdef __cplusplus
extern "C" {
#endif
// Adding these as an easy way to test the wrappers
TWMLAPI twml_err twml_add1(twml_tensor output, const twml_tensor input);
TWMLAPI twml_err twml_copy(twml_tensor output, const twml_tensor input);
TWMLAPI twml_err twml_get_feature_id(int64_t *result, const uint64_t len, const char *str);
#ifdef __cplusplus
}
#endif

View File

@ -0,0 +1,22 @@
#pragma once
#include <twml/common.h>
#include <twml/defines.h>
#include <twml/Tensor.h>
#include <unordered_map>
#ifdef __cplusplus
namespace twml {
TWMLAPI void hashDiscretizerInfer(
Tensor &output_keys,
Tensor &output_vals,
const Tensor &input_ids,
const Tensor &input_vals,
int n_bin,
const Tensor &bin_vals,
int output_bits,
const Map<int64_t, int64_t> &ID_to_index,
int start_compute,
int end_compute,
int64_t options);
} // namespace twml
#endif

View File

@ -0,0 +1,45 @@
#pragma once
#include <twml/Error.h>
namespace twml {
namespace io {
class IOError : public twml::Error {
public:
enum Status {
OUT_OF_RANGE = 1,
WRONG_MAGIC = 2,
WRONG_HEADER = 3,
ERROR_HEADER_CHECKSUM = 4,
INVALID_METHOD = 5,
USING_RESERVED = 6,
ERROR_HEADER_EXTRA_FIELD_CHECKSUM = 7,
CANT_FIT_OUTPUT = 8,
SPLIT_FILE = 9,
BLOCK_SIZE_TOO_LARGE = 10,
SOURCE_LARGER_THAN_DESTINATION = 11,
DESTINATION_LARGER_THAN_CAPACITY = 12,
HEADER_FLAG_MISMATCH = 13,
NOT_ENOUGH_INPUT = 14,
ERROR_SOURCE_BLOCK_CHECKSUM = 15,
COMPRESSED_DATA_VIOLATION = 16,
ERROR_DESTINATION_BLOCK_CHECKSUM = 17,
EMPTY_RECORD = 18,
MALFORMED_MEMORY_RECORD = 19,
UNSUPPORTED_OUTPUT_TYPE = 20,
OTHER_ERROR
};
IOError(Status status);
Status status() const {
return m_status;
}
private:
Status m_status;
};
}
}

View File

@ -0,0 +1,51 @@
#pragma once
#include <twml/defines.h>
#include <twml/Tensor.h>
#ifdef __cplusplus
namespace twml {
TWMLAPI void linearInterpolation(
Tensor output,
const Tensor input,
const Tensor xs,
const Tensor ys);
TWMLAPI void nearestInterpolation(
Tensor output,
const Tensor input,
const Tensor xs,
const Tensor ys);
TWMLAPI void mdlInfer(
Tensor &output_keys,
Tensor &output_vals,
const Tensor &input_keys,
const Tensor &input_vals,
const Tensor &bin_ids,
const Tensor &bin_vals,
const Tensor &feature_offsets,
bool return_bin_indices = false);
}
#endif
#ifdef __cplusplus
extern "C" {
#endif
TWMLAPI twml_err twml_optim_nearest_interpolation(
twml_tensor output,
const twml_tensor input,
const twml_tensor xs,
const twml_tensor ys);
TWMLAPI twml_err twml_optim_mdl_infer(
twml_tensor output_keys,
twml_tensor output_vals,
const twml_tensor input_keys,
const twml_tensor input_vals,
const twml_tensor bin_ids,
const twml_tensor bin_vals,
const twml_tensor feature_offsets,
const bool return_bin_indices = false);
#ifdef __cplusplus
}
#endif

View File

@ -0,0 +1,18 @@
#pragma once
#ifdef __cplusplus
#include <cstdint>  // int64_t, int32_t, uint8_t used below
namespace twml {
inline int64_t mixDiscreteIdAndValue(int64_t key, int64_t value) {
key ^= ((17LL + value) * 2654435761LL);
return key;
}
inline int64_t mixStringIdAndValue(int64_t key, int32_t str_len, const uint8_t *str) {
int32_t hash = 0;
for (int32_t i = 0; i < str_len; i++) {
hash = (31 * hash) + (int32_t)str[i];
}
return key ^ hash;
}
}
#endif
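
A worked call with illustrative numbers:

void mixExample() {
  // mixDiscreteIdAndValue(10, 3) = 10 ^ ((17 + 3) * 2654435761)
  //                              = 10 ^ 53088715220 = 53088715230
  int64_t mixed = twml::mixDiscreteIdAndValue(10, 3);
  (void)mixed;
}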

9
twml/libtwml/setup.cfg Normal file
View File

@ -0,0 +1,9 @@
[bdist_wheel]
universal=1
[build]
build-lib=build_dir
build-temp=build_dir
[bdist]
bdist-base=build_dir

12
twml/libtwml/setup.py Normal file
View File

@ -0,0 +1,12 @@
"""
libtwml setup.py module
"""
from setuptools import setup, find_packages
setup(
name='libtwml',
version='2.0',
description="Tensorflow C++ ops for twml",
packages=find_packages(),
data_files=[('', ['libtwml_tf.so'])],
)

View File

@ -0,0 +1,52 @@
#include "internal/thrift.h"
#include "internal/error.h"
#include <twml/DataRecordReader.h>
#include <twml/HashedDataRecordReader.h>
#include <twml/BatchPredictionRequest.h>
#include <twml/Error.h>
#include <algorithm>
#include <cstring>
#include <cstdint>
namespace twml {
template<typename RecordType>
void GenericBatchPredictionRequest<RecordType>::decode(Reader &reader) {
uint8_t feature_type = reader.readByte();
while (feature_type != TTYPE_STOP) {
int16_t field_id = reader.readInt16();
switch (field_id) {
case 1: {
CHECK_THRIFT_TYPE(feature_type, TTYPE_LIST, "list");
CHECK_THRIFT_TYPE(reader.readByte(), TTYPE_STRUCT, "list_element");
int32_t length = reader.readInt32();
m_requests.resize(length, RecordType(this->num_labels, this->num_weights));
for (auto &request : m_requests) {
request.decode(reader);
}
break;
}
case 2: {
CHECK_THRIFT_TYPE(feature_type, TTYPE_STRUCT, "commonFeatures");
m_common_features.decode(reader);
break;
}
default: throw ThriftInvalidField(field_id, __func__);
}
feature_type = reader.readByte();
}
return;
}
// Instantiate decoders.
template void GenericBatchPredictionRequest<HashedDataRecord>::decode(HashedDataRecordReader &reader);
template void GenericBatchPredictionRequest<DataRecord>::decode(DataRecordReader &reader);
} // namespace twml

View File

@ -0,0 +1,125 @@
#include "internal/endianutils.h"
#include "internal/error.h"
#include "internal/thrift.h"
#include <twml/Tensor.h>
#include <twml/BatchPredictionResponse.h>
#include <twml/DataRecord.h>
#include <twml/ThriftWriter.h>
#include <twml/DataRecordWriter.h>
#include <inttypes.h>
#include <stdint.h>
#include <unistd.h>
#include <string.h>
#include <algorithm>
// When the number of predictions is very high, as in some Ads use cases, the generic
// thrift encoder becomes extremely expensive because we have to deal with lua tables.
// This function is a special operation to efficiently write batch prediction responses
// based on tensors.
namespace twml {
BatchPredictionResponse::BatchPredictionResponse(
const Tensor &keys, const Tensor &values,
const Tensor &dense_keys, const std::vector<RawTensor> &dense_values
) : keys_(keys), values_(values), dense_keys_(dense_keys), dense_values_(dense_values) {
// determine batch size
if (values_.getNumDims() > 0) {
batch_size_ = values_.getDim(0);
} else if (dense_keys_.getNumElements() < 1) {
throw twml::Error(TWML_ERR_TYPE, "Continuous values and dense tensors are both empty");
} else if (dense_keys_.getNumElements() != dense_values_.size()) {
throw twml::Error(TWML_ERR_TYPE, "Number of tensors not equal to number of keys");
} else {
// dim 0 for each tensor indexes batch elements
std::vector<uint64_t> batch_sizes;
batch_sizes.reserve(dense_values_.size());
for (int i = 0; i < dense_values_.size(); i++)
batch_sizes.push_back(dense_values_.at(i).getDim(0));
if (std::adjacent_find(
batch_sizes.begin(),
batch_sizes.end(),
std::not_equal_to<uint64_t>()) != batch_sizes.end())
throw twml::Error(TWML_ERR_TYPE, "Batch size (dim 0) for all tensors must be the same");
batch_size_ = dense_values.at(0).getDim(0);
}
}
void BatchPredictionResponse::encode(twml::ThriftWriter &thrift_writer) {
if (hasContinuous()) {
switch (values_.getType()) {
case TWML_TYPE_FLOAT:
serializePredictions<float>(thrift_writer);
break;
case TWML_TYPE_DOUBLE:
serializePredictions<double>(thrift_writer);
break;
default:
throw twml::Error(TWML_ERR_TYPE, "Predictions must be float or double.");
}
} else {
// dense tensor predictions
serializePredictions<double>(thrift_writer);
}
}
template <typename T>
void BatchPredictionResponse::serializePredictions(twml::ThriftWriter &thrift_writer) {
twml::DataRecordWriter record_writer = twml::DataRecordWriter(thrift_writer);
// start BatchPredictionResponse
thrift_writer.writeStructFieldHeader(TTYPE_LIST, BPR_PREDICTIONS);
thrift_writer.writeListHeader(TTYPE_STRUCT, getBatchSize());
for (int i = 0; i < getBatchSize(); i++) {
twml::DataRecord record = twml::DataRecord();
if (hasContinuous()) {
const T *values = values_.getData<T>();
const int64_t *local_keys = keys_.getData<int64_t>();
const T *local_values = values + (i * getPredictionSize());
record.addContinuous(local_keys, getPredictionSize(), local_values);
}
if (hasDenseTensors()) {
const int64_t *local_dense_keys = dense_keys_.getData<int64_t>();
for (int j = 0; j < dense_keys_.getNumElements(); j++) {
const RawTensor &dense_value = dense_values_.at(j).getSlice(i);
record.addRawTensor(local_dense_keys[j], dense_value);
}
}
record_writer.write(record);
}
// end BatchPredictionResponse
thrift_writer.writeStructStop();
}
// calculate expected binary Thrift size (no memory is copied)
uint64_t BatchPredictionResponse::encodedSize() {
bool dry_mode = true;
twml::ThriftWriter dry_writer = twml::ThriftWriter(nullptr, 0, dry_mode);
encode(dry_writer);
return dry_writer.getBytesWritten();
}
void BatchPredictionResponse::write(Tensor &result) {
size_t result_size = result.getNumElements();
uint8_t *result_data = result.getData<uint8_t>();
if (result_size != this->encodedSize()) {
throw twml::Error(TWML_ERR_SIZE, "Sizes do not match");
}
twml::ThriftWriter writer = twml::ThriftWriter(result_data, result_size);
encode(writer);
}
} // namespace twml
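
encodedSize and write together form the expected two-pass flow; a sketch (allocating `result` is the caller's job, typically a TensorFlow op):

uint64_t respond(twml::BatchPredictionResponse &response, twml::Tensor &result) {
  uint64_t needed = response.encodedSize();  // dry-run ThriftWriter, no memory copied
  // `result` must hold exactly `needed` uint8 elements; write() throws TWML_ERR_SIZE otherwise.
  response.write(result);
  return needed;
}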

View File

@ -0,0 +1,145 @@
#include <twml/BlockFormatReader.h>
#include <cstring>
#include <stdexcept>
#define OFFSET_CHUNK (32768)
#define RECORDS_PER_BLOCK (100)
#define WIRE_TYPE_VARINT (0)
#define WIRE_TYPE_64BIT (1)
#define WIRE_TYPE_LENGTH_PREFIXED (2)
/*
This was all extracted from the ancient elephant bird scrolls
https://github.com/twitter/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/io/BinaryBlockReader.java
*/
#define MARKER_SIZE (16)
static uint8_t _marker[MARKER_SIZE] = {
0x29, 0xd8, 0xd5, 0x06, 0x58, 0xcd, 0x4c, 0x29,
0xb2, 0xbc, 0x57, 0x99, 0x21, 0x71, 0xbd, 0xff
};
namespace twml {
BlockFormatReader::BlockFormatReader():
record_size_(0), block_pos_(0), block_end_(0) {
memset(classname_, 0, sizeof(classname_));
}
bool BlockFormatReader::next() {
record_size_ = read_one_record_size();
if (record_size_ < 0) {
record_size_ = 0;
return false;
}
return true;
}
int BlockFormatReader::read_int() {
uint8_t buff[4];
if (read_bytes(buff, 1, 4) != 4)
return -1;
return static_cast<int>(buff[0])
| (static_cast<int>(buff[1]) << 8)
| (static_cast<int>(buff[2]) << 16)
| (static_cast<int>(buff[3]) << 24);
}
int BlockFormatReader::consume_marker(int scan) {
uint8_t buff[MARKER_SIZE];
if (read_bytes(buff, 1, MARKER_SIZE) != MARKER_SIZE)
return 0;
while (memcmp(buff, _marker, MARKER_SIZE) != 0) {
if (!scan) return 0;
memmove(buff, buff + 1, MARKER_SIZE - 1);
if (read_bytes(buff + MARKER_SIZE - 1, 1, 1) != 1)
return 0;
}
return 1;
}
int BlockFormatReader::unpack_varint_i32() {
int value = 0;
for (int i = 0; i < 10; i++) {
uint8_t x;
if (read_bytes(&x, 1, 1) != 1)
return -1;
block_pos_++;
value |= (static_cast<int>(x & 0x7F)) << (i * 7);
if ((x & 0x80) == 0) break;
}
return value;
}
int BlockFormatReader::unpack_tag_and_wiretype(uint32_t *tag, uint32_t *wiretype) {
uint8_t x;
if (read_bytes(&x, 1, 1) != 1)
return -1;
block_pos_++;
*tag = (x & 0x7f) >> 3;
*wiretype = x & 7;
if ((x & 0x80) == 0)
return 0;
return -1;
}
int BlockFormatReader::unpack_string(char *out, uint64_t max_out_len) {
int len = unpack_varint_i32();
if (len < 0) return -1;
uint64_t slen = len;
if (slen + 1 > max_out_len) return -1;
uint64_t n = read_bytes(out, 1, slen);
if (n != slen) return -1;
block_pos_ += n;
out[n] = 0;
return 0;
}
int BlockFormatReader::read_one_record_size() {
for (int i = 0; i < 2; i++) {
if (block_end_ == 0) {
while (consume_marker(1)) {
int block_size = read_int();
if (block_size > 0) {
block_pos_ = 0;
block_end_ = block_size;
uint32_t tag, wiretype;
if (unpack_tag_and_wiretype(&tag, &wiretype))
throw std::invalid_argument("unsupported tag and wiretype");
if (tag != 1 || wiretype != WIRE_TYPE_VARINT)
throw std::invalid_argument("unexpected tag and wiretype");
int version = unpack_varint_i32();
if (version != 1)
throw std::invalid_argument("unsupported version");
if (unpack_tag_and_wiretype(&tag, &wiretype))
throw std::invalid_argument("unsupported tag and wiretype");
if (tag != 2 || wiretype != WIRE_TYPE_LENGTH_PREFIXED)
throw std::invalid_argument("unexpected tag and wiretype");
if (unpack_string(classname_, sizeof(classname_)-1))
throw std::invalid_argument("unsupported class name");
break;
}
}
}
if (block_pos_ < block_end_) {
uint32_t tag, wiretype;
if (unpack_tag_and_wiretype(&tag, &wiretype))
throw std::invalid_argument("unsupported tag and wiretype");
if (tag != 3 || wiretype != WIRE_TYPE_LENGTH_PREFIXED)
throw std::invalid_argument("unexpected tag and wiretype");
int record_size = unpack_varint_i32();
block_pos_ += record_size;
return record_size;
} else {
block_end_ = 0;
}
}
return -1;
}
} // namespace twml
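
Reconstructed from the parsing logic above, each block in the stream has this layout (the field framing follows protobuf wire types):

[16-byte sync marker]
[int32 block length, little-endian]
[field 1, varint]            format version, must be 1
[field 2, length-prefixed]   writer class name
[field 3, length-prefixed]   one record (repeated until the block length is consumed)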

View File

@ -0,0 +1,163 @@
#include "internal/error.h"
#include <cstring>
#include <iostream>
#include <twml/BlockFormatWriter.h>
#define WIRE_TYPE_LENGTH_PREFIXED (2)
#define WIRE_TYPE_VARINT (0)
#ifndef PATH_MAX
#define PATH_MAX (8096)
#endif
#define MARKER_SIZE (16)
static uint8_t _marker[MARKER_SIZE] = {
0x29, 0xd8, 0xd5, 0x06, 0x58, 0xcd, 0x4c, 0x29,
0xb2, 0xbc, 0x57, 0x99, 0x21, 0x71, 0xbd, 0xff
};
namespace twml {
BlockFormatWriter::BlockFormatWriter(const char *file_name, int record_per_block) :
file_name_(file_name), record_index_(0), records_per_block_(record_per_block) {
snprintf(temp_file_name_, PATH_MAX, "%s.block", file_name);
outputfile_ = fopen(file_name_, "a");
}
BlockFormatWriter::~BlockFormatWriter() {
fclose(outputfile_);
}
// TODO: use fstream
int BlockFormatWriter::pack_tag_and_wiretype(FILE *buffer, uint32_t tag, uint32_t wiretype) {
uint8_t x = ((tag & 0x0f) << 3) | (wiretype & 0x7);
size_t n = fwrite(&x, 1, 1, buffer);
if (n != 1) {
return -1;
}
return 0;
}
int BlockFormatWriter::pack_varint_i32(FILE *buffer, int value) {
for (int i = 0; i < 10; i++) {
uint8_t x = value & 0x7F;
value = value >> 7;
if (value != 0) x |= 0x80;
size_t n = fwrite(&x, 1, 1, buffer);
if (n != 1) {
return -1;
}
if (value == 0) break;
}
return 0;
}
int BlockFormatWriter::pack_string(FILE *buffer, const char *in, size_t in_len) {
if (pack_varint_i32(buffer, in_len)) return -1;
size_t n = fwrite(in, 1, in_len, buffer);
if (n != in_len) return -1;
return 0;
}
int BlockFormatWriter::write_int(FILE *buffer, int value) {
uint8_t buff[4];
buff[0] = value & 0xff;
buff[1] = (value >> 8) & 0xff;
buff[2] = (value >> 16) & 0xff;
buff[3] = (value >> 24) & 0xff;
size_t n = fwrite(buff, 1, 4, buffer);
if (n != 4) {
return -1;
}
return 0;
}
int BlockFormatWriter::write(const char *class_name, const char *record, int record_len) {
if (record) {
record_index_++;
// The buffer holds at most records_per_block_ records (one block).
FILE *buffer = fopen(temp_file_name_, "a");
if (!buffer) return -1;
if (ftell(buffer) == 0) {
if (pack_tag_and_wiretype(buffer, 1, WIRE_TYPE_VARINT))
throw std::invalid_argument("Error writing tag and wiretype");
if (pack_varint_i32(buffer, 1))
throw std::invalid_argument("Error writing varint_i32");
if (pack_tag_and_wiretype(buffer, 2, WIRE_TYPE_LENGTH_PREFIXED))
throw std::invalid_argument("Error writing tag and wiretype");
if (pack_string(buffer, class_name, strlen(class_name)))
throw std::invalid_argument("Error writing class name");
}
if (pack_tag_and_wiretype(buffer, 3, WIRE_TYPE_LENGTH_PREFIXED))
throw std::invalid_argument("Error writing tag and wiretype");
if (pack_string(buffer, record, record_len))
throw std::invalid_argument("Error writing record");
fclose(buffer);
}
if ((record_index_ % records_per_block_) == 0) {
flush();
}
return 0;
}
int BlockFormatWriter::flush() {
// Flush the buffered records to the output file
FILE *buffer = fopen(temp_file_name_, "r");
if (buffer) {
fseek(buffer, 0, SEEK_END);
int64_t block_size = ftell(buffer);
fseek(buffer, 0, SEEK_SET);
if (fwrite(_marker, sizeof(_marker), 1, outputfile_) != 1) return 1;
if (write_int(outputfile_, block_size)) return 1;
uint8_t buff[4096];
while (1) {
size_t n = fread(buff, 1, sizeof(buff), buffer);
if (n) {
size_t x = fwrite(buff, 1, n, outputfile_);
if (x != n) return 1;
}
if (n != sizeof(buff)) break;
}
fclose(buffer);
// Remove the buffer
if (remove(temp_file_name_)) return 1;
}
return 0;
}
block_format_writer BlockFormatWriter::getHandle() {
return reinterpret_cast<block_format_writer>(this);
}
BlockFormatWriter *getBlockFormatWriter(block_format_writer w) {
return reinterpret_cast<BlockFormatWriter *>(w);
}
} // namespace twml
twml_err block_format_writer_create(block_format_writer *w, const char *file_name, int records_per_block) {
HANDLE_EXCEPTIONS(
twml::BlockFormatWriter *writer = new twml::BlockFormatWriter(file_name, records_per_block);
*w = reinterpret_cast<block_format_writer>(writer););
return TWML_ERR_NONE;
}
twml_err block_format_write(block_format_writer w, const char *class_name, const char *record, int record_len) {
HANDLE_EXCEPTIONS(
twml::BlockFormatWriter *writer = twml::getBlockFormatWriter(w);
writer->write(class_name, record, record_len););
return TWML_ERR_NONE;
}
twml_err block_format_flush(block_format_writer w) {
HANDLE_EXCEPTIONS(
twml::BlockFormatWriter *writer = twml::getBlockFormatWriter(w);
writer->flush(););
return TWML_ERR_NONE;
}
twml_err block_format_writer_delete(const block_format_writer w) {
HANDLE_EXCEPTIONS(
delete twml::getBlockFormatWriter(w););
return TWML_ERR_NONE;
}

View File

@ -0,0 +1,36 @@
set(CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR})
cmake_minimum_required(VERSION 2.8 FATAL_ERROR)
cmake_policy(VERSION 2.8)
set(TWML_VERSION "2.0.0")
string(REPLACE "." ";" TWML_VERSION_LIST ${TWML_VERSION})
list(GET TWML_VERSION_LIST 0 TWML_SOVERSION)
execute_process(
COMMAND
$ENV{LIBTWML_HOME}/src/ops/scripts/get_inc.sh
RESULT_VARIABLE
TF_RES
OUTPUT_VARIABLE
TF_INC)
file(GLOB_RECURSE sources *.cpp)
set (CMAKE_CXX_FLAGS "-Wall -std=c++11 ${CMAKE_CXX_FLAGS} -fPIC")
add_library(twml STATIC ${sources})
target_include_directories(
twml
PUBLIC
${CMAKE_CURRENT_SOURCE_DIR}/../../include
PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}
${TF_INC} # Abseil dependency from TensorFlow
)
set_target_properties(twml PROPERTIES
VERSION "${TWML_VERSION}"
SOVERSION "${TWML_SOVERSION}"
)

View File

@ -0,0 +1 @@
exclude_files=murmur_hash3.cpp

View File

@ -0,0 +1,72 @@
#include "internal/thrift.h"
#include "internal/error.h"
#include <twml/utilities.h>
#include <twml/DataRecord.h>
#include <twml/DataRecordReader.h>
#include <twml/Error.h>
#include <cstring>
#include <cstdint>
namespace twml {
void DataRecord::decode(DataRecordReader &reader) {
uint8_t feature_type = reader.readByte();
while (feature_type != TTYPE_STOP) {
int16_t field_id = reader.readInt16();
switch (field_id) {
case DR_BINARY:
reader.readBinary(feature_type, this);
break;
case DR_CONTINUOUS:
reader.readContinuous(feature_type, this);
break;
case DR_DISCRETE:
reader.readDiscrete(feature_type, this);
break;
case DR_STRING:
reader.readString(feature_type, this);
break;
case DR_SPARSE_BINARY:
reader.readSparseBinary(feature_type, this);
break;
case DR_SPARSE_CONTINUOUS:
reader.readSparseContinuous(feature_type, this);
break;
case DR_BLOB:
reader.readBlob(feature_type, this);
break;
case DR_GENERAL_TENSOR:
reader.readTensor(feature_type, dynamic_cast<TensorRecord *>(this));
break;
case DR_SPARSE_TENSOR:
reader.readSparseTensor(feature_type, dynamic_cast<TensorRecord *>(this));
break;
default:
throw ThriftInvalidField(field_id, "DataRecord::decode");
}
feature_type = reader.readByte();
}
}
void DataRecord::addLabel(int64_t id, double label) {
m_labels[id] = label;
}
void DataRecord::addWeight(int64_t id, double val) {
m_weights[id] = val;
}
void DataRecord::clear() {
std::fill(m_labels.begin(), m_labels.end(), std::nanf(""));
std::fill(m_weights.begin(), m_weights.end(), 0.0);
m_binary.clear();
m_continuous.clear();
m_discrete.clear();
m_string.clear();
m_sparsebinary.clear();
m_sparsecontinuous.clear();
}
} // namespace twml

View File

@ -0,0 +1,230 @@
#include "internal/thrift.h"
#include "internal/error.h"
#include <string>
#include <cmath>
#include <twml/DataRecordReader.h>
namespace twml {
inline std::string bufferToString(int32_t str_len, const uint8_t *str) {
return std::string(str, str + str_len);
}
bool DataRecordReader::keepKey(const int64_t &key, int64_t &code) {
auto it = m_keep_map->find(key);
if (it == m_keep_map->end()) return false;
code = it->second;
return true;
}
bool DataRecordReader::isLabel(const int64_t &key, int64_t &code) {
if (m_labels_map == nullptr) return false;
auto it = m_labels_map->find(key);
if (it == m_labels_map->end()) return false;
code = it->second;
return true;
}
bool DataRecordReader::isWeight(const int64_t &key, int64_t &code) {
if (m_weights_map == nullptr) return false;
auto it = m_weights_map->find(key);
if (it == m_weights_map->end()) return false;
code = it->second;
return true;
}
void DataRecordReader::readBinary(
const int feature_type,
DataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_SET, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
int32_t length = readInt32();
int64_t id, code;
#ifdef USE_DENSE_HASH
record->m_binary.resize(2 * length);
#else
record->m_binary.reserve(2 * length);
#endif
for (int32_t i = 0; i < length; i++) {
id = readInt64();
record->m_binary.insert(id);
if (isLabel(id, code)) {
record->addLabel(code);
}
}
}
void DataRecordReader::readContinuous(
const int feature_type,
DataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_DOUBLE, "value_type");
int32_t length = readInt32();
int64_t id, code;
#ifdef USE_DENSE_HASH
record->m_continuous.resize(2 * length);
#else
record->m_continuous.reserve(2 * length);
#endif
for (int32_t i = 0; i < length; i++) {
id = readInt64();
double val = readDouble();
if (!std::isnan(val)) {
record->m_continuous[id] = val;
}
if (isLabel(id, code)) {
record->addLabel(code, val);
} else if (isWeight(id, code)) {
record->addWeight(code, val);
}
}
}
void DataRecordReader::readDiscrete(
const int feature_type,
DataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "value_type");
int32_t length = readInt32();
int64_t id;
#ifdef USE_DENSE_HASH
record->m_discrete.resize(2 * length);
#else
record->m_discrete.reserve(2 * length);
#endif
for (int32_t i = 0; i < length; i++) {
id = readInt64();
record->m_discrete[id] = readInt64();
}
}
void DataRecordReader::readString(
const int feature_type,
DataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "value_type");
int32_t length = readInt32();
int64_t id;
#ifdef USE_DENSE_HASH
record->m_string.resize(2 * length);
#else
record->m_string.reserve(2 * length);
#endif
for (int32_t i = 0; i < length; i++) {
id = readInt64();
const uint8_t *begin = nullptr;
int32_t str_len = getRawBuffer<uint8_t>(&begin);
record->m_string[id] = bufferToString(str_len, begin);
}
}
void DataRecordReader::readSparseBinary(
const int feature_type,
DataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_SET, "value_type");
int32_t length = readInt32();
int64_t id, code;
#ifdef USE_DENSE_HASH
record->m_sparsebinary.resize(2 * length);
#else
record->m_sparsebinary.reserve(2 * length);
#endif
for (int32_t i = 0; i < length; i++) {
id = readInt64();
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "set:key_type");
int32_t set_length = readInt32();
if (keepKey(id, code)) {
record->m_sparsebinary[id].reserve(set_length);
for (int32_t j = 0; j < set_length; j++) {
const uint8_t *begin = nullptr;
int32_t str_len = getRawBuffer<uint8_t>(&begin);
record->m_sparsebinary[id].push_back(bufferToString(str_len, begin));
}
} else {
for (int32_t j = 0; j < set_length; j++) {
int32_t str_len = readInt32();
skipLength(str_len);
}
}
}
}
void DataRecordReader::readSparseContinuous(
const int feature_type,
DataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_MAP, "value_type");
int32_t length = readInt32();
int64_t id, code;
#ifdef USE_DENSE_HASH
record->m_sparsecontinuous.resize(2 * length);
#else
record->m_sparsecontinuous.reserve(2 * length);
#endif
for (int32_t i = 0; i < length; i++) {
id = readInt64();
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "map::key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_DOUBLE, "map::value_type");
int32_t map_length = readInt32();
if (keepKey(id, code)) {
record->m_sparsecontinuous[id].reserve(map_length);
for (int32_t j = 0; j < map_length; j++) {
const uint8_t *begin = nullptr;
int32_t str_len = getRawBuffer<uint8_t>(&begin);
double val = readDouble();
if (!std::isnan(val)) {
record->m_sparsecontinuous[id].push_back({bufferToString(str_len, begin), val});
}
}
} else {
for (int32_t j = 0; j < map_length; j++) {
int32_t str_len = readInt32();
skipLength(str_len);
skip<double>();
}
}
}
}
void DataRecordReader::readBlob(
const int feature_type,
DataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "value_type");
int32_t length = readInt32();
int64_t id, code;
for (int32_t i = 0; i < length; i++) {
id = readInt64();
if (keepKey(id, code)) {
const uint8_t *begin = nullptr;
int32_t blob_len = getRawBuffer<uint8_t>(&begin);
record->m_blob[id] = std::vector<uint8_t>(begin, begin + blob_len);
} else {
int32_t str_len = readInt32();
skipLength(str_len);
}
}
}
} // namespace twml

View File

@ -0,0 +1,162 @@
#include "internal/error.h"
#include "internal/thrift.h"
#include <map>
#include <twml/ThriftWriter.h>
#include <twml/DataRecordWriter.h>
#include <twml/io/IOError.h>
#include <unordered_set>
using namespace twml::io;
namespace twml {
void DataRecordWriter::writeBinary(twml::DataRecord &record) {
const DataRecord::BinaryFeatures bin_features = record.getBinary();
if (bin_features.size() > 0) {
m_thrift_writer.writeStructFieldHeader(TTYPE_SET, DR_BINARY);
m_thrift_writer.writeListHeader(TTYPE_I64, bin_features.size());
for (const auto &it : bin_features) {
m_thrift_writer.writeInt64(it);
}
}
}
void DataRecordWriter::writeContinuous(twml::DataRecord &record) {
const DataRecord::ContinuousFeatures cont_features = record.getContinuous();
if (cont_features.size() > 0) {
m_thrift_writer.writeStructFieldHeader(TTYPE_MAP, DR_CONTINUOUS);
m_thrift_writer.writeMapHeader(TTYPE_I64, TTYPE_DOUBLE, cont_features.size());
for (const auto &it : cont_features) {
m_thrift_writer.writeInt64(it.first);
m_thrift_writer.writeDouble(it.second);
}
}
}
void DataRecordWriter::writeDiscrete(twml::DataRecord &record) {
const DataRecord::DiscreteFeatures disc_features = record.getDiscrete();
if (disc_features.size() > 0) {
m_thrift_writer.writeStructFieldHeader(TTYPE_MAP, DR_DISCRETE);
m_thrift_writer.writeMapHeader(TTYPE_I64, TTYPE_I64, disc_features.size());
for (const auto &it : disc_features) {
m_thrift_writer.writeInt64(it.first);
m_thrift_writer.writeInt64(it.second);
}
}
}
void DataRecordWriter::writeString(twml::DataRecord &record) {
const DataRecord::StringFeatures str_features = record.getString();
if (str_features.size() > 0) {
m_thrift_writer.writeStructFieldHeader(TTYPE_MAP, DR_STRING);
m_thrift_writer.writeMapHeader(TTYPE_I64, TTYPE_STRING, str_features.size());
for (const auto &it : str_features) {
m_thrift_writer.writeInt64(it.first);
m_thrift_writer.writeString(it.second);
}
}
}
// convert from internal representation list<(i64, string)>
// to Thrift representation map<i64, set<string>>
void DataRecordWriter::writeSparseBinaryFeatures(twml::DataRecord &record) {
const DataRecord::SparseBinaryFeatures sp_bin_features = record.getSparseBinary();
// write map<i64, set<string>> as Thrift
if (sp_bin_features.size() > 0) {
m_thrift_writer.writeStructFieldHeader(TTYPE_MAP, DR_SPARSE_BINARY);
m_thrift_writer.writeMapHeader(TTYPE_I64, TTYPE_SET, sp_bin_features.size());
for (const auto &key_vals : sp_bin_features) {
m_thrift_writer.writeInt64(key_vals.first);
m_thrift_writer.writeListHeader(TTYPE_STRING, key_vals.second.size());
for (const auto &name : key_vals.second)
m_thrift_writer.writeString(name);
}
}
}
// convert from internal representation list<(i64, string, double)>
// to Thrift representation map<i64, map<string, double>>
void DataRecordWriter::writeSparseContinuousFeatures(twml::DataRecord &record) {
const DataRecord::SparseContinuousFeatures sp_cont_features = record.getSparseContinuous();
// write map<i64, map<string, double>> as Thrift
if (sp_cont_features.size() > 0) {
m_thrift_writer.writeStructFieldHeader(TTYPE_MAP, DR_SPARSE_CONTINUOUS);
m_thrift_writer.writeMapHeader(TTYPE_I64, TTYPE_MAP, sp_cont_features.size());
for (const auto &key_vals : sp_cont_features) {
m_thrift_writer.writeInt64(key_vals.first);
if (key_vals.second.size() == 0)
throw IOError(IOError::MALFORMED_MEMORY_RECORD);
m_thrift_writer.writeMapHeader(TTYPE_STRING, TTYPE_DOUBLE, key_vals.second.size());
for (const auto &map_str_double : key_vals.second) {
m_thrift_writer.writeString(map_str_double.first);
m_thrift_writer.writeDouble(map_str_double.second);
}
}
}
}
void DataRecordWriter::writeBlobFeatures(twml::DataRecord &record) {
const DataRecord::BlobFeatures blob_features = record.getBlob();
if (blob_features.size() > 0) {
m_thrift_writer.writeStructFieldHeader(TTYPE_MAP, DR_BLOB);
m_thrift_writer.writeMapHeader(TTYPE_I64, TTYPE_STRING, blob_features.size());
for (const auto &it : blob_features) {
m_thrift_writer.writeInt64(it.first);
std::vector<uint8_t> value = it.second;
m_thrift_writer.writeBinary(value.data(), value.size());
}
}
}
void DataRecordWriter::writeDenseTensors(twml::DataRecord &record) {
TensorRecord::RawTensors raw_tensors = record.getRawTensors();
if (raw_tensors.size() > 0) {
m_thrift_writer.writeStructFieldHeader(TTYPE_MAP, DR_GENERAL_TENSOR);
m_tensor_writer.write(record);
}
}
TWMLAPI uint32_t DataRecordWriter::getRecordsWritten() {
return m_records_written;
}
TWMLAPI uint64_t DataRecordWriter::write(twml::DataRecord &record) {
uint64_t bytes_written_before = m_thrift_writer.getBytesWritten();
writeBinary(record);
writeContinuous(record);
writeDiscrete(record);
writeString(record);
writeSparseBinaryFeatures(record);
writeSparseContinuousFeatures(record);
writeBlobFeatures(record);
writeDenseTensors(record);
// TODO add sparse tensor field
m_thrift_writer.writeStructStop();
m_records_written++;
return m_thrift_writer.getBytesWritten() - bytes_written_before;
}
} // namespace twml

View File

@ -0,0 +1,80 @@
#include "internal/thrift.h"
#include "internal/error.h"
#include <twml/HashedDataRecord.h>
#include <twml/HashedDataRecordReader.h>
#include <twml/Error.h>
#include <algorithm>
#include <cstring>
#include <cstdint>
namespace twml {
void HashedDataRecord::decode(HashedDataRecordReader &reader) {
uint8_t feature_type = reader.readByte();
while (feature_type != TTYPE_STOP) {
int16_t field_id = reader.readInt16();
switch (field_id) {
case DR_BINARY:
reader.readBinary(feature_type, this);
break;
case DR_CONTINUOUS:
reader.readContinuous(feature_type, this);
break;
case DR_DISCRETE:
reader.readDiscrete(feature_type, this);
break;
case DR_STRING:
reader.readString(feature_type, this);
break;
case DR_SPARSE_BINARY:
reader.readSparseBinary(feature_type, this);
break;
case DR_SPARSE_CONTINUOUS:
reader.readSparseContinuous(feature_type, this);
break;
case DR_BLOB:
reader.readBlob(feature_type, this);
break;
case DR_GENERAL_TENSOR:
reader.readTensor(feature_type, dynamic_cast<TensorRecord *>(this));
break;
case DR_SPARSE_TENSOR:
reader.readSparseTensor(feature_type, dynamic_cast<TensorRecord *>(this));
break;
default:
throw ThriftInvalidField(field_id, "HashedDataRecord::readThrift");
}
feature_type = reader.readByte();
}
}
void HashedDataRecord::addKey(int64_t key, int64_t transformed_key,
int64_t code, uint8_t type, double value) {
m_keys.push_back(key);
m_transformed_keys.push_back(transformed_key);
m_values.push_back(value);
m_codes.push_back(code);
m_types.push_back(type);
}
void HashedDataRecord::addLabel(int64_t id, double label) {
m_labels[id] = label;
}
void HashedDataRecord::addWeight(int64_t id, double val) {
m_weights[id] = val;
}
void HashedDataRecord::clear() {
std::fill(m_labels.begin(), m_labels.end(), std::nanf(""));
std::fill(m_weights.begin(), m_weights.end(), 0.0);
m_keys.clear();
m_transformed_keys.clear();
m_values.clear();
m_codes.clear();
m_types.clear();
}
} // namespace twml

View File

@ -0,0 +1,218 @@
#include "internal/thrift.h"
#include "internal/error.h"
#include <twml/HashedDataRecordReader.h>
#include <twml/utilities.h>
#include <twml/functions.h>
#include <cmath>
namespace twml {
bool HashedDataRecordReader::keepId(const int64_t &key, int64_t &code) {
auto it = m_keep_map->find(key);
if (it == m_keep_map->end()) return false;
code = it->second;
return true;
}
bool HashedDataRecordReader::isLabel(const int64_t &key, int64_t &code) {
if (m_labels_map == nullptr) return false;
auto it = m_labels_map->find(key);
if (it == m_labels_map->end()) return false;
code = it->second;
return true;
}
bool HashedDataRecordReader::isWeight(const int64_t &key, int64_t &code) {
if (m_weights_map == nullptr) return false;
auto it = m_weights_map->find(key);
if (it == m_weights_map->end()) return false;
code = it->second;
return true;
}
void HashedDataRecordReader::readBinary(
const int feature_type,
HashedDataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_SET, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
int32_t length = readInt32();
record->extendSize(length);
int64_t id, code;
for (int32_t i = 0; i < length; i++) {
id = readInt64();
if (keepId(id, code)) {
record->addKey(id, id, code, DR_BINARY);
} else if (isLabel(id, code)) {
record->addLabel(code);
}
}
}
void HashedDataRecordReader::readContinuous(
const int feature_type,
HashedDataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_DOUBLE, "value_type");
int32_t length = readInt32();
record->extendSize(length);
int64_t id, code;
for (int32_t i = 0; i < length; i++) {
id = readInt64();
if (keepId(id, code)) {
double value = readDouble();
if (!std::isnan(value)) {
record->addKey(id, id, code, DR_CONTINUOUS, value);
}
} else if (isLabel(id, code)) {
record->addLabel(code, readDouble());
} else if (isWeight(id, code)) {
record->addWeight(code, readDouble());
} else {
skip<double>();
}
}
}
void HashedDataRecordReader::readDiscrete(
const int feature_type,
HashedDataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "value_type");
int32_t length = readInt32();
record->extendSize(length);
int64_t id, code;
for (int32_t i = 0; i < length; i++) {
id = readInt64();
if (keepId(id, code)) {
int64_t transformed_key = mixDiscreteIdAndValue(id, readInt64());
record->addKey(id, transformed_key, code, DR_DISCRETE);
} else {
skip<int64_t>();
}
}
}
void HashedDataRecordReader::readString(
const int feature_type,
HashedDataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "value_type");
int32_t length = readInt32();
record->extendSize(length);
int64_t id, code;
for (int32_t i = 0; i < length; i++) {
id = readInt64();
if (keepId(id, code)) {
const uint8_t *begin = nullptr;
int32_t str_len = getRawBuffer<uint8_t>(&begin);
int64_t transformed_key = mixStringIdAndValue(id, str_len, begin);
record->addKey(id, transformed_key, code, DR_STRING);
} else {
int32_t str_len = readInt32();
skipLength(str_len);
}
}
}
void HashedDataRecordReader::readSparseBinary(
const int feature_type,
HashedDataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_SET, "value_type");
int32_t length = readInt32();
record->extendSize(length);
int64_t id, code;
for (int32_t i = 0; i < length; i++) {
id = readInt64();
if (keepId(id, code)) {
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "set:key_type");
int32_t set_length = readInt32();
for (int32_t j = 0; j < set_length; j++) {
const uint8_t *begin = nullptr;
int32_t str_len = getRawBuffer<uint8_t>(&begin);
int64_t transformed_key = mixStringIdAndValue(id, str_len, begin);
record->addKey(id, transformed_key, code, DR_SPARSE_BINARY);
}
} else {
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "set:key_type");
int32_t set_length = readInt32();
for (int32_t j = 0; j < set_length; j++) {
int32_t str_len = readInt32();
skipLength(str_len);
}
}
}
}
void HashedDataRecordReader::readSparseContinuous(
const int feature_type,
HashedDataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_MAP, "value_type");
int32_t length = readInt32();
record->extendSize(length);
int64_t id, code;
for (int32_t i = 0; i < length; i++) {
id = readInt64();
if (keepId(id, code)) {
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "map::key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_DOUBLE, "map::value_type");
int32_t map_length = readInt32();
for (int32_t j = 0; j < map_length; j++) {
const uint8_t *begin = nullptr;
int32_t str_len = getRawBuffer<uint8_t>(&begin);
int64_t transformed_key = 0;
switch(m_decode_mode) {
case DecodeMode::hash_fname_and_valname:
transformed_key = mixStringIdAndValue(id, str_len, begin);
break;
default:  // DecodeMode::hash_valname (0) is the default
twml_get_feature_id(&transformed_key, str_len, reinterpret_cast<const char *>(begin));
}
double value = readDouble();
if (!std::isnan(value)) {
record->addKey(id, transformed_key, code, DR_SPARSE_CONTINUOUS, value);
}
}
} else {
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "map::key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_DOUBLE, "map::value_type");
int32_t map_length = readInt32();
for (int32_t j = 0; j < map_length; j++) {
int32_t str_len = readInt32();
skipLength(str_len);
skip<double>();
}
}
}
}
void HashedDataRecordReader::readBlob(
const int feature_type,
HashedDataRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "value_type");
int32_t length = readInt32();
int64_t id;
for (int32_t i = 0; i < length; i++) {
// Skip BlobFeatures whether or not they are defined in the FeatureConfig
id = readInt64();
int32_t str_len = readInt32();
skipLength(str_len);
}
}
} // namespace twml

View File

@ -0,0 +1,380 @@
#include "internal/khash.h"
#include "internal/error.h"
#include <twml/defines.h>
#include <twml/Hashmap.h>
#include <cstdint>
namespace twml {
HashMap::HashMap() :
m_hashmap(nullptr) {
TWML_CHECK(twml_hashmap_create(&m_hashmap), "Failed to create HashMap");
}
HashMap::~HashMap() {
// Do not throw exceptions from the destructor
twml_hashmap_delete(m_hashmap);
}
void HashMap::clear() {
TWML_CHECK(twml_hashmap_clear(m_hashmap), "Failed to clear HashMap");
}
uint64_t HashMap::size() const {
uint64_t size;
TWML_CHECK(twml_hashmap_get_size(&size, m_hashmap), "Failed to get HashMap size");
return size;
}
int8_t HashMap::insert(const HashKey_t key) {
int8_t result;
TWML_CHECK(twml_hashmap_insert_key(&result, m_hashmap, key),
"Failed to insert key");
return result;
}
int8_t HashMap::insert(const HashKey_t key, const HashKey_t val) {
int8_t result;
TWML_CHECK(twml_hashmap_insert_key_and_value(&result, m_hashmap, key, val),
"Failed to insert key");
return result;
}
int8_t HashMap::get(HashVal_t &val, const HashKey_t key) const {
int8_t result;
TWML_CHECK(twml_hashmap_get_value(&result, &val, m_hashmap, key),
"Failed to insert key,value pair");
return result;
}
void HashMap::insert(Tensor &mask, const Tensor keys) {
TWML_CHECK(twml_hashmap_insert_keys(mask.getHandle(), m_hashmap, keys.getHandle()),
"Failed to insert keys tensor");
}
void HashMap::insert(Tensor &mask, const Tensor keys, const Tensor vals) {
TWML_CHECK(twml_hashmap_insert_keys_and_values(mask.getHandle(), m_hashmap,
keys.getHandle(), vals.getHandle()),
"Failed to insert keys,values tensor pair");
}
void HashMap::remove(const Tensor keys) {
TWML_CHECK(twml_hashmap_remove_keys(m_hashmap, keys.getHandle()),
"Failed to remove keys tensor");
}
void HashMap::get(Tensor &mask, Tensor &vals, const Tensor keys) const {
TWML_CHECK(twml_hashmap_get_values(mask.getHandle(), vals.getHandle(),
m_hashmap, keys.getHandle()),
"Failed to get values tensor");
}
void HashMap::getInplace(Tensor &mask, Tensor &keys_vals) const {
TWML_CHECK(twml_hashmap_get_values_inplace(mask.getHandle(),
keys_vals.getHandle(),
m_hashmap),
"Failed to get values tensor");
}
void HashMap::toTensors(Tensor &keys, Tensor &vals) const {
TWML_CHECK(twml_hashmap_to_tensors(keys.getHandle(),
vals.getHandle(),
m_hashmap),
"Failed to get keys,values tensors from HashMap");
}
} // namespace twml
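// --- Illustrative sketch (editor's addition, not part of this commit) ------
// Minimal use of the C++ wrapper above, using only the public API shown in
// this file. The TWML_HASHMAP_EXAMPLE guard is hypothetical, so the sketch is
// never compiled into the library.
#ifdef TWML_HASHMAP_EXAMPLE
static void example_hashmap_wrapper() {
twml::HashMap map;       // owns a khash table through the C API
map.insert(42, 7);       // insert key 42 -> value 7
twml::HashVal_t val = 0;
if (map.get(val, 42)) {  // returns a nonzero mask when the key is found
// val == 7 here
}
map.clear();             // drops all entries; size() becomes 0
}
#endif  // TWML_HASHMAP_EXAMPLE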
using twml::HashKey_t;
using twml::HashVal_t;
KHASH_MAP_INIT_INT64(HashKey_t, HashVal_t);
typedef khash_t(HashKey_t)* hash_map_t;
twml_err twml_hashmap_create(twml_hashmap *hashmap) {
hash_map_t *h = reinterpret_cast<hash_map_t *>(hashmap);
*h = kh_init(HashKey_t);
return TWML_ERR_NONE;
}
twml_err twml_hashmap_clear(const twml_hashmap hashmap) {
hash_map_t h = (hash_map_t)hashmap;
kh_clear(HashKey_t, h);
return TWML_ERR_NONE;
}
twml_err twml_hashmap_get_size(uint64_t *size, const twml_hashmap hashmap) {
hash_map_t h = (hash_map_t)hashmap;
*size = kh_size(h);
return TWML_ERR_NONE;
}
twml_err twml_hashmap_delete(const twml_hashmap hashmap) {
hash_map_t h = (hash_map_t)hashmap;
kh_destroy(HashKey_t, h);
return TWML_ERR_NONE;
}
// insert, remove, get single key / value
twml_err twml_hashmap_insert_key(int8_t *mask,
const twml_hashmap hashmap,
const HashKey_t key) {
hash_map_t h = (hash_map_t)hashmap;
int ret = 0;
khiter_t k = kh_put(HashKey_t, h, key, &ret);
*mask = ret >= 0;
if (*mask) {
HashVal_t v = kh_size(h);
kh_value(h, k) = v;
}
return TWML_ERR_NONE;
}
twml_err twml_hashmap_insert_key_and_value(int8_t *mask, twml_hashmap hashmap,
const HashKey_t key, const HashVal_t val) {
hash_map_t h = (hash_map_t)hashmap;
int ret = 0;
khiter_t k = kh_put(HashKey_t, h, key, &ret);
*mask = ret >= 0;
if (*mask) {
kh_value(h, k) = val;
}
return TWML_ERR_NONE;
}
twml_err twml_hashmap_remove_key(const twml_hashmap hashmap,
const HashKey_t key) {
hash_map_t h = (hash_map_t)hashmap;
khiter_t k = kh_get(HashKey_t, h, key);
if (k != kh_end(h)) {
kh_del(HashKey_t, h, k);
}
return TWML_ERR_NONE;
}
twml_err twml_hashmap_get_value(int8_t *mask, HashVal_t *val,
const twml_hashmap hashmap, const HashKey_t key) {
hash_map_t h = (hash_map_t)hashmap;
khiter_t k = kh_get(HashKey_t, h, key);
if (k == kh_end(h)) {
*mask = false;
} else {
*val = kh_value(h, k);
*mask = true;
}
return TWML_ERR_NONE;
}
// insert, get, remove tensors of keys / values
twml_err twml_hashmap_insert_keys(twml_tensor masks,
const twml_hashmap hashmap,
const twml_tensor keys) {
auto masks_tensor = twml::getTensor(masks);
auto keys_tensor = twml::getConstTensor(keys);
if (masks_tensor->getType() != TWML_TYPE_INT8) {
return TWML_ERR_TYPE;
}
if (keys_tensor->getType() != TWML_TYPE_INT64) {
return TWML_ERR_TYPE;
}
if (keys_tensor->getNumElements() != masks_tensor->getNumElements()) {
return TWML_ERR_SIZE;
}
int8_t *mptr = masks_tensor->getData<int8_t>();
const HashKey_t *kptr = keys_tensor->getData<HashKey_t>();
uint64_t num_elements = keys_tensor->getNumElements();
hash_map_t h = (hash_map_t)hashmap;
for (uint64_t i = 0; i < num_elements; i++) {
int ret = 0;
khiter_t k = kh_put(HashKey_t, h, kptr[i], &ret);
mptr[i] = ret >= 0;
if (mptr[i]) {
HashVal_t v = kh_size(h);
kh_value(h, k) = v;
}
}
return TWML_ERR_NONE;
}
twml_err twml_hashmap_insert_keys_and_values(twml_tensor masks,
twml_hashmap hashmap,
const twml_tensor keys,
const twml_tensor vals) {
auto masks_tensor = twml::getTensor(masks);
auto keys_tensor = twml::getConstTensor(keys);
auto vals_tensor = twml::getConstTensor(vals);
if (masks_tensor->getType() != TWML_TYPE_INT8) {
return TWML_ERR_TYPE;
}
if (keys_tensor->getType() != TWML_TYPE_INT64) {
return TWML_ERR_TYPE;
}
if (vals_tensor->getType() != TWML_TYPE_INT64) {
return TWML_ERR_TYPE;
}
if (keys_tensor->getNumElements() != vals_tensor->getNumElements() ||
keys_tensor->getNumElements() != masks_tensor->getNumElements()) {
return TWML_ERR_SIZE;
}
int8_t *mptr = masks_tensor->getData<int8_t>();
const HashKey_t *kptr = keys_tensor->getData<HashKey_t>();
const HashVal_t *vptr = vals_tensor->getData<HashVal_t>();
uint64_t num_elements = keys_tensor->getNumElements();
hash_map_t h = (hash_map_t)hashmap;
for (uint64_t i = 0; i < num_elements; i++) {
int ret = 0;
khiter_t k = kh_put(HashKey_t, h, kptr[i], &ret);
mptr[i] = ret >= 0;
if (mptr[i]) {
kh_value(h, k) = vptr[i];
}
}
return TWML_ERR_NONE;
}
twml_err twml_hashmap_remove_keys(const twml_hashmap hashmap,
const twml_tensor keys) {
auto keys_tensor = twml::getConstTensor(keys);
if (keys_tensor->getType() != TWML_TYPE_INT64) {
return TWML_ERR_TYPE;
}
const HashKey_t *kptr = keys_tensor->getData<HashKey_t>();
uint64_t num_elements = keys_tensor->getNumElements();
hash_map_t h = (hash_map_t)hashmap;
for (uint64_t i = 0; i < num_elements; i++) {
khiter_t k = kh_get(HashKey_t, h, kptr[i]);
if (k != kh_end(h)) {
kh_del(HashKey_t, h, k);  // kh_del takes the iterator, not the key
}
}
return TWML_ERR_NONE;
}
twml_err twml_hashmap_get_values(twml_tensor masks, twml_tensor vals,
const twml_hashmap hashmap, const twml_tensor keys) {
auto masks_tensor = twml::getTensor(masks);
auto vals_tensor = twml::getTensor(vals);
auto keys_tensor = twml::getConstTensor(keys);
if (masks_tensor->getType() != TWML_TYPE_INT8) {
return TWML_ERR_TYPE;
}
if (keys_tensor->getType() != TWML_TYPE_INT64) {
return TWML_ERR_TYPE;
}
if (vals_tensor->getType() != TWML_TYPE_INT64) {
return TWML_ERR_TYPE;
}
if (keys_tensor->getNumElements() != vals_tensor->getNumElements() ||
keys_tensor->getNumElements() != masks_tensor->getNumElements()) {
return TWML_ERR_SIZE;
}
int8_t *mptr = masks_tensor->getData<int8_t>();
HashVal_t *vptr = vals_tensor->getData<HashVal_t>();
const HashKey_t *kptr = keys_tensor->getData<HashKey_t>();
uint64_t num_elements = keys_tensor->getNumElements();
hash_map_t h = (hash_map_t)hashmap;
for (uint64_t i = 0; i < num_elements; i++) {
khiter_t k = kh_get(HashKey_t, h, kptr[i]);
if (k == kh_end(h)) {
mptr[i] = false;
} else {
mptr[i] = true;
vptr[i] = kh_value(h, k);
}
}
return TWML_ERR_NONE;
}
twml_err twml_hashmap_get_values_inplace(twml_tensor masks, twml_tensor keys_vals,
const twml_hashmap hashmap) {
auto masks_tensor = twml::getTensor(masks);
auto keys_tensor = twml::getTensor(keys_vals);
if (masks_tensor->getType() != TWML_TYPE_INT8) {
return TWML_ERR_TYPE;
}
if (keys_tensor->getType() != TWML_TYPE_INT64) {
return TWML_ERR_TYPE;
}
if (keys_tensor->getNumElements() != masks_tensor->getNumElements()) {
return TWML_ERR_SIZE;
}
int8_t *mptr = masks_tensor->getData<int8_t>();
HashKey_t *kptr = keys_tensor->getData<HashKey_t>();
uint64_t num_elements = keys_tensor->getNumElements();
hash_map_t h = (hash_map_t)hashmap;
for (uint64_t i = 0; i < num_elements; i++) {
khiter_t k = kh_get(HashKey_t, h, kptr[i]);
if (k == kh_end(h)) {
mptr[i] = false;
} else {
mptr[i] = true;
kptr[i] = kh_value(h, k);
}
}
return TWML_ERR_NONE;
}
twml_err twml_hashmap_to_tensors(twml_tensor keys, twml_tensor vals,
const twml_hashmap hashmap) {
hash_map_t h = (hash_map_t)hashmap;
const uint64_t size = kh_size(h);
auto keys_tensor = twml::getTensor(keys);
auto vals_tensor = twml::getTensor(vals);
if (keys_tensor->getType() != TWML_TYPE_INT64) {
return TWML_ERR_TYPE;
}
if (vals_tensor->getType() != TWML_TYPE_INT64) {
return TWML_ERR_TYPE;
}
if (size != keys_tensor->getNumElements() ||
size != vals_tensor->getNumElements()) {
return TWML_ERR_SIZE;
}
HashKey_t *kptr = keys_tensor->getData<HashKey_t>();
HashVal_t *vptr = vals_tensor->getData<HashVal_t>();
HashKey_t key;
HashVal_t val;
uint64_t i = 0;
kh_foreach(h, key, val, {
kptr[i] = key;
vptr[i] = val;
i++;
});
return TWML_ERR_NONE;
}
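// --- Illustrative sketch (editor's addition, not part of this commit) ------
// Round trip through the C API defined above. The TWML_HASHMAP_C_EXAMPLE
// guard is hypothetical, so the sketch never builds by default.
#ifdef TWML_HASHMAP_C_EXAMPLE
static void example_hashmap_c_api() {
twml_hashmap h = nullptr;
twml_hashmap_create(&h);
int8_t inserted = 0;
twml_hashmap_insert_key_and_value(&inserted, h, /*key=*/1, /*val=*/100);
HashVal_t out = 0;
int8_t found = 0;
twml_hashmap_get_value(&found, &out, h, /*key=*/1);  // found != 0, out == 100
twml_hashmap_delete(h);
}
#endif  // TWML_HASHMAP_C_EXAMPLE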

View File

@ -0,0 +1,191 @@
#include "internal/error.h"
#include <twml/Tensor.h>
#include <twml/Type.h>
#include <type_traits>
#include <algorithm>
#include <numeric>
namespace twml {
using std::vector;
Tensor::Tensor(void *data, int ndims, const uint64_t *dims, const uint64_t *strides, twml_type type) :
m_type(type), m_data(data),
m_dims(dims, dims + ndims),
m_strides(strides, strides + ndims) {
}
Tensor::Tensor(void *data,
const vector<uint64_t> &dims,
const vector<uint64_t> &strides,
twml_type type) :
m_type(type), m_data(data),
m_dims(dims.begin(), dims.end()),
m_strides(strides.begin(), strides.end()) {
if (dims.size() != strides.size()) {
throw twml::Error(TWML_ERR_SIZE, "The number size of dims and strides don't match");
}
}
int Tensor::getNumDims() const {
return static_cast<int>(m_dims.size());
}
uint64_t Tensor::getDim(int id) const {
if (id < 0 || id >= this->getNumDims()) {
throw twml::Error(TWML_ERR_SIZE, "Requested dimension exceeds tensor dimension");
}
return m_dims[id];
}
uint64_t Tensor::getStride(int id) const {
if (id < 0 || id >= this->getNumDims()) {
throw twml::Error(TWML_ERR_SIZE, "Requested dimension exceeds tensor dimension");
}
return m_strides[id];
}
uint64_t Tensor::getNumElements() const {
return std::accumulate(m_dims.begin(), m_dims.end(), static_cast<uint64_t>(1), std::multiplies<uint64_t>());
}
twml_type Tensor::getType() const {
return m_type;
}
twml_tensor Tensor::getHandle() {
return reinterpret_cast<twml_tensor>(this);
}
const twml_tensor Tensor::getHandle() const {
return reinterpret_cast<const twml_tensor>(const_cast<Tensor *>(this));
}
const Tensor *getConstTensor(const twml_tensor t) {
return reinterpret_cast<const Tensor *>(t);
}
Tensor *getTensor(twml_tensor t) {
return reinterpret_cast<Tensor *>(t);
}
#define INSTANTIATE(T) \
template<> TWMLAPI T *Tensor::getData() { \
if ((twml_type)Type<T>::type != m_type) { \
throw twml::Error(TWML_ERR_TYPE, \
"Requested invalid type"); \
} \
return reinterpret_cast<T *>(m_data); \
} \
template<> TWMLAPI const T *Tensor::getData() const { \
if ((twml_type)Type<T>::type != m_type) { \
throw twml::Error(TWML_ERR_TYPE, \
"Requested invalid type"); \
} \
return (const T *)m_data; \
}
INSTANTIATE(int32_t)
INSTANTIATE(int64_t)
INSTANTIATE(int8_t)
INSTANTIATE(uint8_t)
INSTANTIATE(float)
INSTANTIATE(double)
INSTANTIATE(bool)
INSTANTIATE(std::string)
// This is used for the C api. No checks needed for void.
template<> TWMLAPI void *Tensor::getData() {
return m_data;
}
template<> TWMLAPI const void *Tensor::getData() const {
return (const void *)m_data;
}
std::string getTypeName(twml_type type) {
switch (type) {
case TWML_TYPE_FLOAT32 : return "float32";
case TWML_TYPE_FLOAT64 : return "float64";
case TWML_TYPE_INT32 : return "int32";
case TWML_TYPE_INT64 : return "int64";
case TWML_TYPE_INT8 : return "int8";
case TWML_TYPE_UINT8 : return "uint8";
case TWML_TYPE_BOOL : return "bool";
case TWML_TYPE_STRING : return "string";
case TWML_TYPE_UNKNOWN : return "Unknown type";
}
throw twml::Error(TWML_ERR_TYPE, "Uknown type");
}
uint64_t getSizeOf(twml_type dtype) {
switch (dtype) {
case TWML_TYPE_FLOAT : return 4;
case TWML_TYPE_DOUBLE : return 8;
case TWML_TYPE_INT64 : return 8;
case TWML_TYPE_INT32 : return 4;
case TWML_TYPE_UINT8 : return 1;
case TWML_TYPE_BOOL : return 1;
case TWML_TYPE_INT8 : return 1;
case TWML_TYPE_STRING :
throw twml::Error(TWML_ERR_THRIFT, "getSizeOf not supported for strings");
case TWML_TYPE_UNKNOWN:
throw twml::Error(TWML_ERR_THRIFT, "Can't get size of unknown types");
}
throw twml::Error(TWML_ERR_THRIFT, "Invalid twml_type");
}
} // namespace twml
twml_err twml_tensor_create(twml_tensor *t, void *data, int ndims, uint64_t *dims,
uint64_t *strides, twml_type type) {
HANDLE_EXCEPTIONS(
twml::Tensor *res = new twml::Tensor(data, ndims, dims, strides, type);
*t = reinterpret_cast<twml_tensor>(res););
return TWML_ERR_NONE;
}
twml_err twml_tensor_delete(const twml_tensor t) {
HANDLE_EXCEPTIONS(
delete twml::getConstTensor(t););
return TWML_ERR_NONE;
}
twml_err twml_tensor_get_type(twml_type *type, const twml_tensor t) {
HANDLE_EXCEPTIONS(
*type = twml::getConstTensor(t)->getType(););
return TWML_ERR_NONE;
}
twml_err twml_tensor_get_data(void **data, const twml_tensor t) {
HANDLE_EXCEPTIONS(
*data = twml::getTensor(t)->getData<void>(););
return TWML_ERR_NONE;
}
twml_err twml_tensor_get_dim(uint64_t *dim, const twml_tensor t, int id) {
HANDLE_EXCEPTIONS(
const twml::Tensor *tensor = twml::getConstTensor(t);
*dim = tensor->getDim(id););
return TWML_ERR_NONE;
}
twml_err twml_tensor_get_stride(uint64_t *stride, const twml_tensor t, int id) {
HANDLE_EXCEPTIONS(
const twml::Tensor *tensor = twml::getConstTensor(t);
*stride = tensor->getStride(id););
return TWML_ERR_NONE;
}
twml_err twml_tensor_get_num_dims(int *ndim, const twml_tensor t) {
HANDLE_EXCEPTIONS(
const twml::Tensor *tensor = twml::getConstTensor(t);
*ndim = tensor->getNumDims(););
return TWML_ERR_NONE;
}
twml_err twml_tensor_get_num_elements(uint64_t *nelements, const twml_tensor t) {
HANDLE_EXCEPTIONS(
const twml::Tensor *tensor = twml::getConstTensor(t);
*nelements = tensor->getNumElements(););
return TWML_ERR_NONE;
}
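// --- Illustrative sketch (editor's addition, not part of this commit) ------
// A twml::Tensor is a non-owning view over caller-managed memory; this wraps
// a row-major 2x3 float buffer and reads it back through the typed accessor
// instantiated above. The TWML_TENSOR_EXAMPLE guard is hypothetical.
#ifdef TWML_TENSOR_EXAMPLE
static void example_tensor_view() {
float data[6] = {0.f, 1.f, 2.f, 3.f, 4.f, 5.f};
const uint64_t dims[2] = {2, 3};     // 2 rows, 3 columns
const uint64_t strides[2] = {3, 1};  // row-major strides
twml::Tensor t(data, 2, dims, strides, TWML_TYPE_FLOAT);
// t.getNumElements() == 6; getData<float>() type-checks against m_type.
float first = t.getData<float>()[0];
(void)first;
}
#endif  // TWML_TENSOR_EXAMPLE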

View File

@ -0,0 +1,323 @@
#include "internal/thrift.h"
#include "internal/error.h"
#include <string>
#include <twml/TensorRecordReader.h>
#include <twml/RawTensor.h>
namespace twml {
template<typename T> struct TensorTraits;
#define INSTANTIATE(TYPE, THRIFT_TYPE, TWML_TYPE) \
template<> struct TensorTraits<TYPE> { \
static const TTYPES ThriftType = THRIFT_TYPE; \
static const twml_type TwmlType = TWML_TYPE; \
};
INSTANTIATE(int64_t, TTYPE_I64, TWML_TYPE_INT64)
INSTANTIATE(int32_t, TTYPE_I32, TWML_TYPE_INT32)
INSTANTIATE(double, TTYPE_DOUBLE, TWML_TYPE_DOUBLE)
INSTANTIATE(bool, TTYPE_BOOL, TWML_TYPE_BOOL)
static
std::vector<uint64_t> calcStrides(const std::vector<uint64_t> &shape) {
int ndims = static_cast<int>(shape.size());
std::vector<uint64_t> strides(ndims);
uint64_t stride = 1;
for (int i = ndims-1; i >= 0; i--) {
strides[i] = stride;
stride *= shape[i];
}
return strides;
}
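// Example: a row-major shape {2, 3, 4} yields strides {12, 4, 1}; each
// dimension's stride is the product of the dimensions to its right.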
static twml_type getTwmlType(int dtype) {
// Convert tensor.thrift enum to twml enum
switch (dtype) {
case DATA_TYPE_FLOAT:
return TWML_TYPE_FLOAT;
case DATA_TYPE_DOUBLE:
return TWML_TYPE_DOUBLE;
case DATA_TYPE_INT64:
return TWML_TYPE_INT64;
case DATA_TYPE_INT32:
return TWML_TYPE_INT32;
case DATA_TYPE_UINT8:
return TWML_TYPE_UINT8;
case DATA_TYPE_STRING:
return TWML_TYPE_STRING;
case DATA_TYPE_BOOL:
return TWML_TYPE_BOOL;
}
return TWML_TYPE_UNKNOWN;
}
std::vector<uint64_t> TensorRecordReader::readShape() {
int32_t length = readInt32();
std::vector<uint64_t> shape;
shape.reserve(length);
for (int32_t i = 0; i < length; i++) {
shape.push_back(static_cast<uint64_t>(readInt64()));
}
return shape;
}
template<typename T>
RawTensor TensorRecordReader::readTypedTensor() {
std::vector<uint64_t> shape;
int32_t length = 0;
const uint8_t *data = nullptr;
uint64_t raw_length = 0;
uint8_t field_type = TTYPE_STOP;
while ((field_type = readByte()) != TTYPE_STOP) {
int16_t field_id = readInt16();
switch (field_id) {
case 1:
CHECK_THRIFT_TYPE(field_type, TTYPE_LIST, "data");
CHECK_THRIFT_TYPE(readByte(), TensorTraits<T>::ThriftType, "data_type");
length = getRawBuffer<T>(&data);
raw_length = length * sizeof(T);
break;
case 2:
CHECK_THRIFT_TYPE(field_type, TTYPE_LIST, "shape");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "shape_type");
shape = readShape();
break;
default:
throw ThriftInvalidField(field_id, "TensorRecordReader::readTypedTensor");
}
}
// data is required
if (data == nullptr) {
throw twml::Error(TWML_ERR_THRIFT, "data field not found for TypedTensor");
}
// shape is optional
if (shape.size() == 0) {
shape.push_back((uint64_t)length);
}
// TODO: Try avoiding stride calculation
std::vector<uint64_t> strides = calcStrides(shape);
// FIXME: Try to use const void * in Tensors.
return RawTensor(const_cast<void *>(static_cast<const void *>(data)),
shape, strides, (twml_type)TensorTraits<T>::TwmlType, true, raw_length);
}
RawTensor TensorRecordReader::readRawTypedTensor() {
std::vector<uint64_t> shape;
const uint8_t *data = nullptr;
twml_type type = TWML_TYPE_UNKNOWN;
uint64_t raw_length = 0;
uint8_t field_type = TTYPE_STOP;
while ((field_type = readByte()) != TTYPE_STOP) {
int16_t field_id = readInt16();
switch (field_id) {
case 1:
CHECK_THRIFT_TYPE(field_type, TTYPE_I32, "DataType");
type = getTwmlType(readInt32());
break;
case 2:
CHECK_THRIFT_TYPE(field_type, TTYPE_STRING, "content");
raw_length = getRawBuffer<uint8_t>(&data);
break;
case 3:
CHECK_THRIFT_TYPE(field_type, TTYPE_LIST, "shape");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "shape_type");
shape = readShape();
break;
default:
throw ThriftInvalidField(field_id, "TensorRecordReader::readRawTypedTensor");
}
}
// data type is required
if (type == TWML_TYPE_UNKNOWN) {
throw twml::Error(TWML_ERR_THRIFT, "DataType is a required field for RawTypedTensor");
}
// data is required
if (data == nullptr) {
throw twml::Error(TWML_ERR_THRIFT, "content is a required field for RawTypedTensor");
}
// shape is optional in the thrift file, but it is really required for string types.
if (shape.size() == 0) {
if (type == TWML_TYPE_STRING) {
throw twml::Error(TWML_ERR_THRIFT, "shape required for string types in RawTypedTensor");
}
shape.push_back((uint64_t)(raw_length / getSizeOf(type)));
}
// TODO: Try avoiding stride calculation
std::vector<uint64_t> strides = calcStrides(shape);
// FIXME: Try to use const void * data inside Tensors.
return RawTensor(const_cast<void *>(static_cast<const void *>(data)),
shape, strides, type, false, raw_length);
}
RawTensor TensorRecordReader::readStringTensor() {
std::vector<uint64_t> shape;
int32_t length = 0;
const uint8_t *data = nullptr;
uint64_t raw_length = 0;
uint8_t field_type = TTYPE_STOP;
const uint8_t *dummy = nullptr;
while ((field_type = readByte()) != TTYPE_STOP) {
int16_t field_id = readInt16();
switch (field_id) {
case 1:
CHECK_THRIFT_TYPE(field_type, TTYPE_LIST, "data");
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRING, "data_type");
length = readInt32();
// Store the current location of the byte stream.
// Use this to decode the strings at a later point.
data = getBuffer();
for (int32_t i = 0; i < length; i++) {
// Skip reading the strings
getRawBuffer<uint8_t>(&dummy);
}
raw_length = length;
break;
case 2:
CHECK_THRIFT_TYPE(field_type, TTYPE_LIST, "shape");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "shape_type");
shape = readShape();
break;
default:
throw ThriftInvalidField(field_id, "TensorRecordReader::readTypedTensor");
}
}
// data is required
if (data == nullptr) {
throw twml::Error(TWML_ERR_THRIFT, "data field not found for TypedTensor");
}
// shape is optional
if (shape.size() == 0) {
shape.push_back((uint64_t)length);
}
// TODO: Try avoiding stride calculation
std::vector<uint64_t> strides = calcStrides(shape);
// FIXME: Try to use const void * in Tensors.
return RawTensor(const_cast<void *>(static_cast<const void *>(data)),
shape, strides, TWML_TYPE_UINT8, false, raw_length);
}
RawTensor TensorRecordReader::readGeneralTensor() {
// No loop is required because GeneralTensor is a union: it contains exactly
// one field, and all of the fields are structs.
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRUCT, "type");
int16_t field_id = readInt16();
RawTensor output;
switch (field_id) {
case GT_RAW:
output = readRawTypedTensor();
break;
case GT_STRING:
output = readStringTensor();
break;
case GT_INT32:
output = readTypedTensor<int32_t>();
break;
case GT_INT64:
output = readTypedTensor<int64_t>();
break;
case GT_FLOAT:
case GT_DOUBLE:
// Store both FloatTensor and DoubleTensor as double tensor as both are list of doubles.
output = readTypedTensor<double>();
break;
case GT_BOOL:
output = readTypedTensor<bool>();
break;
default:
throw ThriftInvalidField(field_id, "TensorRecordReader::readGeneralTensor()");
}
CHECK_THRIFT_TYPE(readByte(), TTYPE_STOP, "stop");
return output;
}
RawSparseTensor TensorRecordReader::readCOOSparseTensor() {
std::vector<uint64_t> shape;
uint8_t field_type = TTYPE_STOP;
RawTensor indices, values;
while ((field_type = readByte()) != TTYPE_STOP) {
int16_t field_id = readInt16();
switch (field_id) {
case 1:
CHECK_THRIFT_TYPE(field_type, TTYPE_LIST, "shape");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "shape_type");
shape = readShape();
break;
case 2:
indices = readTypedTensor<int64_t>();
break;
case 3:
values = readGeneralTensor();
break;
default:
throw twml::Error(TWML_ERR_THRIFT, "Invalid field when deocidng COOSparseTensor");
}
}
return RawSparseTensor(indices, values, shape);
}
void TensorRecordReader::readTensor(const int feature_type, TensorRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRUCT, "value_type");
int32_t length = readInt32();
for (int32_t i = 0; i < length; i++) {
int64_t id = readInt64();
record->m_tensors.emplace(id, readGeneralTensor());
}
}
void TensorRecordReader::readSparseTensor(const int feature_type, TensorRecord *record) {
CHECK_THRIFT_TYPE(feature_type, TTYPE_MAP, "type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_I64, "key_type");
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRUCT, "value_type");
int32_t length = readInt32();
for (int32_t i = 0; i < length; i++) {
int64_t id = readInt64();
// No loop is required because SparseTensor is a union: it contains exactly
// one field, and all of the fields are structs.
CHECK_THRIFT_TYPE(readByte(), TTYPE_STRUCT, "field");
int16_t field_id = readInt16();
RawSparseTensor output;
// Only COOSparseTensor is supported.
switch (field_id) {
case SP_COO:
output = readCOOSparseTensor();
break;
default:
throw ThriftInvalidField(field_id, "TensorRecordReader::readSparseTensor()");
}
// Read the last byte of the struct.
CHECK_THRIFT_TYPE(readByte(), TTYPE_STOP, "stop");
// Add to the map.
record->m_sparse_tensors.emplace(id, output);
}
}
} // namespace twml

View File

@ -0,0 +1,162 @@
#include "internal/error.h"
#include "internal/thrift.h"
#include <map>
#include <twml/ThriftWriter.h>
#include <twml/TensorRecordWriter.h>
#include <twml/io/IOError.h>
using namespace twml::io;
namespace twml {
static int32_t getRawThriftType(twml_type dtype) {
// convert twml enum to tensor.thrift enum
switch (dtype) {
case TWML_TYPE_FLOAT:
return DATA_TYPE_FLOAT;
case TWML_TYPE_DOUBLE:
return DATA_TYPE_DOUBLE;
case TWML_TYPE_INT64:
return DATA_TYPE_INT64;
case TWML_TYPE_INT32:
return DATA_TYPE_INT32;
case TWML_TYPE_UINT8:
return DATA_TYPE_UINT8;
case TWML_TYPE_STRING:
return DATA_TYPE_STRING;
case TWML_TYPE_BOOL:
return DATA_TYPE_BOOL;
default:
throw IOError(IOError::UNSUPPORTED_OUTPUT_TYPE);
}
}
void TensorRecordWriter::writeTensor(const RawTensor &tensor) {
if (tensor.getType() == TWML_TYPE_INT32) {
m_thrift_writer.writeStructFieldHeader(TTYPE_STRUCT, GT_INT32);
m_thrift_writer.writeStructFieldHeader(TTYPE_LIST, 1);
m_thrift_writer.writeListHeader(TTYPE_I32, tensor.getNumElements());
const int32_t *data = tensor.getData<int32_t>();
for (uint64_t i = 0; i < tensor.getNumElements(); i++)
m_thrift_writer.writeInt32(data[i]);
} else if (tensor.getType() == TWML_TYPE_INT64) {
m_thrift_writer.writeStructFieldHeader(TTYPE_STRUCT, GT_INT64);
m_thrift_writer.writeStructFieldHeader(TTYPE_LIST, 1);
m_thrift_writer.writeListHeader(TTYPE_I64, tensor.getNumElements());
const int64_t *data = tensor.getData<int64_t>();
for (uint64_t i = 0; i < tensor.getNumElements(); i++)
m_thrift_writer.writeInt64(data[i]);
} else if (tensor.getType() == TWML_TYPE_FLOAT) {
m_thrift_writer.writeStructFieldHeader(TTYPE_STRUCT, GT_FLOAT);
m_thrift_writer.writeStructFieldHeader(TTYPE_LIST, 1);
m_thrift_writer.writeListHeader(TTYPE_DOUBLE, tensor.getNumElements());
const float *data = tensor.getData<float>();
for (uint64_t i = 0; i < tensor.getNumElements(); i++)
m_thrift_writer.writeDouble(static_cast<double>(data[i]));
} else if (tensor.getType() == TWML_TYPE_DOUBLE) {
m_thrift_writer.writeStructFieldHeader(TTYPE_STRUCT, GT_DOUBLE);
m_thrift_writer.writeStructFieldHeader(TTYPE_LIST, 1);
m_thrift_writer.writeListHeader(TTYPE_DOUBLE, tensor.getNumElements());
const double *data = tensor.getData<double>();
for (uint64_t i = 0; i < tensor.getNumElements(); i++)
m_thrift_writer.writeDouble(data[i]);
} else if (tensor.getType() == TWML_TYPE_STRING) {
m_thrift_writer.writeStructFieldHeader(TTYPE_STRUCT, GT_STRING);
m_thrift_writer.writeStructFieldHeader(TTYPE_LIST, 1);
m_thrift_writer.writeListHeader(TTYPE_STRING, tensor.getNumElements());
const std::string *data = tensor.getData<std::string>();
for (uint64_t i = 0; i < tensor.getNumElements(); i++)
m_thrift_writer.writeString(data[i]);
} else if (tensor.getType() == TWML_TYPE_BOOL) {
m_thrift_writer.writeStructFieldHeader(TTYPE_STRUCT, GT_BOOL);
m_thrift_writer.writeStructFieldHeader(TTYPE_LIST, 1);
m_thrift_writer.writeListHeader(TTYPE_BOOL, tensor.getNumElements());
const bool *data = tensor.getData<bool>();
for (uint64_t i = 0; i < tensor.getNumElements(); i++)
m_thrift_writer.writeBool(data[i]);
} else {
throw IOError(IOError::UNSUPPORTED_OUTPUT_TYPE);
}
// write tensor shape field
m_thrift_writer.writeStructFieldHeader(TTYPE_LIST, 2);
m_thrift_writer.writeListHeader(TTYPE_I64, tensor.getNumDims());
for (int i = 0; i < tensor.getNumDims(); i++)
m_thrift_writer.writeInt64(tensor.getDim(i));
m_thrift_writer.writeStructStop();
m_thrift_writer.writeStructStop();
}
void TensorRecordWriter::writeRawTensor(const RawTensor &tensor) {
m_thrift_writer.writeStructFieldHeader(TTYPE_STRUCT, GT_RAW);
// dataType field
m_thrift_writer.writeStructFieldHeader(TTYPE_I32, 1);
m_thrift_writer.writeInt32(getRawThriftType(tensor.getType()));
// content field
uint64_t type_size = getSizeOf(tensor.getType());
m_thrift_writer.writeStructFieldHeader(TTYPE_STRING, 2);
const uint8_t *data = reinterpret_cast<const uint8_t *>(tensor.getData<void>());
m_thrift_writer.writeBinary(data, tensor.getNumElements() * type_size);
// shape field
m_thrift_writer.writeStructFieldHeader(TTYPE_LIST, 3);
m_thrift_writer.writeListHeader(TTYPE_I64, tensor.getNumDims());
for (int i = 0; i < tensor.getNumDims(); i++)
m_thrift_writer.writeInt64(tensor.getDim(i));
m_thrift_writer.writeStructStop();
m_thrift_writer.writeStructStop();
}
TWMLAPI uint32_t TensorRecordWriter::getRecordsWritten() {
return m_records_written;
}
// Caller (usually DataRecordWriter) must precede with struct header field
// like thrift_writer.writeStructFieldHeader(TTYPE_MAP, DR_GENERAL_TENSOR)
TWMLAPI uint64_t TensorRecordWriter::write(twml::TensorRecord &record) {
uint64_t bytes_written_before = m_thrift_writer.getBytesWritten();
m_thrift_writer.writeMapHeader(TTYPE_I64, TTYPE_STRUCT, record.getRawTensors().size());
for (const auto &id_tensor_pairs : record.getRawTensors()) {  // avoid copying each (id, tensor) pair
m_thrift_writer.writeInt64(id_tensor_pairs.first);
// all tensors written as RawTensor Thrift except for StringTensors
// this avoids the overhead of converting little endian to big endian
if (id_tensor_pairs.second.getType() == TWML_TYPE_STRING)
writeTensor(id_tensor_pairs.second);
else
writeRawTensor(id_tensor_pairs.second);
}
m_records_written++;
return m_thrift_writer.getBytesWritten() - bytes_written_before;
}
} // namespace twml

View File

@ -0,0 +1,33 @@
#include "internal/endianutils.h"
#include <twml/ThriftReader.h>
#include <twml/Error.h>
#include <cstring>
namespace twml {
uint8_t ThriftReader::readByte() {
return readDirect<uint8_t>();
}
int16_t ThriftReader::readInt16() {
return betoh16(readDirect<int16_t>());
}
int32_t ThriftReader::readInt32() {
return betoh32(readDirect<int32_t>());
}
int64_t ThriftReader::readInt64() {
return betoh64(readDirect<int64_t>());
}
double ThriftReader::readDouble() {
// Thrift transports doubles as big-endian int64 bit patterns; memcpy the
// decoded bits instead of writing through a reinterpret_cast'ed pointer,
// which would violate strict aliasing (mirrors ThriftWriter::writeDouble).
double val;
int64_t raw = readInt64();
std::memcpy(&val, &raw, sizeof(val));
return val;
}
} // namespace twml

View File

@ -0,0 +1,91 @@
#include "internal/endianutils.h"
#include "internal/error.h"
#include "internal/thrift.h"
#include <twml/ThriftWriter.h>
#include <twml/Error.h>
#include <twml/io/IOError.h>
#include <cstring>
using namespace twml::io;
namespace twml {
template <typename T> inline
uint64_t ThriftWriter::write(T val) {
if (!m_dry_run) {
if (m_bytes_written + sizeof(T) > m_buffer_size)
throw IOError(IOError::DESTINATION_LARGER_THAN_CAPACITY);
memcpy(m_buffer, &val, sizeof(T));
m_buffer += sizeof(T);
}
m_bytes_written += sizeof(T);
return sizeof(T);
}
TWMLAPI uint64_t ThriftWriter::getBytesWritten() {
return m_bytes_written;
}
TWMLAPI uint64_t ThriftWriter::writeStructFieldHeader(int8_t field_type, int16_t field_id) {
return writeInt8(field_type) + writeInt16(field_id);
}
TWMLAPI uint64_t ThriftWriter::writeStructStop() {
return writeInt8(static_cast<int8_t>(TTYPE_STOP));
}
TWMLAPI uint64_t ThriftWriter::writeListHeader(int8_t element_type, int32_t num_elems) {
return writeInt8(element_type) + writeInt32(num_elems);
}
TWMLAPI uint64_t ThriftWriter::writeMapHeader(int8_t key_type, int8_t val_type, int32_t num_elems) {
return writeInt8(key_type) + writeInt8(val_type) + writeInt32(num_elems);
}
TWMLAPI uint64_t ThriftWriter::writeDouble(double val) {
int64_t bin_value;
memcpy(&bin_value, &val, sizeof(int64_t));
return writeInt64(bin_value);
}
TWMLAPI uint64_t ThriftWriter::writeInt8(int8_t val) {
return write(val);
}
TWMLAPI uint64_t ThriftWriter::writeInt16(int16_t val) {
return write(betoh16(val));
}
TWMLAPI uint64_t ThriftWriter::writeInt32(int32_t val) {
return write(betoh32(val));
}
TWMLAPI uint64_t ThriftWriter::writeInt64(int64_t val) {
return write(betoh64(val));
}
TWMLAPI uint64_t ThriftWriter::writeBinary(const uint8_t *bytes, int32_t num_bytes) {
writeInt32(num_bytes);
if (!m_dry_run) {
if (m_bytes_written + num_bytes > m_buffer_size)
throw IOError(IOError::DESTINATION_LARGER_THAN_CAPACITY);
memcpy(m_buffer, bytes, num_bytes);
m_buffer += num_bytes;
}
m_bytes_written += num_bytes;
return 4 + num_bytes;
}
TWMLAPI uint64_t ThriftWriter::writeString(std::string str) {
return writeBinary(reinterpret_cast<const uint8_t *>(str.data()), str.length());
}
TWMLAPI uint64_t ThriftWriter::writeBool(bool val) {
return write(val);
}
} // namespace twml
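// --- Illustrative sketch (editor's addition, not part of this commit) ------
// The m_dry_run flag above enables a two-pass pattern: size a record with a
// dry-run writer, then write for real. The constructor signature used here is
// an assumption (it is declared in ThriftWriter.h, which is not shown here).
#ifdef TWML_THRIFT_WRITER_EXAMPLE
static uint64_t example_write_map(uint8_t *buffer, uint64_t buffer_size) {
twml::ThriftWriter writer(buffer, buffer_size, /*dry_run=*/false);  // assumed ctor
// Serialize a one-entry map<i64, double>, the layout consumed by
// HashedDataRecordReader::readContinuous.
writer.writeMapHeader(TTYPE_I64, TTYPE_DOUBLE, 1);
writer.writeInt64(42);    // feature id, written big-endian
writer.writeDouble(0.5);  // feature value, written as an int64 bit pattern
return writer.getBytesWritten();  // 1 + 1 + 4 + 8 + 8 = 22 bytes
}
#endif  // TWML_THRIFT_WRITER_EXAMPLE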

View File

@ -0,0 +1,167 @@
#include "internal/interpolate.h"
#include "internal/error.h"
#include <twml/discretizer_impl.h>
#include <twml/optim.h>
namespace twml {
// it is assumed that start_compute and end_compute are valid
template<typename T>
void discretizerInfer(Tensor &output_keys,
Tensor &output_vals,
const Tensor &input_ids,
const Tensor &input_vals,
const Tensor &bin_ids,
const Tensor &bin_vals,
const Tensor &feature_offsets,
int output_bits,
const Map<int64_t, int64_t> &ID_to_index,
int64_t start_compute,
int64_t end_compute,
int64_t output_start) {
auto out_keysData = output_keys.getData<int64_t>();
auto out_valsData = output_vals.getData<T>();
uint64_t out_keysStride = output_keys.getStride(0);
uint64_t out_valsStride = output_vals.getStride(0);
auto in_idsData = input_ids.getData<int64_t>();
auto in_valsData = input_vals.getData<T>();
uint64_t in_idsStride = input_ids.getStride(0);
uint64_t in_valsStride = input_vals.getStride(0);
auto xsData = bin_vals.getData<T>();
auto ysData = bin_ids.getData<int64_t>();
uint64_t xsStride = bin_vals.getStride(0);
uint64_t ysStride = bin_ids.getStride(0);
auto offsetData = feature_offsets.getData<int64_t>();
uint64_t total_bins = bin_ids.getNumElements();
uint64_t fsize = feature_offsets.getNumElements();
uint64_t output_size = (1 << output_bits);
for (int64_t i = start_compute; i < end_compute; i++) {
int64_t feature_ID = in_idsData[i * in_idsStride];
T val = in_valsData[i * in_valsStride];
auto iter = ID_to_index.find(feature_ID);
if (iter == ID_to_index.end()) {
// feature not calibrated
// modulo add operation for new key from feature ID
int64_t ikey = feature_ID % (output_size - total_bins) + total_bins;
out_keysData[(i + output_start - start_compute) * out_keysStride] = ikey;
out_valsData[(i + output_start - start_compute) * out_valsStride] = val;
continue;
}
int64_t ikey = iter->second;
// Perform interpolation
uint64_t offset = offsetData[ikey];
uint64_t next_offset = (ikey == (int64_t)(fsize - 1)) ? total_bins : offsetData[ikey + 1];
uint64_t mainSize = next_offset - offset;
const T *lxsData = xsData + offset;
const int64_t *lysData = ysData + offset;
int64_t okey;
okey = interpolation<T, int64_t>(lxsData, xsStride,
lysData, ysStride,
val, mainSize,
NEAREST, 0);
out_keysData[(i + output_start - start_compute) * out_keysStride] = okey;
out_valsData[(i + output_start - start_compute) * out_valsStride] = 1;
}
}
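// Bin layout consumed above: bin_vals/bin_ids hold the concatenated bins of
// all calibrated features, and feature_offsets[k] is the index where feature
// k's bins start. For example, feature_offsets = {0, 3} with total_bins = 5
// means feature 0 owns bins [0, 3) and feature 1 owns bins [3, 5).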
void discretizerInfer(Tensor &output_keys,
Tensor &output_vals,
const Tensor &input_ids,
const Tensor &input_vals,
const Tensor &bin_ids,
const Tensor &bin_vals,
const Tensor &feature_offsets,
int output_bits,
const Map<int64_t, int64_t> &ID_to_index,
int start_compute,
int end_compute,
int output_start) {
if (input_ids.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "input_ids must be a Long Tensor");
}
if (output_keys.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "output_keys must be a Long Tensor");
}
if (bin_ids.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "bin_ids must be a Long Tensor");
}
if (feature_offsets.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "bin_ids must be a Long Tensor");
}
if (input_vals.getType() != bin_vals.getType()) {
throw twml::Error(TWML_ERR_TYPE,
"Data type of input_vals does not match type of bin_vals");
}
if (bin_vals.getNumDims() != 1) {
throw twml::Error(TWML_ERR_SIZE,
"bin_vals must be 1 Dimensional");
}
if (bin_ids.getNumDims() != 1) {
throw twml::Error(TWML_ERR_SIZE,
"bin_ids must be 1 Dimensional");
}
if (bin_vals.getNumElements() != bin_ids.getNumElements()) {
throw twml::Error(TWML_ERR_SIZE,
"Dimensions of bin_vals and bin_ids do not match");
}
if (feature_offsets.getStride(0) != 1) {
throw twml::Error(TWML_ERR_SIZE,
"feature_offsets must be contiguous");
}
uint64_t size = input_ids.getDim(0);
if (end_compute == -1) {
end_compute = size;
}
if (start_compute < 0 || start_compute >= size) {
throw twml::Error(TWML_ERR_SIZE,
"start_compute out of range");
}
if (end_compute < -1 || end_compute > size) {
throw twml::Error(TWML_ERR_SIZE,
"end_compute out of range");
}
if (start_compute > end_compute && end_compute != -1) {
throw twml::Error(TWML_ERR_SIZE,
"must have start_compute <= end_compute, or end_compute==-1");
}
switch (input_vals.getType()) {
case TWML_TYPE_FLOAT:
twml::discretizerInfer<float>(output_keys, output_vals,
input_ids, input_vals,
bin_ids, bin_vals, feature_offsets, output_bits, ID_to_index,
start_compute, end_compute, output_start);
break;
case TWML_TYPE_DOUBLE:
twml::discretizerInfer<double>(output_keys, output_vals,
input_ids, input_vals,
bin_ids, bin_vals, feature_offsets, output_bits, ID_to_index,
start_compute, end_compute, output_start);
break;
default:
throw twml::Error(TWML_ERR_TYPE,
"Unsupported datatype for discretizerInfer");
}
}
} // namespace twml

View File

@ -0,0 +1,158 @@
#include "internal/error.h"
#include "internal/murmur_hash3.h"
#include "internal/utf_converter.h"
#include <twml/functions.h>
#include <cstring>
#include <algorithm>
namespace twml {
template<typename T>
void add1(Tensor &output, const Tensor input) {
T *odata = output.getData<T>();
const T *idata = input.getData<T>();
const uint64_t num_elements = input.getNumElements();
for (uint64_t i = 0; i < num_elements; i++) {
odata[i] = idata[i] + 1;
}
}
template<typename T>
void copy(Tensor &output, const Tensor input) {
T *odata = output.getData<T>();
const T *idata = input.getData<T>();
const uint64_t num_elements = input.getNumElements();
for (uint64_t i = 0; i < num_elements; i++) {
odata[i] = idata[i];
}
}
void add1(Tensor &output, const Tensor input) {
auto type = input.getType();
if (output.getType() != type) {
throw twml::Error(TWML_ERR_TYPE, "Output type does not match input type");
}
if (output.getNumElements() != input.getNumElements()) {
throw twml::Error(TWML_ERR_SIZE, "Output size does not match input size");
}
// TODO: Implement an easier dispatch function
switch (type) {
case TWML_TYPE_FLOAT:
twml::add1<float>(output, input);
break;
case TWML_TYPE_DOUBLE:
twml::add1<double>(output, input);
break;
default:
throw twml::Error(TWML_ERR_TYPE, "add1 only supports float and double tensors");
}
}
void copy(Tensor &output, const Tensor input) {
auto type = input.getType();
if (output.getType() != type) {
throw twml::Error(TWML_ERR_TYPE, "Output type does not match input type");
}
if (output.getNumElements() != input.getNumElements()) {
throw twml::Error(TWML_ERR_SIZE, "Output size does not match input size");
}
// TODO: Implement an easier dispatch function
switch (type) {
case TWML_TYPE_FLOAT:
twml::copy<float>(output, input);
break;
case TWML_TYPE_DOUBLE:
twml::copy<double>(output, input);
break;
default:
throw twml::Error(TWML_ERR_TYPE, "copy only supports float and double tensors");
}
}
int64_t featureId(const std::string &feature) {
const char *str = feature.c_str();
uint64_t len = feature.size();
int64_t id = 0;
TWML_CHECK(twml_get_feature_id(&id, len, str), "Error getting featureId");
return id;
}
} // namespace twml
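// --- Illustrative sketch (editor's addition, not part of this commit) ------
// featureId maps a feature name to a deterministic 64-bit id via MurmurHash3
// over the name's UTF-16 encoding; names containing '#' are hashed as a
// (prefix, suffix) pair (see twml_get_feature_id_internal below). The guard
// macro is hypothetical.
#ifdef TWML_FEATURE_ID_EXAMPLE
static void example_feature_id() {
int64_t plain = twml::featureId("user.engagement.count");
int64_t pair = twml::featureId("user.gender#male");  // prefix/suffix form
(void)plain;
(void)pair;
}
#endif  // TWML_FEATURE_ID_EXAMPLE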
twml_err twml_add1(twml_tensor output, const twml_tensor input) {
HANDLE_EXCEPTIONS(
auto out = twml::getTensor(output);
auto in = twml::getConstTensor(input);
twml::add1(*out, *in););
return TWML_ERR_NONE;
}
twml_err twml_copy(twml_tensor output, const twml_tensor input) {
HANDLE_EXCEPTIONS(
auto out = twml::getTensor(output);
auto in = twml::getConstTensor(input);
twml::copy(*out, *in););
return TWML_ERR_NONE;
}
inline twml_err twml_get_feature_id_internal(int64_t *result,
uint64_t out_size, uint16_t *out,
uint64_t out2_size, uint16_t *out2,
const uint64_t len, const char *str) {
uint64_t k = 0;
for (uint64_t i = 0; i < len; i++) {
if (str[i] == '#') {
k = i;
break;
}
}
uint8_t hash[16];
if (k != 0) {
ssize_t n = utf8_to_utf16((const uint8_t *) str, k, out, out_size);
if (n < 0) throw std::invalid_argument("error while converting from utf8 to utf16");
MurmurHash3_x64_128(out, n * sizeof(uint16_t), 0, out2);
n = utf8_to_utf16((const uint8_t *) (str + k + 1), len - k - 1, &out2[4], out2_size - 8);
if (n < 0) throw std::invalid_argument("error while converting from utf8 to utf16");
MurmurHash3_x64_128(out2, (n * sizeof(uint16_t)) + 8, 0, hash);
} else {
ssize_t n = utf8_to_utf16((const uint8_t *)str, len, out, out_size);
if (n < 0) throw std::invalid_argument("error while converting from utf8 to utf16");
MurmurHash3_x64_128(out, n * sizeof(uint16_t), 0, hash);
}
int64_t id;
memcpy(&id, hash, sizeof(int64_t));
*result = id;
return TWML_ERR_NONE;
}
static const int UTF16_STR_MAX_SIZE = 1024;
twml_err twml_get_feature_id(int64_t *result, const uint64_t len, const char *str) {
try {
uint16_t out[UTF16_STR_MAX_SIZE];
uint16_t out2[UTF16_STR_MAX_SIZE];
return twml_get_feature_id_internal(result,
UTF16_STR_MAX_SIZE, out,
UTF16_STR_MAX_SIZE, out2,
len, str);
} catch(const std::invalid_argument &ex) {
// If the space on the stack is not enough, try using the heap.
// len + 1 is needed because a null terminating character is added at the end.
std::vector<uint16_t> out(len + 1);
std::vector<uint16_t> out2(len + 1);
return twml_get_feature_id_internal(result,
len + 1, out.data(),
len + 1, out2.data(),
len, str);
}
}

View File

@ -0,0 +1,241 @@
#include "internal/linear_search.h"
#include "internal/error.h"
#include <twml/hashing_discretizer_impl.h>
#include <twml/optim.h>
#include <algorithm>
namespace twml {
template<typename Tx>
static int64_t lower_bound_search(const Tx *data, const Tx val, const int64_t buf_size) {
auto index_temp = std::lower_bound(data, data + buf_size, val);
return static_cast<int64_t>(index_temp - data);
}
template<typename Tx>
static int64_t upper_bound_search(const Tx *data, const Tx val, const int64_t buf_size) {
auto index_temp = std::upper_bound(data, data + buf_size, val);
return static_cast<int64_t>(index_temp - data);
}
template<typename Tx>
using search_method = int64_t (*)(const Tx *, const Tx, const int64_t);
typedef uint64_t (*hash_signature)(uint64_t, int64_t, uint64_t);
// uint64_t integer_multiplicative_hashing()
//
// A function to hash discretized feature_ids into one of 2**output_bits buckets.
// This function hashes the feature_ids to achieve a uniform distribution of
// IDs, so the hashed IDs are far apart with high probability.
// The bucket_index can then simply be added, yielding unique new IDs with high probability.
// We integer-hash again to spread out the new IDs.
// Finally we take the upper output_bits bits as the output bucket.
// Required args:
// feature_id:
// The feature id of the feature to be hashed.
// bucket_index:
// The bucket index of the discretized feature value
// output_bits:
// The number of bits of output space for the features to be hashed into.
//
// Note - feature_ids may have arbitrary distribution within int32s
// Note - 64 bit feature_ids can be processed with this, but the upper
// 32 bits have no effect on the output
// e.g. all feature ids 0 through 255 exist in movie-lens.
// this hashing constant is good for 32 LSBs. will use N=32. (can use N<32 also)
// this hashing constant is co-prime with 2**32, therefore we have that
// a != b, a and b in [0,2**32)
// implies
// f(a) != f(b) where f(x) = (hashing_constant * x) % (2**32)
// note that we are mostly ignoring the upper 32 bits, using modulo 2**32 arithmetic
uint64_t integer_multiplicative_hashing(uint64_t feature_id,
int64_t bucket_index,
uint64_t output_bits) {
// possibly use 14695981039346656037 for 64 bit unsigned??
// = 20921 * 465383 * 1509404459
// alternatively, 14695981039346656039 is prime
// We would also need to use N = 64
const uint64_t hashing_constant = 2654435761;
const uint64_t N = 32;
// hash once to prevent problems from anomalous input id distributions
feature_id *= hashing_constant;
feature_id += bucket_index;
// this hash enables the following right shift operation
// without losing the bucket information (lower bits)
feature_id *= hashing_constant;
// output size is a power of 2
feature_id >>= N - output_bits;
uint64_t mask = (static_cast<uint64_t>(1) << output_bits) - 1;
return mask & feature_id;
}
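// Shape of the computation above, e.g. with output_bits = 4: two multiplicative
// hashes spread ids over the 32-bit ring, the bucket index is folded in between
// them, and the top 4 of the low 32 bits are kept, so results land in [0, 16).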
uint64_t integer64_multiplicative_hashing(uint64_t feature_id,
int64_t bucket_index,
uint64_t output_bits) {
const uint64_t hashing_constant = 14695981039346656039UL;
const uint64_t N = 64;
// hash once to prevent problems from anomalous input id distributions
feature_id *= hashing_constant;
feature_id += bucket_index;
// this hash enables the following right shift operation
// without losing the bucket information (lower bits)
feature_id *= hashing_constant;
// output size is a power of 2
feature_id >>= N - output_bits;
uint64_t mask = (static_cast<uint64_t>(1) << output_bits) - 1;
return mask & feature_id;
}
int64_t option_bits(int64_t options, int64_t high, int64_t low) {
options >>= low;
options &= (1 << (high - low + 1)) - 1;
return options;
}
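// options bit layout, as decoded in hashDiscretizerInfer below:
//   bits 1..0 select the search method (0 lower_bound, 1 linear, 2 upper_bound)
//   bits 4..2 select the hash (0 -> 32-bit, 1 -> 64-bit multiplicative hashing)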
// it is assumed that start_compute and end_compute are valid
template<typename T>
void hashDiscretizerInfer(Tensor &output_keys,
Tensor &output_vals,
const Tensor &input_ids,
const Tensor &input_vals,
const Tensor &bin_vals,
int output_bits,
const Map<int64_t, int64_t> &ID_to_index,
int64_t start_compute,
int64_t end_compute,
int64_t n_bin,
int64_t options) {
auto output_keys_data = output_keys.getData<int64_t>();
auto output_vals_data = output_vals.getData<T>();
auto input_ids_data = input_ids.getData<int64_t>();
auto input_vals_data = input_vals.getData<T>();
auto bin_vals_data = bin_vals.getData<T>();
// The function pointer implementation removes the option_bits
// function call (might be inlined) and corresponding branch from
// the hot loop, but it prevents inlining these functions, so
// there will be function call overhead. Uncertain which would
// be faster, testing needed. Also, code optimizers do weird things...
hash_signature hash_fn = integer_multiplicative_hashing;
switch (option_bits(options, 4, 2)) {
case 0:
hash_fn = integer_multiplicative_hashing;
break;
case 1:
hash_fn = integer64_multiplicative_hashing;
break;
default:
hash_fn = integer_multiplicative_hashing;
}
search_method<T> search_fn = lower_bound_search;
switch (option_bits(options, 1, 0)) {
case 0:
search_fn = lower_bound_search<T>;
break;
case 1:
search_fn = linear_search<T>;
break;
case 2:
search_fn = upper_bound_search<T>;
break;
default:
search_fn = lower_bound_search<T>;
}
for (int64_t i = start_compute; i < end_compute; i++) {
int64_t id = input_ids_data[i];
T val = input_vals_data[i];
auto iter = ID_to_index.find(id);
if (iter != ID_to_index.end()) {
int64_t feature_idx = iter->second;
const T *bin_vals_start = bin_vals_data + feature_idx * n_bin;
int64_t out_bin_idx = search_fn(bin_vals_start, val, n_bin);
output_keys_data[i] = hash_fn(id, out_bin_idx, output_bits);
output_vals_data[i] = 1;
} else {
// feature not calibrated
output_keys_data[i] = id & ((static_cast<int64_t>(1) << output_bits) - 1);
output_vals_data[i] = val;
}
}
}
void hashDiscretizerInfer(Tensor &output_keys,
Tensor &output_vals,
const Tensor &input_ids,
const Tensor &input_vals,
int n_bin,
const Tensor &bin_vals,
int output_bits,
const Map<int64_t, int64_t> &ID_to_index,
int start_compute,
int end_compute,
int64_t options) {
if (input_ids.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "input_ids must be a Long Tensor");
}
if (output_keys.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "output_keys must be a Long Tensor");
}
if (input_vals.getType() != bin_vals.getType()) {
throw twml::Error(TWML_ERR_TYPE,
"Data type of input_vals does not match type of bin_vals");
}
if (bin_vals.getNumDims() != 1) {
throw twml::Error(TWML_ERR_SIZE,
"bin_vals must be 1 Dimensional");
}
uint64_t size = input_ids.getDim(0);
if (end_compute == -1) {
end_compute = size;
}
if (start_compute < 0 || start_compute >= size) {
throw twml::Error(TWML_ERR_SIZE,
"start_compute out of range");
}
if (end_compute < -1 || end_compute > size) {
throw twml::Error(TWML_ERR_SIZE,
"end_compute out of range");
}
if (start_compute > end_compute && end_compute != -1) {
throw twml::Error(TWML_ERR_SIZE,
"must have start_compute <= end_compute, or end_compute==-1");
}
if (output_keys.getStride(0) != 1 || output_vals.getStride(0) != 1 ||
input_ids.getStride(0) != 1 || input_vals.getStride(0) != 1 ||
bin_vals.getStride(0) != 1) {
throw twml::Error(TWML_ERR_SIZE,
"All Strides must be 1.");
}
switch (input_vals.getType()) {
case TWML_TYPE_FLOAT:
twml::hashDiscretizerInfer<float>(output_keys, output_vals,
input_ids, input_vals,
bin_vals, output_bits, ID_to_index,
start_compute, end_compute, n_bin, options);
break;
case TWML_TYPE_DOUBLE:
twml::hashDiscretizerInfer<double>(output_keys, output_vals,
input_ids, input_vals,
bin_vals, output_bits, ID_to_index,
start_compute, end_compute, n_bin, options);
break;
default:
throw twml::Error(TWML_ERR_TYPE,
"Unsupported datatype for hashDiscretizerInfer");
}
}
} // namespace twml

View File

@ -0,0 +1,137 @@
//
// endian_fix.h
// ImageCore
//
// For OSes that use glibc < 2.9 (like RHEL5)
//
#pragma once
#ifdef __APPLE__
#include <libkern/OSByteOrder.h>
#define htobe16(x) OSSwapHostToBigInt16(x)
#define htole16(x) OSSwapHostToLittleInt16(x)
#define betoh16(x) OSSwapBigToHostInt16(x)
#define letoh16(x) OSSwapLittleToHostInt16(x)
#define htobe32(x) OSSwapHostToBigInt32(x)
#define htole32(x) OSSwapHostToLittleInt32(x)
#define betoh32(x) OSSwapBigToHostInt32(x)
#define letoh32(x) OSSwapLittleToHostInt32(x)
#define htobe64(x) OSSwapHostToBigInt64(x)
#define htole64(x) OSSwapHostToLittleInt64(x)
#define betoh64(x) OSSwapBigToHostInt64(x)
#define letoh64(x) OSSwapLittleToHostInt64(x)
#else
#include <endian.h>
#ifdef __USE_BSD
/* Conversion interfaces. */
#include <byteswap.h>
#if __BYTE_ORDER == __LITTLE_ENDIAN
#ifndef htobe16
#define htobe16(x) __bswap_16(x)
#endif
#ifndef htole16
#define htole16(x) (x)
#endif
#ifndef betoh16
#define betoh16(x) __bswap_16(x)
#endif
#ifndef letoh16
#define letoh16(x) (x)
#endif
#ifndef htobe32
#define htobe32(x) __bswap_32(x)
#endif
#ifndef htole32
#define htole32(x) (x)
#endif
#ifndef betoh32
#define betoh32(x) __bswap_32(x)
#endif
#ifndef letoh32
#define letoh32(x) (x)
#endif
#ifndef htobe64
#define htobe64(x) __bswap_64(x)
#endif
#ifndef htole64
#define htole64(x) (x)
#endif
#ifndef betoh64
#define betoh64(x) __bswap_64(x)
#endif
#ifndef letoh64
#define letoh64(x) (x)
#endif
#else /* __BYTE_ORDER == __LITTLE_ENDIAN */
#ifndef htobe16
#define htobe16(x) (x)
#endif
#ifndef htole16
#define htole16(x) __bswap_16(x)
#endif
#ifndef betoh16
#define betoh16(x) (x)
#endif
#ifndef letoh16
#define letoh16(x) __bswap_16(x)
#endif
#ifndef htobe32
#define htobe32(x) (x)
#endif
#ifndef htole32
#define htole32(x) __bswap_32(x)
#endif
#ifndef betoh32
#define betoh32(x) (x)
#endif
#ifndef letoh32
#define letoh32(x) __bswap_32(x)
#endif
#ifndef htobe64
#define htobe64(x) (x)
#endif
#ifndef htole64
#define htole64(x) __bswap_64(x)
#endif
#ifndef betoh64
#define betoh64(x) (x)
#endif
#ifndef letoh64
#define letoh64(x) __bswap_64(x)
#endif
#endif /* __BYTE_ORDER == __LITTLE_ENDIAN */
#else /* __USE_BSD */
#ifndef betoh16
#define betoh16 be16toh
#endif
#ifndef betoh32
#define betoh32 be32toh
#endif
#ifndef betoh64
#define betoh64 be64toh
#endif
#ifndef letoh16
#define letoh16 le16toh
#endif
#ifndef letoh32
#define letoh32 le32toh
#endif
#ifndef letoh64
#define letoh64 le64toh
#endif
#endif /* __USE_BSD */
#endif /* __APPLE__ */

View File

@ -0,0 +1,29 @@
#pragma once
#include <twml/Error.h>
#include <iostream>
#define HANDLE_EXCEPTIONS(fn) do { \
try { \
fn \
} catch(const twml::Error &e) { \
std::cerr << e.what() << std::endl; \
return e.err(); \
} catch(...) { \
std::cerr << "Unknown error\n"; \
return TWML_ERR_UNKNOWN; \
} \
} while(0)
#define TWML_CHECK(fn, msg) do { \
twml_err err = fn; \
if (err == TWML_ERR_NONE) break; \
throw twml::Error(err, msg); \
} while(0)
#define CHECK_THRIFT_TYPE(real_type, expected_type, type) do { \
int real_type_val = real_type; \
if (real_type_val != expected_type) { \
throw twml::ThriftInvalidType(real_type_val, __func__, type); \
} \
} while(0)
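// --- Illustrative sketch (editor's addition) --------------------------------
// Typical use of these macros, mirroring the C API wrappers elsewhere in this
// library: HANDLE_EXCEPTIONS converts C++ exceptions into twml_err return
// codes, and TWML_CHECK converts twml_err codes back into exceptions.
// (twml_do_thing is a hypothetical example function.)
//
//   twml_err twml_do_thing(twml_tensor out) {
//     HANDLE_EXCEPTIONS(
//       auto tensor = twml::getTensor(out);
//       (void)tensor;);
//     return TWML_ERR_NONE;
//   }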

View File

@ -0,0 +1,74 @@
#pragma once
#ifdef __cplusplus
#include <twml/optim.h>
namespace twml {
enum InterpolationMode {LINEAR, NEAREST};
template<typename Tx, typename Ty>
static Tx interpolation(const Tx *xsData, const int64_t xsStride,
const Ty *ysData, const int64_t ysStride,
const Tx val, const int64_t mainSize,
const InterpolationMode mode,
const int64_t lowest,
const bool return_local_index = false) {
int64_t left = 0;
int64_t right = mainSize-1;
if (val <= xsData[0]) {
right = 0;
} else if (val >= xsData[right*xsStride]) {
left = right;
} else {
while (left < right) {
int64_t middle = (left+right)/2;
if (middle < mainSize - 1 &&
val >= xsData[middle*xsStride] &&
val <= xsData[(middle+1)*xsStride]) {
left = middle;
right = middle + 1;
break;
} else if (val > xsData[middle*xsStride]) {
left = middle;
} else {
right = middle;
}
}
if (lowest) {
while (left > 0 &&
val >= xsData[(left - 1) * xsStride] &&
val == xsData[left * xsStride]) {
left--;
right--;
}
}
}
Ty out = 0;
if (return_local_index) {
out = left;
} else if (mode == NEAREST) {
out = ysData[left*ysStride];
} else {
int64_t leftys = left*ysStride;
int64_t rightys = right*ysStride;
int64_t leftxs = left*xsStride;
int64_t rightxs = right*xsStride;
if (right != left+1 ||
xsData[leftxs] == xsData[rightxs]) {
out = ysData[leftys];
} else {
Tx xLeft = xsData[leftxs];
Tx xRight = xsData[rightxs];
Tx yLeft = ysData[leftys];
Tx ratio = (val - xLeft) / (xRight - xLeft);
out = ratio*(ysData[rightys] - yLeft) + yLeft;
}
}
return out;
}
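// Worked example: xs = {0, 2, 4}, ys = {10, 20, 30}, val = 3. The binary
// search settles on left = 1, right = 2; NEAREST returns ysData[1] = 20,
// while LINEAR returns 20 + (3 - 2) / (4 - 2) * (30 - 20) = 25.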
} // namespace twml
#endif

View File

@ -0,0 +1,627 @@
/* The MIT License
Copyright (c) 2008, 2009, 2011 by Attractive Chaos <attractor@live.co.uk>
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
/*
An example:
#include "khash.h"
KHASH_MAP_INIT_INT(32, char)
int main() {
int ret, is_missing;
khiter_t k;
khash_t(32) *h = kh_init(32);
k = kh_put(32, h, 5, &ret);
kh_value(h, k) = 10;
k = kh_get(32, h, 10);
is_missing = (k == kh_end(h));
k = kh_get(32, h, 5);
kh_del(32, h, k);
for (k = kh_begin(h); k != kh_end(h); ++k)
if (kh_exist(h, k)) kh_value(h, k) = 1;
kh_destroy(32, h);
return 0;
}
*/
/*
2013-05-02 (0.2.8):
* Use quadratic probing. When the capacity is power of 2, stepping function
i*(i+1)/2 guarantees to traverse each bucket. It is better than double
hashing on cache performance and is more robust than linear probing.
In theory, double hashing should be more robust than quadratic probing.
However, my implementation is probably not for large hash tables, because
the second hash function is closely tied to the first hash function,
which reduce the effectiveness of double hashing.
Reference: http://research.cs.vt.edu/AVresearch/hashing/quadratic.php
2011-12-29 (0.2.7):
* Minor code clean up; no actual effect.
2011-09-16 (0.2.6):
* The capacity is a power of 2. This seems to dramatically improve the
speed for simple keys. Thank Zilong Tan for the suggestion. Reference:
- http://code.google.com/p/ulib/
- http://nothings.org/computer/judy/
* Allow to optionally use linear probing which usually has better
performance for random input. Double hashing is still the default as it
is more robust to certain non-random input.
* Added Wang's integer hash function (not used by default). This hash
function is more robust to certain non-random input.
2011-02-14 (0.2.5):
* Allow declaring global functions.
2009-09-26 (0.2.4):
* Improve portability
2008-09-19 (0.2.3):
* Corrected the example
* Improved interfaces
2008-09-11 (0.2.2):
* Improved speed a little in kh_put()
2008-09-10 (0.2.1):
* Added kh_clear()
* Fixed a compiling error
2008-09-02 (0.2.0):
* Changed to token concatenation which increases flexibility.
2008-08-31 (0.1.2):
* Fixed a bug in kh_get(), which had not been tested previously.
2008-08-31 (0.1.1):
* Added destructor
*/
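/* Editorial sketch (hypothetical, not part of khash): the 0.2.8 entry above
   claims the triangular-number probe offsets 1, 3, 6, 10, ... visit every
   bucket when the capacity n is a power of two. A brute-force check: */
#ifdef KHASH_PROBE_DEMO
#include <stdio.h>
static int probe_covers_all(unsigned n) { /* n must be a power of two, <= 256 */
  unsigned char hit[256] = {0};
  unsigned mask = n - 1, i = 0, step, seen = 0;
  for (step = 0; step < n; ++step) {
    i = (i + step) & mask;  /* same update kh_get/kh_put use: (i + (++step)) & mask */
    if (!hit[i]) { hit[i] = 1; ++seen; }
  }
  return seen == n;
}
int main(void) { printf("%d\n", probe_covers_all(256)); return 0; } /* prints 1 */
#endif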
#ifndef __AC_KHASH_H
#define __AC_KHASH_H
/*!
@header
Generic hash table library.
*/
#define AC_VERSION_KHASH_H "0.2.8"
#include <stdlib.h>
#include <string.h>
#include <limits.h>
/* compiler specific configuration */
#if UINT_MAX == 0xffffffffu
typedef unsigned int khint32_t;
#elif ULONG_MAX == 0xffffffffu
typedef unsigned long khint32_t;
#endif
#if ULONG_MAX == ULLONG_MAX
typedef unsigned long khint64_t;
#else
typedef unsigned long long khint64_t;
#endif
#ifndef kh_inline
#ifdef _MSC_VER
#define kh_inline __inline
#else
#define kh_inline inline
#endif
#endif /* kh_inline */
#ifndef klib_unused
#if (defined __clang__ && __clang_major__ >= 3) || (defined __GNUC__ && __GNUC__ >= 3)
#define klib_unused __attribute__ ((__unused__))
#else
#define klib_unused
#endif
#endif /* klib_unused */
typedef khint32_t khint_t;
typedef khint_t khiter_t;
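/* Editorial note: the flag macros below pack two metadata bits per bucket,
   16 buckets per khint32_t word; for bucket i, bit 2*(i&15)+1 means "empty"
   and bit 2*(i&15) means "deleted". memset(flags, 0xaa, ...) in kh_clear
   and kh_resize therefore marks every bucket empty-and-not-deleted. */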
#define __ac_isempty(flag, i) ((flag[i>>4]>>((i&0xfU)<<1))&2)
#define __ac_isdel(flag, i) ((flag[i>>4]>>((i&0xfU)<<1))&1)
#define __ac_iseither(flag, i) ((flag[i>>4]>>((i&0xfU)<<1))&3)
#define __ac_set_isdel_false(flag, i) (flag[i>>4]&=~(1ul<<((i&0xfU)<<1)))
#define __ac_set_isempty_false(flag, i) (flag[i>>4]&=~(2ul<<((i&0xfU)<<1)))
#define __ac_set_isboth_false(flag, i) (flag[i>>4]&=~(3ul<<((i&0xfU)<<1)))
#define __ac_set_isdel_true(flag, i) (flag[i>>4]|=1ul<<((i&0xfU)<<1))
#define __ac_fsize(m) ((m) < 16? 1 : (m)>>4)
#ifndef kroundup32
#define kroundup32(x) (--(x), (x)|=(x)>>1, (x)|=(x)>>2, (x)|=(x)>>4, (x)|=(x)>>8, (x)|=(x)>>16, ++(x))
#endif
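/* Editorial note: kroundup32 rounds up to the next power of two by smearing
   the highest set bit into all lower positions, e.g. 33 -> 64, 64 -> 64. */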
#ifndef kcalloc
#define kcalloc(N,Z) calloc(N,Z)
#endif
#ifndef kmalloc
#define kmalloc(Z) malloc(Z)
#endif
#ifndef krealloc
#define krealloc(P,Z) realloc(P,Z)
#endif
#ifndef kfree
#define kfree(P) free(P)
#endif
static const double __ac_HASH_UPPER = 0.77;
#define __KHASH_TYPE(name, khkey_t, khval_t) \
typedef struct kh_##name##_s { \
khint_t n_buckets, size, n_occupied, upper_bound; \
khint32_t *flags; \
khkey_t *keys; \
khval_t *vals; \
} kh_##name##_t;
#define __KHASH_PROTOTYPES(name, khkey_t, khval_t) \
extern kh_##name##_t *kh_init_##name(void); \
extern void kh_destroy_##name(kh_##name##_t *h); \
extern void kh_clear_##name(kh_##name##_t *h); \
extern khint_t kh_get_##name(const kh_##name##_t *h, khkey_t key); \
extern int kh_resize_##name(kh_##name##_t *h, khint_t new_n_buckets); \
extern khint_t kh_put_##name(kh_##name##_t *h, khkey_t key, int *ret); \
extern void kh_del_##name(kh_##name##_t *h, khint_t x);
#define __KHASH_IMPL(name, SCOPE, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal) \
SCOPE kh_##name##_t *kh_init_##name(void) { \
return (kh_##name##_t*)kcalloc(1, sizeof(kh_##name##_t)); \
} \
SCOPE void kh_destroy_##name(kh_##name##_t *h) \
{ \
if (h) { \
kfree((void *)h->keys); kfree(h->flags); \
kfree((void *)h->vals); \
kfree(h); \
} \
} \
SCOPE void kh_clear_##name(kh_##name##_t *h) \
{ \
if (h && h->flags) { \
memset(h->flags, 0xaa, __ac_fsize(h->n_buckets) * sizeof(khint32_t)); \
h->size = h->n_occupied = 0; \
} \
} \
SCOPE khint_t kh_get_##name(const kh_##name##_t *h, khkey_t key) \
{ \
if (h->n_buckets) { \
khint_t k, i, last, mask, step = 0; \
mask = h->n_buckets - 1; \
k = __hash_func(key); i = k & mask; \
last = i; \
while (!__ac_isempty(h->flags, i) && (__ac_isdel(h->flags, i) || !__hash_equal(h->keys[i], key))) { \
i = (i + (++step)) & mask; \
if (i == last) return h->n_buckets; \
} \
return __ac_iseither(h->flags, i)? h->n_buckets : i; \
} else return 0; \
} \
SCOPE int kh_resize_##name(kh_##name##_t *h, khint_t new_n_buckets) \
{ /* This function uses 0.25*n_buckets bytes of working space instead of [sizeof(key_t+val_t)+.25]*n_buckets. */ \
khint32_t *new_flags = 0; \
khint_t j = 1; \
{ \
kroundup32(new_n_buckets); \
if (new_n_buckets < 4) new_n_buckets = 4; \
if (h->size >= (khint_t)(new_n_buckets * __ac_HASH_UPPER + 0.5)) j = 0; /* requested size is too small */ \
else { /* hash table size to be changed (shrink or expand); rehash */ \
new_flags = (khint32_t*)kmalloc(__ac_fsize(new_n_buckets) * sizeof(khint32_t)); \
if (!new_flags) return -1; \
memset(new_flags, 0xaa, __ac_fsize(new_n_buckets) * sizeof(khint32_t)); \
if (h->n_buckets < new_n_buckets) { /* expand */ \
khkey_t *new_keys = (khkey_t*)krealloc((void *)h->keys, new_n_buckets * sizeof(khkey_t)); \
if (!new_keys) { kfree(new_flags); return -1; } \
h->keys = new_keys; \
if (kh_is_map) { \
khval_t *new_vals = (khval_t*)krealloc((void *)h->vals, new_n_buckets * sizeof(khval_t)); \
if (!new_vals) { kfree(new_flags); return -1; } \
h->vals = new_vals; \
} \
} /* otherwise shrink */ \
} \
} \
if (j) { /* rehashing is needed */ \
for (j = 0; j != h->n_buckets; ++j) { \
if (__ac_iseither(h->flags, j) == 0) { \
khkey_t key = h->keys[j]; \
khval_t val; \
khint_t new_mask; \
new_mask = new_n_buckets - 1; \
if (kh_is_map) val = h->vals[j]; \
__ac_set_isdel_true(h->flags, j); \
while (1) { /* kick-out process; sort of like in Cuckoo hashing */ \
khint_t k, i, step = 0; \
k = __hash_func(key); \
i = k & new_mask; \
while (!__ac_isempty(new_flags, i)) i = (i + (++step)) & new_mask; \
__ac_set_isempty_false(new_flags, i); \
if (i < h->n_buckets && __ac_iseither(h->flags, i) == 0) { /* kick out the existing element */ \
{ khkey_t tmp = h->keys[i]; h->keys[i] = key; key = tmp; } \
if (kh_is_map) { khval_t tmp = h->vals[i]; h->vals[i] = val; val = tmp; } \
__ac_set_isdel_true(h->flags, i); /* mark it as deleted in the old hash table */ \
} else { /* write the element and jump out of the loop */ \
h->keys[i] = key; \
if (kh_is_map) h->vals[i] = val; \
break; \
} \
} \
} \
} \
if (h->n_buckets > new_n_buckets) { /* shrink the hash table */ \
h->keys = (khkey_t*)krealloc((void *)h->keys, new_n_buckets * sizeof(khkey_t)); \
if (kh_is_map) h->vals = (khval_t*)krealloc((void *)h->vals, new_n_buckets * sizeof(khval_t)); \
} \
kfree(h->flags); /* free the working space */ \
h->flags = new_flags; \
h->n_buckets = new_n_buckets; \
h->n_occupied = h->size; \
h->upper_bound = (khint_t)(h->n_buckets * __ac_HASH_UPPER + 0.5); \
} \
return 0; \
} \
SCOPE khint_t kh_put_##name(kh_##name##_t *h, khkey_t key, int *ret) \
{ \
khint_t x; \
if (h->n_occupied >= h->upper_bound) { /* update the hash table */ \
if (h->n_buckets > (h->size<<1)) { \
if (kh_resize_##name(h, h->n_buckets - 1) < 0) { /* clear "deleted" elements */ \
*ret = -1; return h->n_buckets; \
} \
} else if (kh_resize_##name(h, h->n_buckets + 1) < 0) { /* expand the hash table */ \
*ret = -1; return h->n_buckets; \
} \
} /* TODO: implement automatic shrinking; resize() already supports shrinking */ \
{ \
khint_t k, i, site, last, mask = h->n_buckets - 1, step = 0; \
x = site = h->n_buckets; k = __hash_func(key); i = k & mask; \
if (__ac_isempty(h->flags, i)) x = i; /* for speed up */ \
else { \
last = i; \
while (!__ac_isempty(h->flags, i) && (__ac_isdel(h->flags, i) || !__hash_equal(h->keys[i], key))) { \
if (__ac_isdel(h->flags, i)) site = i; \
i = (i + (++step)) & mask; \
if (i == last) { x = site; break; } \
} \
if (x == h->n_buckets) { \
if (__ac_isempty(h->flags, i) && site != h->n_buckets) x = site; \
else x = i; \
} \
} \
} \
if (__ac_isempty(h->flags, x)) { /* not present at all */ \
h->keys[x] = key; \
__ac_set_isboth_false(h->flags, x); \
++h->size; ++h->n_occupied; \
*ret = 1; \
} else if (__ac_isdel(h->flags, x)) { /* deleted */ \
h->keys[x] = key; \
__ac_set_isboth_false(h->flags, x); \
++h->size; \
*ret = 2; \
} else *ret = 0; /* Don't touch h->keys[x] if present and not deleted */ \
return x; \
} \
SCOPE void kh_del_##name(kh_##name##_t *h, khint_t x) \
{ \
if (x != h->n_buckets && !__ac_iseither(h->flags, x)) { \
__ac_set_isdel_true(h->flags, x); \
--h->size; \
} \
}
#define KHASH_DECLARE(name, khkey_t, khval_t) \
__KHASH_TYPE(name, khkey_t, khval_t) \
__KHASH_PROTOTYPES(name, khkey_t, khval_t)
#define KHASH_INIT2(name, SCOPE, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal) \
__KHASH_TYPE(name, khkey_t, khval_t) \
__KHASH_IMPL(name, SCOPE, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal)
#define KHASH_INIT(name, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal) \
KHASH_INIT2(name, static kh_inline klib_unused, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal)
/* --- BEGIN OF HASH FUNCTIONS --- */
/*! @function
@abstract Integer hash function
@param key The integer [khint32_t]
@return The hash value [khint_t]
*/
#define kh_int_hash_func(key) (khint32_t)(key)
/*! @function
@abstract Integer comparison function
*/
#define kh_int_hash_equal(a, b) ((a) == (b))
/*! @function
@abstract 64-bit integer hash function
@param key The integer [khint64_t]
@return The hash value [khint_t]
*/
#define kh_int64_hash_func(key) (khint32_t)((key)>>33^(key)^(key)<<11)
/*! @function
@abstract 64-bit integer comparison function
*/
#define kh_int64_hash_equal(a, b) ((a) == (b))
/*! @function
@abstract const char* hash function
@param s Pointer to a null terminated string
@return The hash value
*/
static kh_inline khint_t __ac_X31_hash_string(const char *s)
{
khint_t h = (khint_t)*s;
if (h) for (++s ; *s; ++s) h = (h << 5) - h + (khint_t)*s;
return h;
}
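/* Editorial note: (h << 5) - h equals 31*h, so this is the classic
   X31 / Java-style polynomial string hash h = h*31 + c. */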
/*! @function
@abstract Another interface to const char* hash function
@param key Pointer to a null terminated string [const char*]
@return The hash value [khint_t]
*/
#define kh_str_hash_func(key) __ac_X31_hash_string(key)
/*! @function
@abstract Const char* comparison function
*/
#define kh_str_hash_equal(a, b) (strcmp(a, b) == 0)
static kh_inline khint_t __ac_Wang_hash(khint_t key)
{
key += ~(key << 15);
key ^= (key >> 10);
key += (key << 3);
key ^= (key >> 6);
key += ~(key << 11);
key ^= (key >> 16);
return key;
}
#define kh_int_hash_func2(key) __ac_Wang_hash((khint_t)key)
/* --- END OF HASH FUNCTIONS --- */
/* Other convenient macros... */
/*!
@abstract Type of the hash table.
@param name Name of the hash table [symbol]
*/
#define khash_t(name) kh_##name##_t
/*! @function
@abstract Initiate a hash table.
@param name Name of the hash table [symbol]
@return Pointer to the hash table [khash_t(name)*]
*/
#define kh_init(name) kh_init_##name()
/*! @function
@abstract Destroy a hash table.
@param name Name of the hash table [symbol]
@param h Pointer to the hash table [khash_t(name)*]
*/
#define kh_destroy(name, h) kh_destroy_##name(h)
/*! @function
@abstract Reset a hash table without deallocating memory.
@param name Name of the hash table [symbol]
@param h Pointer to the hash table [khash_t(name)*]
*/
#define kh_clear(name, h) kh_clear_##name(h)
/*! @function
@abstract Resize a hash table.
@param name Name of the hash table [symbol]
@param h Pointer to the hash table [khash_t(name)*]
@param s New size [khint_t]
*/
#define kh_resize(name, h, s) kh_resize_##name(h, s)
/*! @function
@abstract Insert a key to the hash table.
@param name Name of the hash table [symbol]
@param h Pointer to the hash table [khash_t(name)*]
@param k Key [type of keys]
@param r Extra return code: -1 if the operation failed;
0 if the key is present in the hash table;
1 if the bucket is empty (never used); 2 if the element in
the bucket has been deleted [int*]
@return Iterator to the inserted element [khint_t]
*/
#define kh_put(name, h, k, r) kh_put_##name(h, k, r)
/*! @function
@abstract Retrieve a key from the hash table.
@param name Name of the hash table [symbol]
@param h Pointer to the hash table [khash_t(name)*]
@param k Key [type of keys]
@return Iterator to the found element, or kh_end(h) if the element is absent [khint_t]
*/
#define kh_get(name, h, k) kh_get_##name(h, k)
/*! @function
@abstract Remove a key from the hash table.
@param name Name of the hash table [symbol]
@param h Pointer to the hash table [khash_t(name)*]
@param k Iterator to the element to be deleted [khint_t]
*/
#define kh_del(name, h, k) kh_del_##name(h, k)
/*! @function
@abstract Test whether a bucket contains data.
@param h Pointer to the hash table [khash_t(name)*]
@param x Iterator to the bucket [khint_t]
@return 1 if containing data; 0 otherwise [int]
*/
#define kh_exist(h, x) (!__ac_iseither((h)->flags, (x)))
/*! @function
@abstract Get key given an iterator
@param h Pointer to the hash table [khash_t(name)*]
@param x Iterator to the bucket [khint_t]
@return Key [type of keys]
*/
#define kh_key(h, x) ((h)->keys[x])
/*! @function
@abstract Get value given an iterator
@param h Pointer to the hash table [khash_t(name)*]
@param x Iterator to the bucket [khint_t]
@return Value [type of values]
@discussion For hash sets, calling this results in segfault.
*/
#define kh_val(h, x) ((h)->vals[x])
/*! @function
@abstract Alias of kh_val()
*/
#define kh_value(h, x) ((h)->vals[x])
/*! @function
@abstract Get the start iterator
@param h Pointer to the hash table [khash_t(name)*]
@return The start iterator [khint_t]
*/
#define kh_begin(h) (khint_t)(0)
/*! @function
@abstract Get the end iterator
@param h Pointer to the hash table [khash_t(name)*]
@return The end iterator [khint_t]
*/
#define kh_end(h) ((h)->n_buckets)
/*! @function
@abstract Get the number of elements in the hash table
@param h Pointer to the hash table [khash_t(name)*]
@return Number of elements in the hash table [khint_t]
*/
#define kh_size(h) ((h)->size)
/*! @function
@abstract Get the number of buckets in the hash table
@param h Pointer to the hash table [khash_t(name)*]
@return Number of buckets in the hash table [khint_t]
*/
#define kh_n_buckets(h) ((h)->n_buckets)
/*! @function
@abstract Iterate over the entries in the hash table
@param h Pointer to the hash table [khash_t(name)*]
@param kvar Variable to which key will be assigned
@param vvar Variable to which value will be assigned
@param code Block of code to execute
*/
#define kh_foreach(h, kvar, vvar, code) { khint_t __i; \
for (__i = kh_begin(h); __i != kh_end(h); ++__i) { \
if (!kh_exist(h,__i)) continue; \
(kvar) = kh_key(h,__i); \
(vvar) = kh_val(h,__i); \
code; \
} }
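/* Editorial sketch (hypothetical usage): with a table h created via
   KHASH_MAP_INIT_INT(32, int), kh_foreach sums all values as
     khint32_t k; int v; long total = 0;
     kh_foreach(h, k, v, total += v);
   kvar and vvar must be assignable lvalues of the key and value types. */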
/*! @function
@abstract Iterate over the values in the hash table
@param h Pointer to the hash table [khash_t(name)*]
@param vvar Variable to which value will be assigned
@param code Block of code to execute
*/
#define kh_foreach_value(h, vvar, code) { khint_t __i; \
for (__i = kh_begin(h); __i != kh_end(h); ++__i) { \
if (!kh_exist(h,__i)) continue; \
(vvar) = kh_val(h,__i); \
code; \
} }
/* More convenient interfaces */
/*! @function
@abstract Instantiate a hash set containing integer keys
@param name Name of the hash table [symbol]
*/
#define KHASH_SET_INIT_INT(name) \
KHASH_INIT(name, khint32_t, char, 0, kh_int_hash_func, kh_int_hash_equal)
/*! @function
@abstract Instantiate a hash map containing integer keys
@param name Name of the hash table [symbol]
@param khval_t Type of values [type]
*/
#define KHASH_MAP_INIT_INT(name, khval_t) \
KHASH_INIT(name, khint32_t, khval_t, 1, kh_int_hash_func, kh_int_hash_equal)
/*! @function
@abstract Instantiate a hash set containing 64-bit integer keys
@param name Name of the hash table [symbol]
*/
#define KHASH_SET_INIT_INT64(name) \
KHASH_INIT(name, khint64_t, char, 0, kh_int64_hash_func, kh_int64_hash_equal)
/*! @function
@abstract Instantiate a hash map containing 64-bit integer keys
@param name Name of the hash table [symbol]
@param khval_t Type of values [type]
*/
#define KHASH_MAP_INIT_INT64(name, khval_t) \
KHASH_INIT(name, khint64_t, khval_t, 1, kh_int64_hash_func, kh_int64_hash_equal)
typedef const char *kh_cstr_t;
/*! @function
@abstract Instantiate a hash map containing const char* keys
@param name Name of the hash table [symbol]
*/
#define KHASH_SET_INIT_STR(name) \
KHASH_INIT(name, kh_cstr_t, char, 0, kh_str_hash_func, kh_str_hash_equal)
/*! @function
@abstract Instantiate a hash map containing const char* keys
@param name Name of the hash table [symbol]
@param khval_t Type of values [type]
*/
#define KHASH_MAP_INIT_STR(name, khval_t) \
KHASH_INIT(name, kh_cstr_t, khval_t, 1, kh_str_hash_func, kh_str_hash_equal)
#endif /* __AC_KHASH_H */

View File

@ -0,0 +1,17 @@
#pragma once
#ifdef __cplusplus
#include <twml/optim.h>
namespace twml {
template<typename Tx>
static int64_t linear_search(const Tx *xsData, const Tx val, const int64_t mainSize) {
int64_t left = 0;
int64_t right = mainSize-1;
while(left <= right && val > xsData[left])
left++;
return left;
}
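// Editorial sketch (hypothetical check, not part of the library): on a
// sorted array, linear_search returns the first index i with xs[i] >= val
// (or mainSize when every element is smaller), i.e. a lower-bound index.
template<typename Tx>
static bool linear_search_matches_lower_bound(const Tx *xs, int64_t n, Tx val) {
  int64_t expected = 0;
  while (expected < n && xs[expected] < val) expected++;  // brute-force lower bound
  return linear_search(xs, val, n) == expected;
}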
} // namespace twml
#endif

View File

@ -0,0 +1,37 @@
//-----------------------------------------------------------------------------
// MurmurHash3 was written by Austin Appleby, and is placed in the public
// domain. The author hereby disclaims copyright to this source code.
#ifndef _MURMURHASH3_H_
#define _MURMURHASH3_H_
//-----------------------------------------------------------------------------
// Platform-specific functions and macros
// Microsoft Visual Studio
#if defined(_MSC_VER) && (_MSC_VER < 1600)
typedef unsigned char uint8_t;
typedef unsigned int uint32_t;
typedef unsigned __int64 uint64_t;
// Other compilers
#else // defined(_MSC_VER)
#include <stdint.h>
#endif // !defined(_MSC_VER)
//-----------------------------------------------------------------------------
void MurmurHash3_x86_32 ( const void * key, int len, uint32_t seed, void * out );
void MurmurHash3_x86_128 ( const void * key, int len, uint32_t seed, void * out );
void MurmurHash3_x64_128 ( const void * key, int len, uint32_t seed, void * out );
//-----------------------------------------------------------------------------
#endif // _MURMURHASH3_H_

View File

@ -0,0 +1,69 @@
// For details of how to encode and decode thrift, check
// https://github.com/apache/thrift/blob/master/doc/specs/thrift-binary-protocol.md
// Definitions of the thrift binary format
typedef enum {
TTYPE_STOP = 0,
TTYPE_VOID = 1,
TTYPE_BOOL = 2,
TTYPE_BYTE = 3,
TTYPE_DOUBLE = 4,
TTYPE_I16 = 6,
TTYPE_I32 = 8,
TTYPE_I64 = 10,
TTYPE_STRING = 11,
TTYPE_STRUCT = 12,
TTYPE_MAP = 13,
TTYPE_SET = 14,
TTYPE_LIST = 15,
TTYPE_ENUM = 16,
} TTYPES;
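/* Editorial sketch (hypothetical helper, not part of this file): in the
   Thrift binary protocol each struct field is encoded as a 1-byte type tag
   (one of the TTYPES above) followed by a big-endian i16 field id, then the
   field value; a TTYPE_STOP byte terminates the struct. */
static const unsigned char *read_field_header(const unsigned char *buf,
                                              unsigned char *type,
                                              short *field_id) {
  *type = buf[0];
  if (*type == TTYPE_STOP) return buf + 1;      /* end of struct */
  *field_id = (short)((buf[1] << 8) | buf[2]);  /* big-endian i16 */
  return buf + 3;                               /* value bytes follow */
}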
// Fields of a batch prediction response
typedef enum {
BPR_DUMMY ,
BPR_PREDICTIONS,
} BPR_FIELDS;
// Fields of a datarecord
typedef enum {
DR_CROSS , // fake field for crosses
DR_BINARY ,
DR_CONTINUOUS ,
DR_DISCRETE ,
DR_STRING ,
DR_SPARSE_BINARY ,
DR_SPARSE_CONTINUOUS ,
DR_BLOB ,
DR_GENERAL_TENSOR ,
DR_SPARSE_TENSOR ,
} DR_FIELDS;
// Fields for General tensor
typedef enum {
GT_DUMMY , // dummy field
GT_RAW ,
GT_STRING ,
GT_INT32 ,
GT_INT64 ,
GT_FLOAT ,
GT_DOUBLE ,
GT_BOOL ,
} GT_FIELDS;
typedef enum {
SP_DUMMY , // dummy field
SP_COO ,
} SP_FIELDS;
// Enum values from tensor.thrift
typedef enum {
DATA_TYPE_FLOAT ,
DATA_TYPE_DOUBLE ,
DATA_TYPE_INT32 ,
DATA_TYPE_INT64 ,
DATA_TYPE_UINT8 ,
DATA_TYPE_STRING ,
DATA_TYPE_BYTE ,
DATA_TYPE_BOOL ,
} DATA_TYPES;

View File

@ -0,0 +1,10 @@
#ifndef _UTF_CONVERTER_H_
#define _UTF_CONVERTER_H_
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
ssize_t utf8_to_utf16(const uint8_t *in, uint64_t in_len, uint16_t *out, uint64_t max_out);
#endif

View File

@ -0,0 +1,61 @@
#include <twml/io/IOError.h>
namespace twml {
namespace io {
namespace {
std::string messageFromStatus(IOError::Status status) {
switch (status) {
case IOError::OUT_OF_RANGE:
return "failed to read enough input";
case IOError::WRONG_MAGIC:
return "wrong magic in stream";
case IOError::WRONG_HEADER:
return "wrong header in stream";
case IOError::ERROR_HEADER_CHECKSUM:
return "header checksum doesn't match";
case IOError::INVALID_METHOD:
return "using invalid method";
case IOError::USING_RESERVED:
return "using reserved flag";
case IOError::ERROR_HEADER_EXTRA_FIELD_CHECKSUM:
return "extra header field checksum doesn't match";
case IOError::CANT_FIT_OUTPUT:
return "can't fit output in the given space";
case IOError::SPLIT_FILE:
return "split files aren't supported";
case IOError::BLOCK_SIZE_TOO_LARGE:
return "block size is too large";
case IOError::SOURCE_LARGER_THAN_DESTINATION:
return "source is larger than destination";
case IOError::DESTINATION_LARGER_THAN_CAPACITY:
return "destination buffer is too small to fit uncompressed result";
case IOError::HEADER_FLAG_MISMATCH:
return "failed to match flags for compressed and decompressed data";
case IOError::NOT_ENOUGH_INPUT:
return "not enough input to proceed with decompression";
case IOError::ERROR_SOURCE_BLOCK_CHECKSUM:
return "source block checksum doesn't match";
case IOError::COMPRESSED_DATA_VIOLATION:
return "error occurred while decompressing the data";
case IOError::ERROR_DESTINATION_BLOCK_CHECKSUM:
return "destination block checksum doesn't match";
case IOError::EMPTY_RECORD:
return "can't write an empty record";
case IOError::MALFORMED_MEMORY_RECORD:
return "can't write malformed record";
case IOError::UNSUPPORTED_OUTPUT_TYPE:
return "output data type is not supported";
case IOError::OTHER_ERROR:
default:
return "unknown error occurred";
}
}
} // namespace
IOError::IOError(Status status): twml::Error(TWML_ERR_IO, "Found error while processing stream: " +
messageFromStatus(status)), m_status(status) {}
} // namespace io
} // namespace twml

View File

@ -0,0 +1,335 @@
//-----------------------------------------------------------------------------
// MurmurHash3 was written by Austin Appleby, and is placed in the public
// domain. The author hereby disclaims copyright to this source code.
// Note - The x86 and x64 versions do _not_ produce the same results, as the
// algorithms are optimized for their respective platforms. You can still
// compile and run any of them on any platform, but your performance with the
// non-native version will be less than optimal.
#include "internal/murmur_hash3.h"
//-----------------------------------------------------------------------------
// Platform-specific functions and macros
// Microsoft Visual Studio
#if defined(_MSC_VER)
#define FORCE_INLINE __forceinline
#include <stdlib.h>
#define ROTL32(x,y) _rotl(x,y)
#define ROTL64(x,y) _rotl64(x,y)
#define BIG_CONSTANT(x) (x)
// Other compilers
#else // defined(_MSC_VER)
#define FORCE_INLINE inline __attribute__((always_inline))
FORCE_INLINE uint32_t rotl32 ( uint32_t x, int8_t r )
{
return (x << r) | (x >> (32 - r));
}
FORCE_INLINE uint64_t rotl64 ( uint64_t x, int8_t r )
{
return (x << r) | (x >> (64 - r));
}
#define ROTL32(x,y) rotl32(x,y)
#define ROTL64(x,y) rotl64(x,y)
#define BIG_CONSTANT(x) (x##LLU)
#endif // !defined(_MSC_VER)
//-----------------------------------------------------------------------------
// Block read - if your platform needs to do endian-swapping or can only
// handle aligned reads, do the conversion here
FORCE_INLINE uint32_t getblock32 ( const uint32_t * p, int i )
{
return p[i];
}
FORCE_INLINE uint64_t getblock64 ( const uint64_t * p, int i )
{
return p[i];
}
//-----------------------------------------------------------------------------
// Finalization mix - force all bits of a hash block to avalanche
FORCE_INLINE uint32_t fmix32 ( uint32_t h )
{
h ^= h >> 16;
h *= 0x85ebca6b;
h ^= h >> 13;
h *= 0xc2b2ae35;
h ^= h >> 16;
return h;
}
//----------
FORCE_INLINE uint64_t fmix64 ( uint64_t k )
{
k ^= k >> 33;
k *= BIG_CONSTANT(0xff51afd7ed558ccd);
k ^= k >> 33;
k *= BIG_CONSTANT(0xc4ceb9fe1a85ec53);
k ^= k >> 33;
return k;
}
//-----------------------------------------------------------------------------
void MurmurHash3_x86_32 ( const void * key, int len,
uint32_t seed, void * out )
{
const uint8_t * data = (const uint8_t*)key;
const int nblocks = len / 4;
uint32_t h1 = seed;
const uint32_t c1 = 0xcc9e2d51;
const uint32_t c2 = 0x1b873593;
//----------
// body
const uint32_t * blocks = (const uint32_t *)(data + nblocks*4);
for(int i = -nblocks; i; i++)
{
uint32_t k1 = getblock32(blocks,i);
k1 *= c1;
k1 = ROTL32(k1,15);
k1 *= c2;
h1 ^= k1;
h1 = ROTL32(h1,13);
h1 = h1*5+0xe6546b64;
}
//----------
// tail
const uint8_t * tail = (const uint8_t*)(data + nblocks*4);
uint32_t k1 = 0;
switch(len & 3)
{
case 3: k1 ^= tail[2] << 16;
case 2: k1 ^= tail[1] << 8;
case 1: k1 ^= tail[0];
k1 *= c1; k1 = ROTL32(k1,15); k1 *= c2; h1 ^= k1;
};
//----------
// finalization
h1 ^= len;
h1 = fmix32(h1);
*(uint32_t*)out = h1;
}
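/* Editorial sketch (hypothetical usage, not part of the file): a typical
   call site for the 32-bit variant; distinct seeds yield independent
   hash functions over the same key bytes. */
#ifdef MURMUR3_DEMO
#include <stdio.h>
#include <string.h>
int main(void) {
  const char *key = "feature_name";
  uint32_t h = 0;
  MurmurHash3_x86_32(key, (int)strlen(key), 42u /* seed */, &h);
  printf("%08x\n", h);
  return 0;
}
#endif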
//-----------------------------------------------------------------------------
void MurmurHash3_x86_128 ( const void * key, const int len,
uint32_t seed, void * out )
{
const uint8_t * data = (const uint8_t*)key;
const int nblocks = len / 16;
uint32_t h1 = seed;
uint32_t h2 = seed;
uint32_t h3 = seed;
uint32_t h4 = seed;
const uint32_t c1 = 0x239b961b;
const uint32_t c2 = 0xab0e9789;
const uint32_t c3 = 0x38b34ae5;
const uint32_t c4 = 0xa1e38b93;
//----------
// body
const uint32_t * blocks = (const uint32_t *)(data + nblocks*16);
for(int i = -nblocks; i; i++)
{
uint32_t k1 = getblock32(blocks,i*4+0);
uint32_t k2 = getblock32(blocks,i*4+1);
uint32_t k3 = getblock32(blocks,i*4+2);
uint32_t k4 = getblock32(blocks,i*4+3);
k1 *= c1; k1 = ROTL32(k1,15); k1 *= c2; h1 ^= k1;
h1 = ROTL32(h1,19); h1 += h2; h1 = h1*5+0x561ccd1b;
k2 *= c2; k2 = ROTL32(k2,16); k2 *= c3; h2 ^= k2;
h2 = ROTL32(h2,17); h2 += h3; h2 = h2*5+0x0bcaa747;
k3 *= c3; k3 = ROTL32(k3,17); k3 *= c4; h3 ^= k3;
h3 = ROTL32(h3,15); h3 += h4; h3 = h3*5+0x96cd1c35;
k4 *= c4; k4 = ROTL32(k4,18); k4 *= c1; h4 ^= k4;
h4 = ROTL32(h4,13); h4 += h1; h4 = h4*5+0x32ac3b17;
}
//----------
// tail
const uint8_t * tail = (const uint8_t*)(data + nblocks*16);
uint32_t k1 = 0;
uint32_t k2 = 0;
uint32_t k3 = 0;
uint32_t k4 = 0;
switch(len & 15)
{
case 15: k4 ^= tail[14] << 16;
case 14: k4 ^= tail[13] << 8;
case 13: k4 ^= tail[12] << 0;
k4 *= c4; k4 = ROTL32(k4,18); k4 *= c1; h4 ^= k4;
case 12: k3 ^= tail[11] << 24;
case 11: k3 ^= tail[10] << 16;
case 10: k3 ^= tail[ 9] << 8;
case 9: k3 ^= tail[ 8] << 0;
k3 *= c3; k3 = ROTL32(k3,17); k3 *= c4; h3 ^= k3;
case 8: k2 ^= tail[ 7] << 24;
case 7: k2 ^= tail[ 6] << 16;
case 6: k2 ^= tail[ 5] << 8;
case 5: k2 ^= tail[ 4] << 0;
k2 *= c2; k2 = ROTL32(k2,16); k2 *= c3; h2 ^= k2;
case 4: k1 ^= tail[ 3] << 24;
case 3: k1 ^= tail[ 2] << 16;
case 2: k1 ^= tail[ 1] << 8;
case 1: k1 ^= tail[ 0] << 0;
k1 *= c1; k1 = ROTL32(k1,15); k1 *= c2; h1 ^= k1;
};
//----------
// finalization
h1 ^= len; h2 ^= len; h3 ^= len; h4 ^= len;
h1 += h2; h1 += h3; h1 += h4;
h2 += h1; h3 += h1; h4 += h1;
h1 = fmix32(h1);
h2 = fmix32(h2);
h3 = fmix32(h3);
h4 = fmix32(h4);
h1 += h2; h1 += h3; h1 += h4;
h2 += h1; h3 += h1; h4 += h1;
((uint32_t*)out)[0] = h1;
((uint32_t*)out)[1] = h2;
((uint32_t*)out)[2] = h3;
((uint32_t*)out)[3] = h4;
}
//-----------------------------------------------------------------------------
void MurmurHash3_x64_128 ( const void * key, const int len,
const uint32_t seed, void * out )
{
const uint8_t * data = (const uint8_t*)key;
const int nblocks = len / 16;
uint64_t h1 = seed;
uint64_t h2 = seed;
const uint64_t c1 = BIG_CONSTANT(0x87c37b91114253d5);
const uint64_t c2 = BIG_CONSTANT(0x4cf5ad432745937f);
//----------
// body
const uint64_t * blocks = (const uint64_t *)(data);
for(int i = 0; i < nblocks; i++)
{
uint64_t k1 = getblock64(blocks,i*2+0);
uint64_t k2 = getblock64(blocks,i*2+1);
k1 *= c1; k1 = ROTL64(k1,31); k1 *= c2; h1 ^= k1;
h1 = ROTL64(h1,27); h1 += h2; h1 = h1*5+0x52dce729;
k2 *= c2; k2 = ROTL64(k2,33); k2 *= c1; h2 ^= k2;
h2 = ROTL64(h2,31); h2 += h1; h2 = h2*5+0x38495ab5;
}
//----------
// tail
const uint8_t * tail = (const uint8_t*)(data + nblocks*16);
uint64_t k1 = 0;
uint64_t k2 = 0;
switch(len & 15)
{
case 15: k2 ^= ((uint64_t)tail[14]) << 48;
case 14: k2 ^= ((uint64_t)tail[13]) << 40;
case 13: k2 ^= ((uint64_t)tail[12]) << 32;
case 12: k2 ^= ((uint64_t)tail[11]) << 24;
case 11: k2 ^= ((uint64_t)tail[10]) << 16;
case 10: k2 ^= ((uint64_t)tail[ 9]) << 8;
case 9: k2 ^= ((uint64_t)tail[ 8]) << 0;
k2 *= c2; k2 = ROTL64(k2,33); k2 *= c1; h2 ^= k2;
case 8: k1 ^= ((uint64_t)tail[ 7]) << 56;
case 7: k1 ^= ((uint64_t)tail[ 6]) << 48;
case 6: k1 ^= ((uint64_t)tail[ 5]) << 40;
case 5: k1 ^= ((uint64_t)tail[ 4]) << 32;
case 4: k1 ^= ((uint64_t)tail[ 3]) << 24;
case 3: k1 ^= ((uint64_t)tail[ 2]) << 16;
case 2: k1 ^= ((uint64_t)tail[ 1]) << 8;
case 1: k1 ^= ((uint64_t)tail[ 0]) << 0;
k1 *= c1; k1 = ROTL64(k1,31); k1 *= c2; h1 ^= k1;
};
//----------
// finalization
h1 ^= len; h2 ^= len;
h1 += h2;
h2 += h1;
h1 = fmix64(h1);
h2 = fmix64(h2);
h1 += h2;
h2 += h1;
((uint64_t*)out)[0] = h1;
((uint64_t*)out)[1] = h2;
}
//-----------------------------------------------------------------------------

View File

@ -0,0 +1,274 @@
#include "internal/interpolate.h"
#include "internal/error.h"
#include <twml/optim.h>
namespace twml {
template<typename T>
void mdlInfer(Tensor &output_keys, Tensor &output_vals,
const Tensor &input_keys, const Tensor &input_vals,
const Tensor &bin_ids,
const Tensor &bin_vals,
const Tensor &feature_offsets,
bool return_bin_indices) {
auto okeysData = output_keys.getData<int64_t>();
auto ovalsData = output_vals.getData<T>();
uint64_t okeysStride = output_keys.getStride(0);
uint64_t ovaluesStride = output_vals.getStride(0);
auto ikeysData = input_keys.getData<int64_t>();
auto ivalsData = input_vals.getData<T>();
uint64_t ikeysStride = input_keys.getStride(0);
uint64_t ivaluesStride = input_vals.getStride(0);
auto xsData = bin_vals.getData<T>();
auto ysData = bin_ids.getData<int64_t>();
uint64_t xsStride = bin_vals.getStride(0);
uint64_t ysStride = bin_ids.getStride(0);
auto offsetData = feature_offsets.getData<int64_t>();
uint64_t size = input_keys.getDim(0);
uint64_t total_bins = bin_ids.getNumElements();
uint64_t fsize = feature_offsets.getNumElements();
for (uint64_t i = 0; i < size; i++) {
int64_t ikey = ikeysData[i * ikeysStride] - TWML_INDEX_BASE;
T val = ivalsData[i * ivaluesStride];
if (ikey == -1) {
ovalsData[i * ovaluesStride] = val;
continue;
}
// Perform interpolation
uint64_t offset = offsetData[ikey];
uint64_t next_offset = (ikey == (int64_t)(fsize - 1)) ? total_bins : offsetData[ikey + 1];
uint64_t mainSize = next_offset - offset;
const T *lxsData = xsData + offset;
const int64_t *lysData = ysData + offset;
int64_t okey = interpolation<T, int64_t>(lxsData, xsStride,
lysData, ysStride,
val, mainSize, NEAREST, 0,
return_bin_indices);
okeysData[i * okeysStride] = okey + TWML_INDEX_BASE;
ovalsData[i * ovaluesStride] = 1;
}
}
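// Editorial sketch (hypothetical numbers, not from the source): with
// feature_offsets = {0, 3}, bin_vals = {0.1, 0.5, 0.9, 10, 20, 30} and
// bin_ids = {1, 2, 3, 4, 5, 6}, feature 0 owns bins [0, 3) and feature 1
// owns bins [3, 6). An input pair (key = 1, val = 25) therefore searches
// only {10, 20, 30} and emits the id of the bin bracketing 25, with the
// output value set to 1 as above.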
void mdlInfer(Tensor &output_keys, Tensor &output_vals,
const Tensor &input_keys, const Tensor &input_vals,
const Tensor &bin_ids,
const Tensor &bin_vals,
const Tensor &feature_offsets,
bool return_bin_indices) {
if (input_keys.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "input_keys must be a Long Tensor");
}
if (output_keys.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "output_keys must be a Long Tensor");
}
if (bin_ids.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "bin_ids must be a Long Tensor");
}
if (feature_offsets.getType() != TWML_TYPE_INT64) {
throw twml::Error(TWML_ERR_TYPE, "bin_ids must be a Long Tensor");
}
if (input_vals.getType() != bin_vals.getType()) {
throw twml::Error(TWML_ERR_TYPE,
"Data type of input_vals does not match type of bin_vals");
}
if (bin_vals.getNumDims() != 1) {
throw twml::Error(TWML_ERR_SIZE,
"bin_vals must be 1 Dimensional");
}
if (bin_ids.getNumDims() != 1) {
throw twml::Error(TWML_ERR_SIZE,
"bin_ids must be 1 Dimensional");
}
if (bin_vals.getNumElements() != bin_ids.getNumElements()) {
throw twml::Error(TWML_ERR_SIZE,
"Dimensions of bin_vals and bin_ids do not match");
}
if (feature_offsets.getStride(0) != 1) {
throw twml::Error(TWML_ERR_SIZE,
"feature_offsets must be contiguous");
}
switch (input_vals.getType()) {
case TWML_TYPE_FLOAT:
twml::mdlInfer<float>(output_keys, output_vals,
input_keys, input_vals,
bin_ids, bin_vals, feature_offsets,
return_bin_indices);
break;
case TWML_TYPE_DOUBLE:
twml::mdlInfer<double>(output_keys, output_vals,
input_keys, input_vals,
bin_ids, bin_vals, feature_offsets,
return_bin_indices);
break;
default:
throw twml::Error(TWML_ERR_TYPE,
"Unsupported datatype for mdlInfer");
}
}
const int DEFAULT_INTERPOLATION_LOWEST = 0;
/**
* @param output tensor to hold linear or nearest interpolation output.
* This function does not allocate space.
* The output tensor must already have space allocated.
* @param input input tensor; size must match output.
* input is assumed to have size [batch_size, number_of_labels].
* @param xs the bins.
* @param ys the values for the bins.
* @param mode: linear or nearest InterpolationMode.
* linear is used for isotonic calibration.
* nearest is used for MDL calibration and MDL inference.
*
* @return Returns nothing. Output is stored into the output tensor.
*
* This is used by IsotonicCalibration inference.
*/
template <typename T>
void interpolation(
Tensor output,
const Tensor input,
const Tensor xs,
const Tensor ys,
const InterpolationMode mode) {
// Sanity check: input and output should have two dims.
if (input.getNumDims() != 2 || output.getNumDims() != 2) {
throw twml::Error(TWML_ERR_TYPE,
"input and output should have 2 dimensions.");
}
// Sanity check: input and output size should match.
for (int i = 0; i < input.getNumDims(); i++) {
if (input.getDim(i) != output.getDim(i)) {
throw twml::Error(TWML_ERR_TYPE,
"input and output mismatch in size.");
}
}
// Sanity check: number of labels in input should match
// number of labels in xs / ys.
if (input.getDim(1) != xs.getDim(0)
|| input.getDim(1) != ys.getDim(0)) {
throw twml::Error(TWML_ERR_TYPE,
"input, xs, ys should have the same number of labels.");
}
const uint64_t inputStride0 = input.getStride(0);
const uint64_t inputStride1 = input.getStride(1);
const uint64_t outputStride0 = output.getStride(0);
const uint64_t outputStride1 = output.getStride(1);
const uint64_t xsStride0 = xs.getStride(0);
const uint64_t xsStride1 = xs.getStride(1);
const uint64_t ysStride0 = ys.getStride(0);
const uint64_t ysStride1 = ys.getStride(1);
const uint64_t mainSize = xs.getDim(1);
// for each value in the input matrix, compute output value by
// calling interpolation.
auto inputData = input.getData<T>();
auto outputData = output.getData<T>();
auto xsData = xs.getData<T>();
auto ysData = ys.getData<T>();
for (uint64_t i = 0; i < input.getDim(0); i++) {
for (uint64_t j = 0; j < input.getDim(1); j++) {
const T val = inputData[i * inputStride0 + j * inputStride1];
const T *lxsData = xsData + j * xsStride0;
const T *lysData = ysData + j * ysStride0;
const T res = interpolation(
lxsData, xsStride1,
lysData, ysStride1,
val,
mainSize,
mode,
DEFAULT_INTERPOLATION_LOWEST);
outputData[i * outputStride0 + j * outputStride1] = res;
}
}
}
void linearInterpolation(
Tensor output,
const Tensor input,
const Tensor xs,
const Tensor ys) {
switch (input.getType()) {
case TWML_TYPE_FLOAT:
twml::interpolation<float>(output, input, xs, ys, LINEAR);
break;
case TWML_TYPE_DOUBLE:
twml::interpolation<double>(output, input, xs, ys, LINEAR);
break;
default:
throw twml::Error(TWML_ERR_TYPE,
"Unsupported datatype for linearInterpolation.");
}
}
void nearestInterpolation(
Tensor output,
const Tensor input,
const Tensor xs,
const Tensor ys) {
switch (input.getType()) {
case TWML_TYPE_FLOAT:
twml::interpolation<float>(output, input, xs, ys, NEAREST);
break;
case TWML_TYPE_DOUBLE:
twml::interpolation<double>(output, input, xs, ys, NEAREST);
break;
default:
throw twml::Error(TWML_ERR_TYPE,
"Unsupported datatype for nearestInterpolation.");
}
}
} // namespace twml
twml_err twml_optim_mdl_infer(twml_tensor output_keys,
twml_tensor output_vals,
const twml_tensor input_keys,
const twml_tensor input_vals,
const twml_tensor bin_ids,
const twml_tensor bin_vals,
const twml_tensor feature_offsets,
bool return_bin_indices) {
HANDLE_EXCEPTIONS(
using namespace twml;
mdlInfer(*getTensor(output_keys),
*getTensor(output_vals),
*getConstTensor(input_keys),
*getConstTensor(input_vals),
*getConstTensor(bin_ids),
*getConstTensor(bin_vals),
*getConstTensor(feature_offsets),
return_bin_indices););
return TWML_ERR_NONE;
}
twml_err twml_optim_nearest_interpolation(
twml_tensor output,
const twml_tensor input,
const twml_tensor xs,
const twml_tensor ys) {
HANDLE_EXCEPTIONS(
using namespace twml;
nearestInterpolation(*getTensor(output),
*getConstTensor(input),
*getConstTensor(xs),
*getConstTensor(ys)););
return TWML_ERR_NONE;
}

View File

@ -0,0 +1,53 @@
#include "internal/utf_converter.h"
ssize_t utf8_to_utf16(const uint8_t *in, uint64_t in_len, uint16_t *out, uint64_t max_out) {
uint64_t num_out = 0;
uint64_t num_in = 0;
while (num_in < in_len) {
uint32_t uni;
uint64_t todo;
uint8_t ch = in[num_in];
num_in++;
if (ch <= 0x7F) {
uni = ch;
todo = 0;
} else if (ch <= 0xBF) {
return -1;
} else if (ch <= 0xDF) {
uni = ch & 0x1F;
todo = 1;
} else if (ch <= 0xEF) {
uni = ch & 0x0F;
todo = 2;
} else if (ch <= 0xF7) {
uni = ch & 0x07;
todo = 3;
} else {
return -1;
}
for (uint64_t j = 0; j < todo; ++j) {
if (num_in == in_len) return -1;
uint8_t ch = in[num_in];
num_in++;
if (ch < 0x80 || ch > 0xBF) return -1;
uni <<= 6;
uni += ch & 0x3F;
}
if (uni >= 0xD800 && uni <= 0xDFFF) return -1;
if (uni > 0x10FFFF) return -1;
if (uni <= 0xFFFF) {
if (num_out == max_out) return -1;
out[num_out] = uni;
num_out++;
} else {
uni -= 0x10000;
if (num_out + 1 >= max_out) return -1;
out[num_out] = (uni >> 10) + 0xD800;
out[num_out + 1] = (uni & 0x3FF) + 0xDC00;
num_out += 2;
}
}
if (num_out == max_out) return -1;
out[num_out] = 0;
return num_out;
}
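/* Editorial sketch (hypothetical usage, not part of the file): a code point
   above U+FFFF becomes a surrogate pair; U+1F600 (F0 9F 98 80 in UTF-8)
   decodes to D83D DE00. */
#ifdef UTF_CONVERTER_DEMO
#include <stdio.h>
int main(void) {
  const uint8_t in[] = {0xF0, 0x9F, 0x98, 0x80};
  uint16_t out[4];
  ssize_t n = utf8_to_utf16(in, sizeof(in), out, 4);
  printf("%zd: %04X %04X\n", n, out[0], out[1]); /* 2: D83D DE00 */
  return 0;
}
#endif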

View File

@ -0,0 +1,79 @@
set(CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR})
cmake_minimum_required(VERSION 2.8 FATAL_ERROR)
cmake_policy(VERSION 2.8)
set(CMAKE_MACOSX_RPATH 1)
file(GLOB_RECURSE sources *.cpp)
set (CMAKE_CXX_FLAGS "-Wall -std=c++11 -fno-stack-protector ${CMAKE_CXX_FLAGS}")
execute_process(
COMMAND
$ENV{LIBTWML_HOME}/src/ops/scripts/get_inc.sh
RESULT_VARIABLE
TF_RES
OUTPUT_VARIABLE
TF_INC)
if (NOT (${TF_RES} EQUAL "0"))
message(${TF_RES})
message(FATAL_ERROR "Failed to get include path for tensorflow")
endif()
execute_process(
COMMAND
$ENV{LIBTWML_HOME}/src/ops/scripts/get_lib.sh
RESULT_VARIABLE
TF_RES
OUTPUT_VARIABLE
TF_LIB)
if (NOT (${TF_RES} EQUAL "0"))
message(${TF_RES})
message(FATAL_ERROR "Failed to get lib path for tensorflow")
endif()
find_path(
TWML_INC
NAMES "twml.h"
PATHS $ENV{LIBTWML_HOME}/include)
add_library(twml_tf MODULE ${sources})
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "$ENV{LIBTWML_HOME}/cmake")
if (UNIX)
if (APPLE)
set (CMAKE_CXX_FLAGS "-undefined dynamic_lookup -stdlib=libc++ ${CMAKE_CXX_FLAGS}")
# -Wl,-all_load ensures symbols not used by twml_tf are also included.
# -Wl,-noall_load limits the scope of the previous flag.
set (LINK_ALL_OPTION "-Wl,-all_load")
set (NO_LINK_ALL_OPTION "-Wl,-noall_load")
set(TF_FRAMEWORK_LIB ${TF_LIB}/libtensorflow_framework.1.dylib)
else()
# -Wl,--whole-archive ensures symbols not used by twml_tf are also included.
# -Wl,--no-whole-archive limits the scope of the previous flag.
set (LINK_ALL_OPTION "-Wl,--whole-archive")
set (NO_LINK_ALL_OPTION "-Wl,--no-whole-archive")
set(TF_FRAMEWORK_LIB ${TF_LIB}/libtensorflow_framework.so.1)
endif()
endif()
target_include_directories(
twml_tf
PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}
${TWML_INC}
# TF_INC needs to be the last to avoid some weird white-spacing issues with generated Makefile.
${TF_INC} # Needed because of some header files auto-generated during build time.
${TF_INC}/external/nsync/public/
)
target_link_libraries(twml_tf
PUBLIC
# Since we are using twml_tf as the "one" dynamic library,
# we want it to have the C function symbols needed for other functions as well.
${LINK_ALL_OPTION} twml ${NO_LINK_ALL_OPTION}
${TF_FRAMEWORK_LIB}
)

View File

@ -0,0 +1,92 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
using namespace tensorflow;
REGISTER_OP("Add1")
.Attr("T: {float, double, int32}")
.Input("input1: T")
.Output("output: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->input(0));
return Status::OK();
});
template<typename T>
class Add1 : public OpKernel {
public:
explicit Add1(OpKernelConstruction* context) : OpKernel(context) {}
void Compute(OpKernelContext* context) override {
// Grab the input tensor
const Tensor& input_tensor = context->input(0);
auto input = input_tensor.flat<T>();
// Create an output tensor
Tensor* output_tensor = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, input_tensor.shape(),
&output_tensor));
auto output_flat = output_tensor->flat<T>();
// Add 1 to input and assign to output
const int N = input.size();
for (int i = 0; i < N; i++) {
output_flat(i) = input(i) + 1;
}
}
};
REGISTER_OP("Add1Grad")
.Attr("T: {float, double, int32}")
.Input("grad_output: T")
.Output("grad_input: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->input(0));
return Status::OK();
});
template<typename T>
class Add1Grad : public OpKernel {
public:
explicit Add1Grad(OpKernelConstruction* context) : OpKernel(context) {}
void Compute(OpKernelContext* context) override {
// Grab the input tensor
const Tensor& grad_output_tensor = context->input(0);
auto grad_output = grad_output_tensor.flat<T>();
// Create a grad_input tensor
Tensor* grad_input_tensor = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, grad_output_tensor.shape(),
&grad_input_tensor));
auto grad_input_flat = grad_input_tensor->flat<T>();
// Copy from grad_output to grad_input
const int N = grad_output.size();
for (int i = 0; i < N; i++) {
grad_input_flat(i) = grad_output(i);
}
}
};
#define REGISTER(Type) \
\
REGISTER_KERNEL_BUILDER( \
Name("Add1") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
Add1<Type>); \
\
REGISTER_KERNEL_BUILDER( \
Name("Add1Grad") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
Add1Grad<Type>); \
REGISTER(float);
REGISTER(double);
REGISTER(int32);

View File

@ -0,0 +1,183 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
#include "resource_utils.h"
REGISTER_OP("DecodeAndHashBatchPredictionRequest")
.Input("input_bytes: uint8")
.Attr("keep_features: list(int)")
.Attr("keep_codes: list(int)")
.Attr("decode_mode: int = 0")
.Output("hashed_data_record_handle: resource")
.SetShapeFn(shape_inference::ScalarShape)
.Doc(R"doc(
A tensorflow OP that decodes a batch prediction request and creates a handle to the batch of hashed data records.
Attr
keep_features: a list of int ids to keep.
keep_codes: their corresponding code.
decode_mode: integer, indicates which decoding method to use. Let a sparse continuous
have a feature_name and a dict of {name: value}. 0 indicates feature_ids are computed
as hash(name). 1 indicates feature_ids are computed as hash(feature_name, name)
shared_name: name used by the resource handle inside the resource manager.
container: name used by the container of the resources.
shared_name and container are required when inheriting from ResourceOpKernel.
Input
input_bytes: Input tensor containing the serialized batch of BatchPredictionRequest.
Outputs
hashed_data_record_handle: A resource handle to the HashedDataRecordResource containing batch of HashedDataRecords.
)doc");
class DecodeAndHashBatchPredictionRequest : public OpKernel {
public:
explicit DecodeAndHashBatchPredictionRequest(OpKernelConstruction* context)
: OpKernel(context) {
std::vector<int64> keep_features;
std::vector<int64> keep_codes;
OP_REQUIRES_OK(context, context->GetAttr("keep_features", &keep_features));
OP_REQUIRES_OK(context, context->GetAttr("keep_codes", &keep_codes));
OP_REQUIRES_OK(context, context->GetAttr("decode_mode", &m_decode_mode));
OP_REQUIRES(context, keep_features.size() == keep_codes.size(),
errors::InvalidArgument("keep keys and values must have same size."));
#ifdef USE_DENSE_HASH
m_keep_map.set_empty_key(0);
#endif // USE_DENSE_HASH
for (uint64_t i = 0; i < keep_features.size(); i++) {
m_keep_map[keep_features[i]] = keep_codes[i];
}
}
private:
twml::Map<int64_t, int64_t> m_keep_map;
int64 m_decode_mode;
void Compute(OpKernelContext* context) override {
try {
HashedDataRecordResource *resource = nullptr;
OP_REQUIRES_OK(context, makeResourceHandle<HashedDataRecordResource>(context, 0, &resource));
// Store the input bytes in the resource so they aren't freed before the resource.
// This is necessary because the tensor contents are not copied.
resource->input = context->input(0);
const uint8_t *input_bytes = resource->input.flat<uint8>().data();
twml::HashedDataRecordReader reader;
twml::HashedBatchPredictionRequest bpr;
reader.setKeepMap(&m_keep_map);
reader.setBuffer(input_bytes);
reader.setDecodeMode(m_decode_mode);
bpr.decode(reader);
resource->common = std::move(bpr.common());
resource->records = std::move(bpr.requests());
// Each datarecord has a copy of common features.
// Initialize total_size by common_size * num_records
int64 common_size = static_cast<int64>(resource->common.totalSize());
int64 num_records = static_cast<int64>(resource->records.size());
int64 total_size = common_size * num_records;
for (const auto &record : resource->records) {
total_size += static_cast<int64>(record.totalSize());
}
resource->total_size = total_size;
resource->num_labels = 0;
resource->num_weights = 0;
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_KERNEL_BUILDER(
Name("DecodeAndHashBatchPredictionRequest").Device(DEVICE_CPU),
DecodeAndHashBatchPredictionRequest);
REGISTER_OP("DecodeBatchPredictionRequest")
.Input("input_bytes: uint8")
.Attr("keep_features: list(int)")
.Attr("keep_codes: list(int)")
.Output("data_record_handle: resource")
.SetShapeFn(shape_inference::ScalarShape)
.Doc(R"doc(
A tensorflow OP that decodes a batch prediction request and creates a handle to the batch of data records.
Attr
keep_features: a list of int ids to keep.
keep_codes: their corresponding code.
shared_name: name used by the resource handle inside the resource manager.
container: name used by the container of the resources.
shared_name and container are required when inheriting from ResourceOpKernel.
Input
input_bytes: Input tensor containing the serialized batch of BatchPredictionRequest.
Outputs
data_record_handle: A resource handle to the DataRecordResource containing batch of DataRecords.
)doc");
class DecodeBatchPredictionRequest : public OpKernel {
public:
explicit DecodeBatchPredictionRequest(OpKernelConstruction* context)
: OpKernel(context) {
std::vector<int64> keep_features;
std::vector<int64> keep_codes;
OP_REQUIRES_OK(context, context->GetAttr("keep_features", &keep_features));
OP_REQUIRES_OK(context, context->GetAttr("keep_codes", &keep_codes));
OP_REQUIRES(context, keep_features.size() == keep_codes.size(),
errors::InvalidArgument("keep keys and values must have same size."));
#ifdef USE_DENSE_HASH
m_keep_map.set_empty_key(0);
#endif // USE_DENSE_HASH
for (uint64_t i = 0; i < keep_features.size(); i++) {
m_keep_map[keep_features[i]] = keep_codes[i];
}
}
private:
twml::Map<int64_t, int64_t> m_keep_map;
void Compute(OpKernelContext* context) override {
try {
DataRecordResource *resource = nullptr;
OP_REQUIRES_OK(context, makeResourceHandle<DataRecordResource>(context, 0, &resource));
// Store the input bytes in the resource so they aren't freed before the resource.
// This is necessary because the tensor contents are not copied.
resource->input = context->input(0);
const uint8_t *input_bytes = resource->input.flat<uint8>().data();
twml::DataRecordReader reader;
twml::BatchPredictionRequest bpr;
reader.setKeepMap(&m_keep_map);
reader.setBuffer(input_bytes);
bpr.decode(reader);
resource->common = std::move(bpr.common());
resource->records = std::move(bpr.requests());
resource->num_weights = 0;
resource->num_labels = 0;
resource->keep_map = &m_keep_map;
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_KERNEL_BUILDER(
Name("DecodeBatchPredictionRequest").Device(DEVICE_CPU),
DecodeBatchPredictionRequest);

View File

@ -0,0 +1,224 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <cstdint>
#include <twml.h>
#include "tensorflow_utils.h"
#include "resource_utils.h"
#include <iterator>
template<typename InputType, typename RecordType>
class DecodeBatchPredictionRequestKernel : public OpKernel {
public:
explicit DecodeBatchPredictionRequestKernel(OpKernelConstruction* context)
: OpKernel(context) {
std::vector<int64> keep_features;
std::vector<int64> keep_codes;
std::vector<int64> label_features;
std::vector<int64> weight_features;
OP_REQUIRES_OK(context, context->GetAttr("keep_features", &keep_features));
OP_REQUIRES_OK(context, context->GetAttr("keep_codes", &keep_codes));
OP_REQUIRES_OK(context, context->GetAttr("label_features", &label_features));
OP_REQUIRES_OK(context, context->GetAttr("weight_features", &weight_features));
OP_REQUIRES_OK(context, context->GetAttr("decode_mode", &m_decode_mode));
OP_REQUIRES(context, keep_features.size() == keep_codes.size(),
errors::InvalidArgument("keep keys and values must have same size."));
#ifdef USE_DENSE_HASH
m_keep_map.set_empty_key(0);
m_labels_map.set_empty_key(0);
m_weights_map.set_empty_key(0);
#endif // USE_DENSE_HASH
for (uint64_t i = 0; i < keep_features.size(); i++) {
m_keep_map[keep_features[i]] = keep_codes[i];
}
for (uint64_t i = 0; i < label_features.size(); i++) {
m_labels_map[label_features[i]] = i;
}
for (uint64_t i = 0; i < weight_features.size(); i++) {
m_weights_map[weight_features[i]] = i;
}
}
protected:
twml::Map<int64_t, int64_t> m_keep_map;
twml::Map<int64_t, int64_t> m_labels_map;
twml::Map<int64_t, int64_t> m_weights_map;
int64 m_decode_mode;
template<typename ResourceType>
void Decode(OpKernelContext* context, ResourceType *resource) {
resource->input = context->input(0);
const uint8_t *input_bytes = getInputBytes<InputType>(resource->input, 0);
int num_labels = static_cast<int>(m_labels_map.size());
int num_weights = static_cast<int>(m_weights_map.size());
typename RecordType::Reader reader;
twml::GenericBatchPredictionRequest<RecordType> bpr(num_labels, num_weights);
reader.setKeepMap(&m_keep_map);
reader.setLabelsMap(&m_labels_map);
reader.setBuffer(input_bytes);
reader.setDecodeMode(m_decode_mode);
// Do not set weight map if it is empty. This will take a faster path.
if (num_weights != 0) {
reader.setWeightsMap(&m_weights_map);
}
bpr.decode(reader);
resource->common = std::move(bpr.common());
resource->records = std::move(bpr.requests());
resource->num_labels = num_labels;
resource->num_weights = num_weights;
}
};
REGISTER_OP("DecodeAndHashBatchPredictionRequestV2")
.Attr("InputType: {uint8, string}")
.Input("input_bytes: InputType")
.Attr("keep_features: list(int)")
.Attr("keep_codes: list(int)")
.Attr("label_features: list(int)")
.Attr("weight_features: list(int) = []")
.Attr("decode_mode: int = 0")
.Output("hashed_data_record_handle: resource")
.SetShapeFn(shape_inference::ScalarShape)
.Doc(R"doc(
A tensorflow OP that decodes a list/batch of data records and creates a handle to the batch of hashed data records.
Compared to DecodeAndHashBatchPredictionRequest, DecodeAndHashBatchPredictionRequestV2 is used for training instead
of serving. Thus label_features and, optionally, weight_features must be passed, and labels and weights are extracted
into the output.
DecodeAndHashBatchPredictionRequestV2 controls which DataRecords are processed together in a batch during training.
For instance, all instances for a query can be put in the same batch when training a ranking model.
Note that this OP was added separately to avoid breaking the API of DecodeAndHashBatchPredictionRequest.
Merging the two ops into a single .cpp file would require further discussion in a future API revision.
Attr
keep_features: a list of int ids to keep.
keep_codes: their corresponding code.
label_features: list of feature ids representing the labels.
weight_features: list of feature ids representing the weights. Defaults to empty list.
decode_mode: integer, indicates which decoding method to use. Let a sparse continuous
have a feature_name and a dict of {name: value}. 0 indicates feature_ids are computed
as hash(name). 1 indicates feature_ids are computed as hash(feature_name, name)
Input
input_bytes: Input tensor containing the serialized batch of BatchPredictionRequest.
Outputs
hashed_data_record_handle: A resource handle to the HashedDataRecordResource containing batch of HashedDataRecords.
)doc");
template<typename InputType>
class DecodeAndHashBatchPredictionRequestV2 :
public DecodeBatchPredictionRequestKernel<InputType, twml::HashedDataRecord> {
public:
DecodeAndHashBatchPredictionRequestV2(OpKernelConstruction *context)
: DecodeBatchPredictionRequestKernel<InputType, twml::HashedDataRecord>(context) {
}
private:
void Compute(OpKernelContext* context) override {
try {
HashedDataRecordResource *resource = nullptr;
OP_REQUIRES_OK(
context,
makeResourceHandle<HashedDataRecordResource>(context, 0, &resource));
this->Decode(context, resource);
// Each datarecord has a copy of common features.
// Initialize total_size by common_size * num_records
int64 common_size = static_cast<int64>(resource->common.totalSize());
int64 num_records = static_cast<int64>(resource->records.size());
int64 total_size = common_size * num_records;
for (const auto &record : resource->records) {
total_size += static_cast<int64>(record.totalSize());
}
resource->total_size = total_size;
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_OP("DecodeBatchPredictionRequestV2")
.Attr("InputType: {uint8, string}")
.Input("input_bytes: InputType")
.Attr("keep_features: list(int)")
.Attr("keep_codes: list(int)")
.Attr("label_features: list(int)")
.Attr("weight_features: list(int) = []")
.Attr("decode_mode: int = 0")
.Output("data_record_handle: resource")
.SetShapeFn(shape_inference::ScalarShape)
.Doc(R"doc(
A tensorflow OP that decodes a batch prediction request and creates a handle to the batch of data records.
Attr
keep_features: a list of int ids to keep.
keep_codes: their corresponding code.
shared_name: name used by the resource handle inside the resource manager.
label_features: list of feature ids representing the labels.
weight_features: list of feature ids representing the weights. Defaults to empty list.
decode_mode: reserved, do not use.
Input
input_bytes: Input tensor containing the serialized batch of BatchPredictionRequest.
Outputs
data_record_handle: A resource handle to the DataRecordResource containing batch of DataRecords.
)doc");
template<typename InputType>
class DecodeBatchPredictionRequestV2 :
public DecodeBatchPredictionRequestKernel<InputType, twml::DataRecord> {
public:
DecodeBatchPredictionRequestV2(OpKernelConstruction *context)
: DecodeBatchPredictionRequestKernel<InputType, twml::DataRecord>(context) {
}
private:
void Compute(OpKernelContext* context) override {
try {
DataRecordResource *resource = nullptr;
OP_REQUIRES_OK(
context,
makeResourceHandle<DataRecordResource>(context, 0, &resource));
this->Decode(context, resource);
resource->keep_map = &(this->m_keep_map);
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
#define REGISTER_DECODE_OPS(InputType) \
REGISTER_KERNEL_BUILDER( \
Name("DecodeAndHashBatchPredictionRequestV2") \
.Device(DEVICE_CPU) \
.TypeConstraint<InputType>("InputType"), \
DecodeAndHashBatchPredictionRequestV2<InputType>); \
REGISTER_KERNEL_BUILDER( \
Name("DecodeBatchPredictionRequestV2") \
.Device(DEVICE_CPU) \
.TypeConstraint<InputType>("InputType"), \
DecodeBatchPredictionRequestV2<InputType>); \
REGISTER_DECODE_OPS(uint8)
REGISTER_DECODE_OPS(string)

View File

@ -0,0 +1,82 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
using namespace tensorflow;
REGISTER_OP("BatchPredictionResponseWriter")
.Attr("T: {float, double}")
.Input("keys: int64")
.Input("values: T")
.Output("result: uint8")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that packages keys and values into a BatchPredictionResponse.
values: input feature value. (float/double)
keys: feature ids from the original BatchPredictionRequest. (int64)
Outputs
bytes: output BatchPredictionResponse serialized using Thrift into a uint8 tensor.
)doc");
template<typename T>
class BatchPredictionResponseWriter : public OpKernel {
public:
explicit BatchPredictionResponseWriter(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
const Tensor& keys = context->input(0);
const Tensor& values = context->input(1);
try {
// Ensure the inner dimension matches.
if (values.dim_size(values.dims() - 1) != keys.dim_size(keys.dims() - 1)) {
throw std::runtime_error("The sizes of keys and values need to match");
}
// set inputs as twml::Tensor
const twml::Tensor in_keys_ = TFTensor_to_twml_tensor(keys);
const twml::Tensor in_values_ = TFTensor_to_twml_tensor(values);
// no dense tensors in this op
const twml::Tensor dummy_dense_keys_;
const std::vector<twml::RawTensor> dummy_dense_values_;
// call constructor BatchPredictionResponse
twml::BatchPredictionResponse tempResult(
in_keys_, in_values_, dummy_dense_keys_, dummy_dense_values_);
// determine the length of the result
int len = tempResult.encodedSize();
TensorShape result_shape = {1, len};
// Create an output tensor; its size is determined by the content of the input.
Tensor* result = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, result_shape,
&result));
twml::Tensor out_result = TFTensor_to_twml_tensor(*result);
// Call writer of BatchPredictionResponse
tempResult.write(out_result);
} catch(const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
#define REGISTER(Type) \
\
REGISTER_KERNEL_BUILDER( \
Name("BatchPredictionResponseWriter") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
BatchPredictionResponseWriter<Type>);
REGISTER(float);
REGISTER(double);
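The kernel above follows a two-pass "measure, then write" pattern: encodedSize() computes the serialized length without copying, the output tensor is allocated to exactly {1, len}, and only then is the response written into the buffer. A minimal standalone sketch of that pattern, with a hypothetical ToySerializer standing in for twml::BatchPredictionResponse:

#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical stand-in for twml::BatchPredictionResponse (illustration only).
struct ToySerializer {
  std::string payload;
  size_t encodedSize() const { return payload.size(); }
  void write(uint8_t *dst) const { std::memcpy(dst, payload.data(), payload.size()); }
};

int main() {
  ToySerializer response{"serialized-bytes"};
  const size_t len = response.encodedSize();  // pass 1: measure, no copy
  std::vector<uint8_t> out(len);              // mirrors allocate_output(0, {1, len})
  response.write(out.data());                 // pass 2: write into the buffer
  return out.empty() ? 1 : 0;
}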

View File

@ -0,0 +1,81 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
using namespace tensorflow;
REGISTER_OP("BatchPredictionTensorResponseWriter")
.Attr("T: list({string, int32, int64, float, double})")
.Input("keys: int64")
.Input("values: T")
.Output("result: uint8")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that packages keys and dense tensors into a BatchPredictionResponse.
values: list of tensors
keys: feature ids from the original BatchPredictionRequest. (int64)
Outputs
result: output BatchPredictionResponse serialized using Thrift into a uint8 tensor.
)doc");
class BatchPredictionTensorResponseWriter : public OpKernel {
public:
explicit BatchPredictionTensorResponseWriter(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
const Tensor& keys = context->input(0);
try {
// set keys as twml::Tensor
const twml::Tensor in_keys_ = TFTensor_to_twml_tensor(keys);
// check sizes
uint64_t num_keys = in_keys_.getNumElements();
uint64_t num_values = context->num_inputs() - 1;
OP_REQUIRES(context, num_values % num_keys == 0,
errors::InvalidArgument("Number of dense tensors not multiple of dense keys"));
// set dense tensor values
std::vector<twml::RawTensor> in_values_;
for (int i = 1; i < context->num_inputs(); i++) {
in_values_.push_back(TFTensor_to_twml_raw_tensor(context->input(i)));
}
// no continuous predictions in this op, only tensors
const twml::Tensor dummy_cont_keys_;
const twml::Tensor dummy_cont_values_;
// call constructor BatchPredictionResponse
twml::BatchPredictionResponse tempResult(
dummy_cont_keys_, dummy_cont_values_, in_keys_, in_values_);
// determine the length of the result
int len = tempResult.encodedSize();
TensorShape result_shape = {1, len};
// Create an output tensor; its size is determined by the content of the input.
Tensor* result = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, result_shape,
&result));
twml::Tensor out_result = TFTensor_to_twml_tensor(*result);
// Call writer of BatchPredictionResponse
tempResult.write(out_result);
} catch(const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_KERNEL_BUILDER(
Name("BatchPredictionTensorResponseWriter").Device(DEVICE_CPU),
BatchPredictionTensorResponseWriter);

View File

@ -0,0 +1,330 @@
/* Copyright 2015 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
// TWML modified to optimize binary features:
// - Sparse tensor values are assumed to be binary, so only add operation is done
// rather than mul-add;
// - In house version of vectorization is used instead of Eigen;
// - Enable sharding and multithreading.
#define EIGEN_USE_THREADS
#include "binary_sparse_dense_matmul.h"
#include "binary_sparse_dense_matmul_impl.h"
#include "tensorflow/core/framework/bounds_check.h"
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/common_shape_fns.h"
#include "tensorflow/core/framework/shape_inference.h"
namespace tensorflow {
namespace shape_inference {
// TODO: The `a_values` input is supposed to be all ones.
// Users should not call this op directly but should use it via the `sparse_op` python library.
// To stay consistent with the original op, the signature remains the same for now;
// we will think of a better way to constrain correct use of this op.
// CX-18174
REGISTER_OP("BinarySparseTensorDenseMatMul")
.Input("a_indices: Tindices")
.Input("a_values: T")
.Input("a_shape: int64")
.Input("b: T")
.Output("product: T")
.Attr("T: type")
.Attr("Tindices: {int32,int64} = DT_INT64")
.Attr("adjoint_a: bool = false")
.Attr("adjoint_b: bool = false")
.SetShapeFn([](InferenceContext* c) {
DimensionHandle unused_dim;
ShapeHandle unused;
ShapeHandle b;
ShapeHandle a_shape;
TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 2, &unused)); // a_indices
TF_RETURN_IF_ERROR(c->WithRank(c->input(1), 1, &unused)); // a_values
TF_RETURN_IF_ERROR(c->MakeShapeFromShapeTensor(2, &a_shape));
TF_RETURN_IF_ERROR(c->WithRank(a_shape, 2, &a_shape));
TF_RETURN_IF_ERROR(c->WithRank(c->input(3), 2, &b));
bool adjoint_a;
bool adjoint_b;
TF_RETURN_IF_ERROR(c->GetAttr("adjoint_a", &adjoint_a));
TF_RETURN_IF_ERROR(c->GetAttr("adjoint_b", &adjoint_b));
DimensionHandle output_right = c->Dim(b, adjoint_b ? 0 : 1);
DimensionHandle output_left = c->Dim(a_shape, adjoint_a ? 1 : 0);
DimensionHandle inner_left = c->Dim(a_shape, adjoint_a ? 0 : 1);
DimensionHandle inner_right = c->Dim(b, adjoint_b ? 1 : 0);
TF_RETURN_IF_ERROR(c->Merge(inner_left, inner_right, &unused_dim));
c->set_output(0, c->Matrix(output_left, output_right));
return Status::OK();
});
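// Worked example (illustration only): for a_shape = [4, 3], b = [3, 5] and
// adjoint_a == adjoint_b == false, the shape function above picks
// output_left = a_shape[0] = 4, inner_left = a_shape[1] = 3,
// inner_right = b[0] = 3, output_right = b[1] = 5; the inner dims merge
// (3 == 3) and the product shape is reported as [4, 5].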
} // namespace shape_inference
typedef Eigen::ThreadPoolDevice CPUDevice;
template <typename Device, typename T, typename Tindices>
class BinarySparseTensorDenseMatMulOp : public OpKernel {
public:
explicit BinarySparseTensorDenseMatMulOp(OpKernelConstruction* ctx)
: OpKernel(ctx) {
OP_REQUIRES_OK(ctx, ctx->GetAttr("adjoint_a", &adjoint_a_));
OP_REQUIRES_OK(ctx, ctx->GetAttr("adjoint_b", &adjoint_b_));
}
void Compute(OpKernelContext* ctx) override {
const Tensor* a_indices;
const Tensor* a_values;
const Tensor* a_shape;
const Tensor* b;
OP_REQUIRES_OK(ctx, ctx->input("a_indices", &a_indices));
OP_REQUIRES_OK(ctx, ctx->input("a_values", &a_values));
OP_REQUIRES_OK(ctx, ctx->input("a_shape", &a_shape));
OP_REQUIRES_OK(ctx, ctx->input("b", &b));
// Check that the dimensions of the two matrices are valid.
OP_REQUIRES(ctx, TensorShapeUtils::IsMatrix(b->shape()),
errors::InvalidArgument("Tensor 'b' is not a matrix"));
OP_REQUIRES(ctx, TensorShapeUtils::IsVector(a_shape->shape()),
errors::InvalidArgument("Tensor 'a_shape' is not a vector"));
OP_REQUIRES(
ctx, a_shape->NumElements() == 2,
errors::InvalidArgument("Tensor 'a_shape' must have 2 elements"));
OP_REQUIRES(ctx, TensorShapeUtils::IsVector(a_values->shape()),
errors::InvalidArgument("Tensor 'a_values' is not a vector"));
OP_REQUIRES(ctx, TensorShapeUtils::IsMatrix(a_indices->shape()),
errors::InvalidArgument("Tensor 'a_indices' is not a matrix"));
const int64 nnz = a_indices->shape().dim_size(0);
OP_REQUIRES(ctx, nnz == a_values->NumElements(),
errors::InvalidArgument("Number of rows of a_indices does not "
"match number of entries in a_values"));
OP_REQUIRES(
ctx, a_indices->shape().dim_size(1) == a_shape->NumElements(),
errors::InvalidArgument("Number of columns of a_indices does not match "
"number of entries in a_shape"));
auto a_shape_t = a_shape->vec<int64>();
const int64 outer_left = (adjoint_a_) ? a_shape_t(1) : a_shape_t(0);
const int64 outer_right =
(adjoint_b_) ? b->shape().dim_size(0) : b->shape().dim_size(1);
const int64 inner_left = (adjoint_a_) ? a_shape_t(0) : a_shape_t(1);
const int64 inner_right =
(adjoint_b_) ? b->shape().dim_size(1) : b->shape().dim_size(0);
OP_REQUIRES(
ctx, inner_right == inner_left,
errors::InvalidArgument(
"Cannot multiply A and B because inner dimension does not match: ",
inner_left, " vs. ", inner_right,
". Did you forget a transpose? "
"Dimensions of A: [",
a_shape_t(0), ", ", a_shape_t(1),
"). Dimensions of B: ", b->shape().DebugString()));
TensorShape out_shape({outer_left, outer_right});
Tensor* out = nullptr;
OP_REQUIRES_OK(ctx, ctx->allocate_output(0, out_shape, &out));
if (out->NumElements() == 0) {
// If a has shape [0, x] or b has shape [x, 0], the output shape
// is a 0-element matrix, so there is nothing to do.
return;
}
if (a_values->NumElements() == 0 || b->NumElements() == 0) {
// If a has shape [x, 0] and b has shape [0, y], the
// output shape is [x, y] where x and y are non-zero, so we fill
// the output with zeros.
out->flat<T>().device(ctx->eigen_device<Device>()) =
out->flat<T>().constant(T(0));
return;
}
#define MAYBE_ADJOINT(ADJ_A, ADJ_B) \
if (adjoint_a_ == ADJ_A && adjoint_b_ == ADJ_B) { \
Status functor_status = functor::SparseTensorDenseMatMulFunctor< \
Device, T, Tindices, ADJ_A, \
ADJ_B>::Compute(ctx, a_indices, a_values, a_shape, b, out); \
OP_REQUIRES_OK(ctx, functor_status); \
}
MAYBE_ADJOINT(false, false);
MAYBE_ADJOINT(false, true);
MAYBE_ADJOINT(true, false);
MAYBE_ADJOINT(true, true);
#undef MAYBE_ADJOINT
}
private:
bool adjoint_a_;
bool adjoint_b_;
};
#define REGISTER_CPU(TypeT, TypeIndex) \
REGISTER_KERNEL_BUILDER( \
Name("BinarySparseTensorDenseMatMul") \
.Device(DEVICE_CPU) \
.TypeConstraint<TypeT>("T") \
.TypeConstraint<TypeIndex>("Tindices") \
.HostMemory("a_shape"), \
BinarySparseTensorDenseMatMulOp<CPUDevice, TypeT, TypeIndex>);
#define REGISTER_KERNELS_CPU(T) \
REGISTER_CPU(T, int64); \
REGISTER_CPU(T, int32)
REGISTER_KERNELS_CPU(float);
REGISTER_KERNELS_CPU(double);
REGISTER_KERNELS_CPU(int32);
REGISTER_KERNELS_CPU(complex64);
REGISTER_KERNELS_CPU(complex128);
namespace functor {
namespace {
Status KOutOfBoundsError(int64 k, std::size_t i, int rhs_index_a,
std::size_t lhs_right) {
return errors::InvalidArgument("k (", k, ") from index[", i, ",", rhs_index_a,
"] out of bounds (>=", lhs_right, ")");
}
Status MOutOfBoundsError(int64 m, std::size_t i, int lhs_index_a,
int64 out_dim0) {
return errors::InvalidArgument("m (", m, ") from index[", i, ",", lhs_index_a,
"] out of bounds (>=", out_dim0, ")");
}
} // namespace
// The general functor just borrows the code from tf, except that an add is
// computed instead of a mul-add.
template <typename T, typename Tindices, bool ADJ_A, bool ADJ_B>
struct SparseTensorDenseMatMulFunctor<CPUDevice, T, Tindices, ADJ_A, ADJ_B> {
// Vectorize certain operations above this size.
static const std::size_t kNumVectorize = 32;
static Status Compute(OpKernelContext* ctx,
const Tensor *a_indices,
const Tensor *a_values,
const Tensor *a_shape,
const Tensor *b,
Tensor *out) {
return EigenCompute(ctx->eigen_device<CPUDevice>(), out->matrix<T>(),
a_indices->matrix<Tindices>(), a_values->vec<T>(),
b->matrix<T>());
}
static Status EigenCompute(const CPUDevice& d, typename TTypes<T>::Matrix out,
typename TTypes<Tindices>::ConstMatrix a_indices,
typename TTypes<T>::ConstVec a_values,
typename TTypes<T>::ConstMatrix b) {
const std::size_t nnz = a_values.size();
const std::size_t rhs_right = (ADJ_B ? b.dimension(0) : b.dimension(1));
const std::size_t lhs_right = (ADJ_B ? b.dimension(1) : b.dimension(0));
const int lhs_index_a = ADJ_A ? 1 : 0;
const int rhs_index_a = ADJ_A ? 0 : 1;
out.setZero();
if (rhs_right < kNumVectorize) {
// Disable vectorization if the RHS of output is too small
auto maybe_adjoint_b = MaybeAdjoint<decltype(b), ADJ_B>(b);
for (std::size_t i = 0; i < nnz; ++i) {
const Tindices m = internal::SubtleMustCopy(a_indices(i, lhs_index_a));
const Tindices k = internal::SubtleMustCopy(a_indices(i, rhs_index_a));
if (!FastBoundsCheck(k, lhs_right)) {
return KOutOfBoundsError(k, i, rhs_index_a, lhs_right);
}
if (!FastBoundsCheck(m, out.dimension(0))) {
return MOutOfBoundsError(m, i, lhs_index_a, out.dimension(0));
}
for (std::size_t n = 0; n < rhs_right; ++n) {
const T b_value = maybe_adjoint_b(k, n);
out(m, n) += b_value;
}
}
} else {
// Vectorization via Eigen.
const int b_chip_index = ADJ_B ? 1 : 0;
#define LOOP_NNZ(b_passed) \
for (std::size_t i = 0; i < nnz; ++i) { \
const Tindices m = internal::SubtleMustCopy(a_indices(i, lhs_index_a)); \
const Tindices k = internal::SubtleMustCopy(a_indices(i, rhs_index_a)); \
if (!FastBoundsCheck(k, lhs_right)) { \
return KOutOfBoundsError(k, i, rhs_index_a, lhs_right); \
} \
if (!FastBoundsCheck(m, out.dimension(0))) { \
return MOutOfBoundsError(m, i, lhs_index_a, out.dimension(0)); \
} \
out.template chip<0>(m) += b_passed.template chip<b_chip_index>(k); \
}
if (ADJ_B) {
// Perform transpose and conjugation on B once, since we chip out B's
// columns in the nnz loop.
Eigen::array<int, 2> shuffle; // preserve dimension order
shuffle[0] = 1; shuffle[1] = 0;
Eigen::Tensor<T, 2, Eigen::ColMajor> col_major_conj_b =
b.swap_layout().shuffle(shuffle).conjugate();
LOOP_NNZ(col_major_conj_b);
} else {
LOOP_NNZ(b);
}
#undef LOOP_NNZ
}
return Status::OK();
}
};
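// Worked example (illustration only): for a 2x3 binary sparse A with
// nonzeros at (m, k) = (0, 1) and (1, 2), and dense B = [[1, 2], [3, 4], [5, 6]],
// the loop performs out.chip(0) += B.row(1) and out.chip(1) += B.row(2),
// giving out = [[3, 4], [5, 6]]. Since a_values are assumed to be all ones,
// each nonzero contributes a plain row addition with no multiply.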
// We have only specialized and optimized the case with no matrix transpose,
// since it is the most typical usage in production.
template <typename Tindices>
struct SparseTensorDenseMatMulFunctor<CPUDevice,
float, Tindices, false, false> {
static Status Compute(OpKernelContext* ctx,
const Tensor *a_indices,
const Tensor *a_values,
const Tensor *a_shape,
const Tensor *b,
Tensor *out) {
auto a_indices_ptr = a_indices->flat<Tindices>().data();
auto b_ptr = b->flat<float>().data();
auto out_ptr = out->flat<float>().data();
const int64 nnz = a_indices->shape().dim_size(0);
const int64 outer_left = a_shape->vec<int64>()(0);
const int64 outer_right = b->shape().dim_size(1);
ParallelLookupAndSegmentSum<Tindices>(ctx, a_indices_ptr, b_ptr, nnz,
outer_left, outer_right, out_ptr);
return Status::OK();
}
};
} // namespace functor
} // namespace tensorflow

View File

@ -0,0 +1,75 @@
/* Copyright 2015 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
// TWML modified to optimize binary features
#ifndef TENSORFLOW_CORE_KERNELS_BINARY_SPARSE_TENSOR_DENSE_MATMUL_OP_H_
#define TENSORFLOW_CORE_KERNELS_BINARY_SPARSE_TENSOR_DENSE_MATMUL_OP_H_
#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "tensorflow/core/framework/tensor_types.h"
#include "tensorflow/core/framework/types.h"
#include "tensorflow/core/lib/core/errors.h"
namespace tensorflow {
namespace functor {
template <typename Device, typename T, typename Tindices, bool ADJ_A,
bool ADJ_B>
struct SparseTensorDenseMatMulFunctor {
static EIGEN_ALWAYS_INLINE Status Compute(
const Device& d, typename TTypes<T>::Matrix out,
typename TTypes<Tindices>::ConstMatrix a_indices,
typename TTypes<T>::ConstVec a_values, typename TTypes<T>::ConstMatrix b);
};
template <typename MATRIX, bool ADJ>
class MaybeAdjoint;
template <typename MATRIX>
class MaybeAdjoint<MATRIX, false> {
public:
EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE MaybeAdjoint(MATRIX m) : m_(m) {}
EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename MATRIX::Scalar operator()(
const typename MATRIX::Index i, const typename MATRIX::Index j) const {
return m_(i, j);
}
private:
const MATRIX m_;
};
template <typename T>
EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE T MaybeConj(T v) {
return v;
}
template <typename MATRIX>
class MaybeAdjoint<MATRIX, true> {
public:
EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE MaybeAdjoint(MATRIX m) : m_(m) {}
EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE typename MATRIX::Scalar operator()(
const typename MATRIX::Index i, const typename MATRIX::Index j) const {
return Eigen::numext::conj(m_(j, i));
}
private:
const MATRIX m_;
};
} // end namespace functor
} // end namespace tensorflow
#endif // TENSORFLOW_CORE_KERNELS_BINARY_SPARSE_TENSOR_DENSE_MATMUL_OP_H_

View File

@ -0,0 +1,145 @@
#ifndef TENSORFLOW_CORE_KERNELS_BINARY_SPARSE_TENSOR_DENSE_MATMUL_IMPL_H_
#define TENSORFLOW_CORE_KERNELS_BINARY_SPARSE_TENSOR_DENSE_MATMUL_IMPL_H_
#include <atomic>
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/lib/core/blocking_counter.h"
#include "tensorflow/core/lib/core/threadpool.h"
namespace tensorflow {
namespace functor {
// `ConservativeShard` is adopted rather than tensorflow's `Shard` because the
// original `Shard` may generate more shards than the number of threads, which
// is not ideal for this case, as it may cause too much overhead.
static void ConservativeShard(int max_parallelism, thread::ThreadPool *workers,
int64 total, int64 cost_per_unit,
std::function<void(int64, int64)> work) {
if (total == 0) {
return;
}
max_parallelism = std::min(max_parallelism, workers->NumThreads());
if (max_parallelism <= 1) {
// Just inline the whole work since we only have 1 thread (core).
work(0, total);
return;
}
cost_per_unit = std::max(1LL, cost_per_unit);
// We shard [0, total) into "num_shards" shards.
// 1 <= num_shards <= num worker threads
//
// If total * cost_per_unit is small, it is not worth sharding too
// much. Assuming each cost unit is 1ns, kMinCostPerShard=10000
// is 10us.
static const int64 kMinCostPerShard = 10000;
const int num_shards =
std::max<int>(1, std::min(static_cast<int64>(max_parallelism),
total * cost_per_unit / kMinCostPerShard));
// Each shard contains up to "block_size" units. [0, total) is sharded
// into:
// [0, block_size), [block_size, 2*block_size), ...
// The 1st shard is done by the caller thread and the other shards
// are dispatched to the worker threads. The last shard may be smaller than
// block_size.
const int64 block_size = (total + num_shards - 1) / num_shards;
if (block_size >= total) {
work(0, total);
return;
}
const int num_shards_used = (total + block_size - 1) / block_size;
BlockingCounter counter(num_shards_used - 1);
for (int64 start = block_size; start < total; start += block_size) {
auto limit = std::min(start + block_size, total);
workers->Schedule([&work, &counter, start, limit]() {
work(start, limit); // Compute the shard.
counter.DecrementCount(); // The shard is done.
});
}
// Inline execute the 1st shard.
work(0, std::min(block_size, total));
counter.Wait();
}
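// Worked example (illustration only): with total = 4096, cost_per_unit = 50
// and 8 worker threads, total * cost_per_unit / kMinCostPerShard =
// 204800 / 10000 = 20 shards by cost, capped at 8 by max_parallelism, so
// num_shards = 8 and block_size = (4096 + 7) / 8 = 512; the caller runs
// [0, 512) inline and schedules the remaining seven shards.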
static inline void VectorSum(float *a, const float *b, int n) {
for (int i = 0; i < n; ++i) {
a[i] += b[i];
}
}
// This func is to vectorize the computation of segment sum.
template<typename Tindices>
static void LookupAndSegmentSum(const Tindices *a_indices, const float *b,
int nnz, int outer_right, float *output) {
for (int i = 0; i < nnz; ++i) {
const Tindices m = a_indices[i * 2];
const Tindices k = a_indices[i * 2 + 1];
auto output_row_m = output + m * outer_right;
auto b_row_k = b + k * outer_right;
VectorSum(output_row_m, b_row_k, outer_right);
}
}
// This func enables sharding and multithreading, it comes with an overhead of
// duplicating output buffer to achieve lock free output. So there should not
// be too many threads.
template<typename Tindices>
static void ParallelLookupAndSegmentSum(OpKernelContext *ctx,
const Tindices *a_indices,
const float *b, int nnz, int outer_left,
int outer_right, float *output) {
auto worker_threads = *(ctx->device()->tensorflow_cpu_worker_threads());
int out_size = outer_left * outer_right;
if (worker_threads.num_threads <= 1) {
memset(output, 0, out_size * sizeof(float));
LookupAndSegmentSum<Tindices>(a_indices, b,
nnz, outer_right,
output);
return;
}
// Round up so each per-thread buffer is aligned to kAllocatorAlignment.
int padded_out_size = (out_size + (Allocator::kAllocatorAlignment - 1)) &
~(Allocator::kAllocatorAlignment - 1);
std::size_t num_bytes =
(worker_threads.num_threads - 1) * padded_out_size * sizeof(float);
// AlignedMalloc memory must be released with AlignedFree rather than delete,
// so give the unique_ptr a matching deleter.
auto buffer = std::unique_ptr<float, void (*)(float *)>(
reinterpret_cast<float *>(
port::AlignedMalloc(num_bytes, Allocator::kAllocatorAlignment)),
[](float *p) { port::AlignedFree(p); });
float *temp_out = buffer.get();
std::atomic<int> thread_index(0);
auto task = [&](int64 start, int64 limit) {
int local_thread_index = thread_index++;
float *buf_ptr = nullptr;
if (local_thread_index == 0) {
buf_ptr = output;
} else {
buf_ptr = temp_out + (local_thread_index - 1) * padded_out_size;
}
memset(buf_ptr, 0, out_size * sizeof(float));
LookupAndSegmentSum<Tindices>(a_indices + start * 2, b,
limit - start, outer_right,
buf_ptr);
};
int cost_per_unit = outer_right;
// We don't use tensorflow shard func as tf may create more shards than
// the number of threads.
ConservativeShard(worker_threads.num_threads, worker_threads.workers, nnz,
static_cast<int64>(cost_per_unit), task);
for (int i = 1; i < thread_index; ++i) {
VectorSum(output, temp_out + (i - 1) * padded_out_size, out_size);
}
}
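// Worked example (illustration only): with out_size = 100 floats and a
// 64-byte kAllocatorAlignment, padded_out_size = (100 + 63) & ~63 = 128,
// so thread t > 0 accumulates into temp_out + (t - 1) * 128 and the
// per-thread copies are summed back into `output` by the VectorSum loop.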
} // namespace functor
} // namespace tensorflow
#endif // TENSORFLOW_CORE_KERNELS_BINARY_SPARSE_TENSOR_DENSE_MATMUL_IMPL_H_

View File

@ -0,0 +1,243 @@
#include "block_format_reader.h"
#include "tensorflow/core/framework/dataset.h"
#include "tensorflow/core/framework/partial_tensor_shape.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/lib/io/random_inputstream.h"
#if !defined(DISABLE_ZLIB)
#include "tensorflow/core/lib/io/zlib_inputstream.h"
#endif
#include <twml.h>
#include <cstdio>
#include <algorithm>
#include <iterator>
using namespace tensorflow;
inline std::string stripPath(std::string const &file_name) {
const auto pos = file_name.find_last_of("/");
if (pos == std::string::npos) return file_name;
return file_name.substr(pos + 1);
}
inline std::string getExtension(std::string const &file_name) {
const auto stripped_file_name = stripPath(file_name);
const auto pos = stripped_file_name.find_last_of(".");
if (pos == std::string::npos) return "";
return stripped_file_name.substr(pos + 1);
}
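// Example behavior of the helpers above (illustration only, assuming
// POSIX-style paths):
// stripPath("/data/part-00000.gz") -> "part-00000.gz"
// getExtension("/data/part-00000.gz") -> "gz" (selects gzip when compression_type is "auto")
// getExtension("/data/part-00000") -> "" (read uncompressed)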
REGISTER_OP("BlockFormatDatasetV2")
.Input("filenames: string")
.Input("compression_type: string")
.Input("buffer_size: int64")
.Output("handle: variant")
.SetIsStateful()
.SetShapeFn(shape_inference::ScalarShape)
.Doc(R"doc(
Creates a dataset for streaming BlockFormat data in compressed (e.g. gzip) or uncompressed formats.
This op can also stream a dataset whose files are a mix of the formats mentioned above.
filenames: A scalar or vector containing the name(s) of the file(s) to be read.
compression_type: A scalar string denoting the compression type. Can be '' (uncompressed), 'gz', or 'auto'.
buffer_size: A scalar denoting the buffer size to use during decompression.
Outputs
handle: A handle to the dataset. This handle is later used to create an iterator to stream the data from the dataset.
)doc");
class BlockFormatDatasetV2 : public DatasetOpKernel {
public:
using DatasetOpKernel::DatasetOpKernel;
void MakeDataset(OpKernelContext* ctx, DatasetBase **output) override {
const Tensor* filenames_tensor;
OP_REQUIRES_OK(ctx, ctx->input("filenames", &filenames_tensor));
OP_REQUIRES(
ctx, filenames_tensor->dims() <= 1,
errors::InvalidArgument("`filenames` must be a scalar or a vector."));
const auto filenames_flat = filenames_tensor->flat<string>();
const int64 num_files = filenames_tensor->NumElements();
std::vector<string> filenames;
filenames.reserve(num_files);
std::copy(filenames_flat.data(),
filenames_flat.data() + num_files,
std::back_inserter(filenames));
string compression_type;
OP_REQUIRES_OK(
ctx, tensorflow::data::ParseScalarArgument<string>(
ctx, "compression_type", &compression_type));
int64 buffer_size = -1;
OP_REQUIRES_OK(
ctx, tensorflow::data::ParseScalarArgument<int64>(
ctx, "buffer_size", &buffer_size));
OP_REQUIRES(ctx, buffer_size >= 0,
errors::InvalidArgument(
"`buffer_size` must be >= 0 (0 == no buffering)"));
OP_REQUIRES(ctx,
compression_type == "auto" ||
compression_type == "gz" ||
compression_type == "",
errors::InvalidArgument("Unknown extension: ", compression_type));
*output = new Dataset(ctx, std::move(filenames), compression_type, buffer_size);
}
private:
class Dataset : public DatasetBase {
public:
Dataset(OpKernelContext* ctx,
std::vector<string> filenames,
std::string compression_type,
int64 buffer_size)
: DatasetBase(DatasetContext(ctx)),
compression_type_(compression_type),
buffer_size_(buffer_size),
filenames_(std::move(filenames))
{}
const DataTypeVector& output_dtypes() const override {
static DataTypeVector* dtypes = new DataTypeVector({DT_STRING});
return *dtypes;
}
const std::vector<PartialTensorShape>& output_shapes() const override {
static std::vector<PartialTensorShape>* shapes =
new std::vector<PartialTensorShape>({{}});
return *shapes;
}
string DebugString() const override { return "BlockFormatDatasetV2::Dataset"; }
protected:
Status AsGraphDefInternal(SerializationContext* ctx,
DatasetGraphDefBuilder* b,
Node** output) const override {
Node* filenames = nullptr;
Node* compression_type = nullptr;
Node* buffer_size = nullptr;
TF_RETURN_IF_ERROR(b->AddVector(filenames_, &filenames));
TF_RETURN_IF_ERROR(b->AddScalar(compression_type_, &compression_type));
TF_RETURN_IF_ERROR(
b->AddScalar(buffer_size_, &buffer_size));
TF_RETURN_IF_ERROR(b->AddDataset(
this, {filenames, compression_type, buffer_size}, output));
return Status::OK();
}
private:
std::unique_ptr<IteratorBase> MakeIteratorInternal(
const string& prefix) const override {
return std::unique_ptr<IteratorBase>(
new Iterator({this, strings::StrCat(prefix, "::BlockFormat")}));
}
class Iterator : public DatasetIterator<Dataset> {
public:
explicit Iterator(const Params &params)
: DatasetIterator<Dataset>(params) {}
Status GetNextInternal(IteratorContext* ctx,
std::vector<Tensor>* out_tensors,
bool* end_of_sequence) override {
mutex_lock l(mu_);
do {
// We are currently processing a file, so try to read the next record.
if (reader_) {
Tensor result_tensor(cpu_allocator(), DT_STRING, {});
Status s = reader_->ReadNext(&result_tensor.scalar<string>()());
if (s.ok()) {
out_tensors->emplace_back(std::move(result_tensor));
*end_of_sequence = false;
return Status::OK();
} else if (!errors::IsOutOfRange(s)) {
return s;
}
// We have reached the end of the current file, so maybe
// move on to next file.
reader_.reset();
++current_file_index_;
}
// Iteration ends when there are no more files to process.
if (current_file_index_ == dataset()->filenames_.size()) {
*end_of_sequence = true;
return Status::OK();
}
// Actually move on to next file.
const string& next_filename =
dataset()->filenames_[current_file_index_];
auto compression_type = dataset()->compression_type_;
int64 buffer_size = dataset()->buffer_size_;
if (compression_type == "auto") {
compression_type = getExtension(next_filename);
}
if (compression_type != "gz" && compression_type != "") {
return errors::InvalidArgument("Unknown extension: ", compression_type);
}
tensorflow::Env* env = tensorflow::Env::Default();
TF_CHECK_OK(env->NewRandomAccessFile(next_filename, &file_));
// RandomAccessInputStream's second parameter (file ownership) defaults to
// "false", which assumes ownership of the file lives elsewhere; passing
// "true" causes segfaults down the line. So keep ownership of "file_" in
// this class and clean up properly.
file_stream_.reset(new tensorflow::io::RandomAccessInputStream(file_.get(), false));
if (compression_type == "gz") {
// unpack_stream does not take ownership of file_stream_
#if !defined(DISABLE_ZLIB)
unpack_stream_.reset(new tensorflow::io::ZlibInputStream(
file_stream_.get(),
buffer_size,
buffer_size,
tensorflow::io::ZlibCompressionOptions::GZIP()));
reader_.reset(new BlockFormatReader(unpack_stream_.get()));
#else
return errors::InvalidArgument("libtwml compiled without zlib support");
#endif
} else {
unpack_stream_.reset(nullptr);
reader_.reset(new BlockFormatReader(file_stream_.get()));
}
} while (true);
}
private:
mutex mu_;
uint64_t current_file_index_ GUARDED_BY(mu_) = 0;
std::unique_ptr<tensorflow::RandomAccessFile> file_;
std::unique_ptr<tensorflow::io::InputStreamInterface> file_stream_;
std::unique_ptr<tensorflow::io::InputStreamInterface> unpack_stream_;
std::unique_ptr<BlockFormatReader> reader_ GUARDED_BY(mu_);
};
const std::string compression_type_;
const int64 buffer_size_;
const std::vector<string> filenames_;
};
};
REGISTER_KERNEL_BUILDER(
Name("BlockFormatDatasetV2")
.Device(DEVICE_CPU),
BlockFormatDatasetV2);

View File

@ -0,0 +1,50 @@
#pragma once
#include "tensorflow/core/framework/common_shape_fns.h"
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/lib/io/random_inputstream.h"
#include <twml.h>
#include <string>
using tensorflow::int64;
using tensorflow::Status;
using std::string;
class BlockFormatReader : twml::BlockFormatReader {
public:
explicit BlockFormatReader(tensorflow::io::InputStreamInterface *stream)
: twml::BlockFormatReader() , stream_(stream) {
}
// Read the next record.
// Returns OK on success,
// Returns OUT_OF_RANGE for end of file, or something else for an error.
Status ReadNext(string* record) {
if (this->next()) {
return stream_->ReadNBytes(this->current_size(), record);
}
return tensorflow::errors::OutOfRange("eof");
}
uint64_t read_bytes(void *dest, int size, int count) {
uint64_t bytesToRead = size * count;
std::string current;
// TODO: Try to merge ReadNBytes and the memcpy below
// ReadNBytes performs a memory copy already.
Status status = stream_->ReadNBytes(bytesToRead, &current);
if (!status.ok()) {
return 0;
}
memcpy(dest, current.c_str(), bytesToRead);
return count;
}
private:
tensorflow::io::InputStreamInterface *stream_;
TF_DISALLOW_COPY_AND_ASSIGN(BlockFormatReader);
};
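A minimal sketch of how a caller might drain a reader built on this class; everything except BlockFormatReader and ReadNext is an assumption for illustration:

#include <string>
#include "tensorflow/core/lib/core/errors.h"

// Sketch only: read every record from an already-open stream.
void ReadAllRecords(tensorflow::io::InputStreamInterface *stream) {
  BlockFormatReader reader(stream);
  std::string record;
  while (true) {
    tensorflow::Status s = reader.ReadNext(&record);
    if (tensorflow::errors::IsOutOfRange(s)) break;  // normal end of data
    TF_CHECK_OK(s);                                  // surface real errors
    // ... consume `record` here ...
  }
}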

View File

@ -0,0 +1,138 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <algorithm> // std::fill_n
using namespace tensorflow;
REGISTER_OP("CompressSampleIds")
.Attr("T: {int32}")
.Input("input: T")
.Output("output: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->Vector(c->kUnknownDim));
return Status::OK();
});
template<typename T>
class CompressSampleIds : public OpKernel {
public:
explicit CompressSampleIds(OpKernelConstruction* context) : OpKernel(context) {}
void Compute(OpKernelContext* context) override {
// Grab the input tensor
const Tensor& input_tensor = context->input(0);
auto input = input_tensor.flat<T>();
const int N = input.size();
// Check for improper input
bool error = (N > 0 && input(0) < 0);
for (int i = 1; !error && i < N; i++) {
error = input(i - 1) > input(i);
}
OP_REQUIRES(
context, !error,
errors::InvalidArgument(
"Error in CompressSampleIds. SampleIds must be non-negative and non-decreasing"
)
);
// choose output size, either last input element + 1, or 0
int output_size = 0;
if (N > 0) {
output_size = input(N - 1) + 1;
}
// Create an output tensor
Tensor* output_tensor = nullptr;
OP_REQUIRES_OK(
context,
context->allocate_output(0, TensorShape({output_size}), &output_tensor)
);
auto output_flat = output_tensor->flat<T>();
// Zero-initialize output
for (int i = 0; i < output_size; i++) {
output_flat(i) = 0;
}
// Count the occurrences of each input element.
for (int i = 0; i < N; i++) {
output_flat(input(i))++;
}
}
};
REGISTER_OP("DecompressSampleIds")
.Attr("T: {int32}")
.Input("input: T")
.Output("output: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->Vector(c->kUnknownDim));
return Status::OK();
});
template<typename T>
class DecompressSampleIds : public OpKernel {
public:
explicit DecompressSampleIds(OpKernelConstruction* context) : OpKernel(context) {}
void Compute(OpKernelContext* context) override {
// Grab the input tensor
const Tensor& input_tensor = context->input(0);
auto input = input_tensor.flat<T>();
const int N = input.size();
// Check for improper input
bool error = false;
int output_size = 0;
for (int i = 0; !error && i < N; i++) {
error = input(i) < 0;
output_size += input(i);
}
OP_REQUIRES(
context, !error,
errors::InvalidArgument(
"Error in DecompressSampleIds. Inputs must be non-negative."
)
);
// Create an output tensor
Tensor* output_tensor = nullptr;
OP_REQUIRES_OK(
context,
context->allocate_output(0, TensorShape({output_size}), &output_tensor)
);
auto output_flat = output_tensor->flat<T>();
T *output_data = output_flat.data();
for (int current_sample = 0; current_sample < N; current_sample++) {
std::fill_n(output_data, input(current_sample), current_sample);
output_data += input(current_sample);
}
}
};
#define REGISTER(Type) \
\
REGISTER_KERNEL_BUILDER( \
Name("CompressSampleIds") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
CompressSampleIds<Type>); \
\
REGISTER_KERNEL_BUILDER( \
Name("DecompressSampleIds") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
DecompressSampleIds<Type>);
REGISTER(int32);
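Taken together, the two kernels are inverses on well-formed input. A standalone sketch of the round trip on plain vectors (illustration only, no TensorFlow types):

#include <cstdio>
#include <vector>

int main() {
  // CompressSampleIds: count occurrences of each id in a sorted, non-negative list.
  const std::vector<int> sample_ids = {0, 0, 1, 3, 3, 3};
  std::vector<int> counts(sample_ids.back() + 1, 0);  // output size = last id + 1
  for (int id : sample_ids) counts[id]++;             // counts == {2, 1, 0, 3}

  // DecompressSampleIds: expand the counts back into the original id list.
  std::vector<int> restored;
  for (int id = 0; id < static_cast<int>(counts.size()); ++id)
    restored.insert(restored.end(), counts[id], id);  // {0, 0, 1, 3, 3, 3}

  std::printf("%zu ids restored\n", restored.size());
  return 0;
}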

View File

@ -0,0 +1,116 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "../tensorflow_utils.h"
#include "../resource_utils.h"
#include <string>
#include <set>
using std::string;
using namespace tensorflow;
void join(const std::set<string>& v, char c, string& s) {
s.clear();
std::set<std::string>::iterator it = v.begin();
while (it != v.end()) {
s += *it;
it++;
if (it != v.end()) s += c;
}
}
// C++ function that computes the substrings (n-grams) of a given word.
std::string computeSubwords(std::string word, int32_t minn, int32_t maxn) {
std::string word2 = "<" + word + ">";
std::set<string> ngrams;
std::string s;
ngrams.insert(word);
ngrams.insert(word2);
for (size_t i = 0; i < word2.size(); i++) {
if ((word2[i] & 0xC0) == 0x80) continue;
for (size_t j = minn; i+j <= word2.size() && j <= maxn; j++) {
ngrams.insert(word2.substr(i, j));
}
}
join(ngrams, ';', s);
ngrams.clear();
return s;
}
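// Worked example (illustration only): computeSubwords("cat", 2, 3) wraps the
// word as "<cat>" and collects "cat", "<cat>" plus every 2- and 3-byte
// substring, yielding the set-ordered, ';'-joined result
// "<c;<ca;<cat>;at;at>;ca;cat;t>".
// The (word2[i] & 0xC0) == 0x80 test skips UTF-8 continuation bytes, so an
// n-gram never starts in the middle of a multi-byte character.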
// tf-op function that computes substrings for a given tensor of words
template< typename ValueType>
void ComputeSubStringsTensor(OpKernelContext *context, int32 min_n, int32 max_n) {
try {
const Tensor& values = context->input(0);
auto values_flat = values.flat<ValueType>();
// batch_size is the size of the input:
const int batch_size = values_flat.size();
// define the output tensor
Tensor* substrings = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, values.shape(), &substrings));
auto substrings_flat = substrings->flat<ValueType>();
// compute substrings for the given tensor values
for (int64 i = 0; i < batch_size; i++) {
substrings_flat(i) = computeSubwords(values_flat(i), min_n, max_n);
}
}
catch (const std::exception &err) {
context->CtxFailureWithWarning(errors::InvalidArgument(err.what()));
}
}
REGISTER_OP("GetSubstrings")
.Attr("ValueType: {string}")
.Attr("min_n: int")
.Attr("max_n: int")
.Input("values: ValueType")
.Output("substrings: ValueType")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->input(0));
return Status::OK();
}).Doc(R"doc(
A tensorflow OP to convert a word into its substrings of length between min_n and max_n.
Attr
min_n, max_n: the minimum and maximum substring lengths.
Input
values: 1D input tensor containing the values.
Outputs
substrings: A string tensor where substrings are joined by ";".
)doc");
template<typename ValueType>
class GetSubstrings : public OpKernel {
public:
explicit GetSubstrings(OpKernelConstruction *context) : OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("min_n", &min_n));
OP_REQUIRES_OK(context, context->GetAttr("max_n", &max_n));
}
private:
int32 min_n;
int32 max_n;
void Compute(OpKernelContext *context) override {
ComputeSubStringsTensor<ValueType>(context, min_n, max_n);
}
};
#define REGISTER_SUBSTRINGS(ValueType) \
REGISTER_KERNEL_BUILDER( \
Name("GetSubstrings") \
.Device(DEVICE_CPU) \
.TypeConstraint<ValueType>("ValueType"), \
GetSubstrings<ValueType>);
REGISTER_SUBSTRINGS(string)

File diff suppressed because it is too large

View File

@ -0,0 +1,81 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
using namespace tensorflow;
REGISTER_OP("DataRecordTensorWriter")
.Attr("T: list({string, int32, int64, float, double, bool})")
.Input("keys: int64")
.Input("values: T")
.Output("result: uint8")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that packages keys and dense tensors into a DataRecord.
values: list of tensors
keys: feature ids from the original DataRecord (int64)
Outputs
result: output DataRecord serialized using Thrift into a uint8 tensor.
)doc");
class DataRecordTensorWriter : public OpKernel {
public:
explicit DataRecordTensorWriter(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
const Tensor& keys = context->input(0);
try {
// set keys as twml::Tensor
const twml::Tensor in_keys_ = TFTensor_to_twml_tensor(keys);
// check sizes
uint64_t num_keys = in_keys_.getNumElements();
uint64_t num_values = context->num_inputs() - 1;
OP_REQUIRES(context, num_keys == num_values,
errors::InvalidArgument("Number of dense keys and dense tensors do not match"));
// populate DataRecord object
const int64_t *keys_data = in_keys_.getData<int64_t>();
twml::DataRecord record = twml::DataRecord();
for (int i = 1; i < context->num_inputs(); i++) {
const twml::RawTensor& value = TFTensor_to_twml_raw_tensor(context->input(i));
record.addRawTensor(keys_data[i-1], value);
}
// determine the length of the encoded result (no memory is copied)
twml::ThriftWriter thrift_dry_writer = twml::ThriftWriter(nullptr, 0, true);
twml::DataRecordWriter record_dry_writer = twml::DataRecordWriter(thrift_dry_writer);
record_dry_writer.write(record);
int len = thrift_dry_writer.getBytesWritten();
TensorShape result_shape = {1, len};
// allocate output tensor
Tensor* result = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, result_shape, &result));
twml::Tensor out_result = TFTensor_to_twml_tensor(*result);
// write to output tensor
uint8_t *buffer = out_result.getData<uint8_t>();
twml::ThriftWriter thrift_writer = twml::ThriftWriter(buffer, len, false);
twml::DataRecordWriter record_writer = twml::DataRecordWriter(thrift_writer);
record_writer.write(record);
} catch(const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_KERNEL_BUILDER(
Name("DataRecordTensorWriter").Device(DEVICE_CPU),
DataRecordTensorWriter);

View File

@ -0,0 +1,293 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
using namespace tensorflow;
void ComputeDiscretizers(OpKernelContext* context, const bool return_bin_indices = false) {
const Tensor& keys = context->input(0);
const Tensor& vals = context->input(1);
const Tensor& bin_ids = context->input(2);
const Tensor& bin_vals = context->input(3);
const Tensor& feature_offsets = context->input(4);
Tensor* new_keys = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, keys.shape(),
&new_keys));
Tensor* new_vals = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(1, keys.shape(),
&new_vals));
try {
twml::Tensor out_keys_ = TFTensor_to_twml_tensor(*new_keys);
twml::Tensor out_vals_ = TFTensor_to_twml_tensor(*new_vals);
const twml::Tensor in_keys_ = TFTensor_to_twml_tensor(keys);
const twml::Tensor in_vals_ = TFTensor_to_twml_tensor(vals);
const twml::Tensor bin_ids_ = TFTensor_to_twml_tensor(bin_ids);
const twml::Tensor bin_vals_ = TFTensor_to_twml_tensor(bin_vals);
const twml::Tensor feature_offsets_ = TFTensor_to_twml_tensor(feature_offsets);
twml::mdlInfer(out_keys_, out_vals_,
in_keys_, in_vals_,
bin_ids_, bin_vals_,
feature_offsets_,
return_bin_indices);
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
REGISTER_OP("MDL")
.Attr("T: {float, double}")
.Input("keys: int64")
.Input("vals: T")
.Input("bin_ids: int64")
.Input("bin_vals: T")
.Input("feature_offsets: int64")
.Output("new_keys: int64")
.Output("new_vals: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
// TODO: check sizes
c->set_output(0, c->input(0));
c->set_output(1, c->input(0));
return Status::OK();
}).Doc(R"doc(
This operation discretizes a tensor containing continuous features.
Input
keys: A tensor containing feature ids.
vals: A tensor containing values at corresponding feature ids.
bin_ids: A tensor containing the discretized feature id for a given bin.
bin_vals: A tensor containing the bin boundaries for value at a given feature id.
feature_offsets: Specifies the starting location of bins for a given feature id.
Expected Sizes:
keys, vals: [N].
bin_ids, bin_vals: [sum_{n=1}^{n=num_classes} num_bins(n)]
where
- N is the number of sparse features in the current batch.
- [0, num_classes) represents the range each feature id can take.
- num_bins(n) is the number of bins for a given feature id.
- If num_bins is fixed, then bin_ids, bin_vals are of size [num_classes * num_bins].
Expected Types:
keys, bin_ids: int64.
vals: float or double.
bin_vals: same as vals.
Before using MDL, you should use a hashmap to get the intersection of
input `keys` with the features that MDL knows about:
::
keys, vals # keys can be in range [0, 1 << 63)
mdl_keys = hashmap.find(keys) # mdl_keys are now in range [0, num_classes_from_calibration)
mdl_keys = where (mdl_keys != -1) # Ignore keys not found
Inside MDL, the following is happening:
::
start = offsets[key[i]]
end = offsets[key[i] + 1]
idx = binary_search for val[i] in [bin_vals[start], bin_vals[end]]
result_keys[i] = bin_ids[idx]
val[i] = 1 # binary feature value
Outputs
new_keys: The discretized feature ids with same shape and size as keys.
new_vals: The discretized values with the same shape and size as vals.
)doc");
template<typename T>
class MDL : public OpKernel {
public:
explicit MDL(OpKernelConstruction* context) : OpKernel(context) {
}
void Compute(OpKernelContext* context) override {
ComputeDiscretizers(context);
}
};
REGISTER_OP("PercentileDiscretizer")
.Attr("T: {float, double}")
.Input("keys: int64")
.Input("vals: T")
.Input("bin_ids: int64")
.Input("bin_vals: T")
.Input("feature_offsets: int64")
.Output("new_keys: int64")
.Output("new_vals: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
// TODO: check sizes
c->set_output(0, c->input(0));
c->set_output(1, c->input(0));
return Status::OK();
}).Doc(R"doc(
This operation discretizes a tensor containing continuous features.
Input
keys: A tensor containing feature ids.
vals: A tensor containing values at corresponding feature ids.
bin_ids: A tensor containing the discretized feature id for a given bin.
bin_vals: A tensor containing the bin boundaries for value at a given feature id.
feature_offsets: Specifies the starting location of bins for a given feature id.
Expected Sizes:
keys, vals: [N].
bin_ids, bin_vals: [sum_{n=1}^{n=num_classes} num_bins(n)]
where
- N is the number of sparse features in the current batch.
- [0, num_classes) represents the range each feature id can take.
- num_bins(n) is the number of bins for a given feature id.
- If num_bins is fixed, then bin_ids, bin_vals are of size [num_classes * num_bins].
Expected Types:
keys, bin_ids: int64.
vals: float or double.
bin_vals: same as vals.
Before using PercentileDiscretizer, you should use a hashmap to get the intersection of
input `keys` with the features that PercentileDiscretizer knows about:
::
keys, vals # keys can be in range [0, 1 << 63)
percentile_discretizer_keys = hashmap.find(keys) # percentile_discretizer_keys are now in range [0, num_classes_from_calibration)
percentile_discretizer_keys = where (percentile_discretizer_keys != -1) # Ignore keys not found
Inside PercentileDiscretizer, the following is happening:
::
start = offsets[key[i]]
end = offsets[key[i] + 1]
idx = binary_search for val[i] in [bin_vals[start], bin_vals[end]]
result_keys[i] = bin_ids[idx]
val[i] = 1 # binary feature value
Outputs
new_keys: The discretized feature ids with same shape and size as keys.
new_vals: The discretized values with the same shape and size as vals.
)doc");
template<typename T>
class PercentileDiscretizer : public OpKernel {
public:
explicit PercentileDiscretizer(OpKernelConstruction* context) : OpKernel(context) {
}
void Compute(OpKernelContext* context) override {
ComputeDiscretizers(context);
}
};
REGISTER_OP("PercentileDiscretizerBinIndices")
.Attr("T: {float, double}")
.Input("keys: int64")
.Input("vals: T")
.Input("bin_ids: int64")
.Input("bin_vals: T")
.Input("feature_offsets: int64")
.Output("new_keys: int64")
.Output("new_vals: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
// TODO: check sizes
c->set_output(0, c->input(0));
c->set_output(1, c->input(0));
return Status::OK();
}).Doc(R"doc(
This operation discretizes a tensor containing continuous features.
If the feature id and bin id of the discretized value is the same on multiple runs, they
will always be assigned to the same output key and value, regardless of the bin_id assigned during
calibration.
Input
keys: A tensor containing feature ids.
vals: A tensor containing values at corresponding feature ids.
bin_ids: A tensor containing the discretized feature id for a given bin.
bin_vals: A tensor containing the bin boundaries for value at a given feature id.
feature_offsets: Specifies the starting location of bins for a given feature id.
Expected Sizes:
keys, vals: [N].
bin_ids, bin_vals: [sum_{n=1}^{n=num_classes} num_bins(n)]
where
- N is the number of sparse features in the current batch.
- [0, num_classes) represents the range each feature id can take.
- num_bins(n) is the number of bins for a given feature id.
- If num_bins is fixed, then bin_ids, bin_vals are of size [num_classes * num_bins].
Expected Types:
keys, bin_ids: int64.
vals: float or double.
bin_vals: same as vals.
Before using PercentileDiscretizerBinIndices, you should use a hashmap to get the intersection of
input `keys` with the features that PercentileDiscretizerBinIndices knows about:
::
keys, vals # keys can be in range [0, 1 << 63)
percentile_discretizer_keys = hashmap.find(keys) # percentile_discretizer_keys are now in range [0, num_classes_from_calibration)
percentile_discretizer_keys = where (percentile_discretizer_keys != -1) # Ignore keys not found
Inside PercentileDiscretizerBinIndices, the following is happening:
::
start = offsets[key[i]]
end = offsets[key[i] + 1]
idx = binary_search for val[i] in [bin_vals[start], bin_vals[end]]
result_keys[i] = bin_ids[idx]
val[i] = 1 # binary feature value
Outputs
new_keys: The discretized feature ids with same shape and size as keys.
new_vals: The discretized values with the same shape and size as vals.
)doc");
template<typename T>
class PercentileDiscretizerBinIndices : public OpKernel {
public:
explicit PercentileDiscretizerBinIndices(OpKernelConstruction* context) : OpKernel(context) {
}
void Compute(OpKernelContext* context) override {
ComputeDiscretizers(context, true);
}
};
#define REGISTER(Type) \
\
REGISTER_KERNEL_BUILDER( \
Name("PercentileDiscretizerBinIndices") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
PercentileDiscretizerBinIndices<Type>); \
\
REGISTER_KERNEL_BUILDER( \
Name("PercentileDiscretizer") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
PercentileDiscretizer<Type>); \
\
REGISTER_KERNEL_BUILDER( \
Name("MDL") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
MDL<Type>);
REGISTER(float);
REGISTER(double);

View File

@ -0,0 +1,134 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
#include <map>
#include <vector>
using namespace tensorflow;
REGISTER_OP("FeatureExtractor")
.Attr("T: {float, double} = DT_FLOAT")
.Input("mask_in: bool")
.Input("ids_in: int64")
.Input("keys_in: int64")
.Input("values_in: T")
.Input("codes_in: int64")
.Input("types_in: int8")
.Output("ids_out: int64")
.Output("keys_out: int64")
.Output("values_out: T")
.Output("codes_out: int64")
.Output("types_out: int8")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that extracts the desired indices of a Tensor based on a mask
Input
mask_in: boolean Tensor that determines which are the indices to be kept (bool)
ids_in: input indices Tensor (int64)
keys_in: input keys Tensor (int64)
values_in: input values Tensor (float/double)
codes_in: input codes Tensor (int64)
types_in: input types Tensor (int8)
Outputs
ids_out: output indices Tensor (int64)
keys_out: output keys Tensor (int64)
values_out: output values Tensor (float/double)
codes_out: output codes Tensor (int64)
types_out: output types Tensor (int8)
)doc");
template <typename T>
class FeatureExtractor : public OpKernel {
public:
explicit FeatureExtractor(OpKernelConstruction* context)
: OpKernel(context) {}
template <typename A, typename U>
bool allequal(const A &t, const U &u) {
return t == u;
}
template <typename A, typename U, typename... Others>
bool allequal(const A &t, const U &u, Others const &... args) {
return (t == u) && allequal(u, args...);
}
void Compute(OpKernelContext* context) override {
// Get input tensors
const Tensor& input_mask = context->input(0);
const Tensor& input_ids = context->input(1);
const Tensor& input_keys = context->input(2);
const Tensor& input_values = context->input(3);
const Tensor& input_codes = context->input(4);
const Tensor& input_types = context->input(5);
auto mask = input_mask.flat<bool>();
auto ids = input_ids.flat<int64>();
auto keys = input_keys.flat<int64>();
auto codes = input_codes.flat<int64>();
auto values = input_values.flat<T>();
auto types = input_types.flat<int8>();
// Verify that all Tensors have the same size.
OP_REQUIRES(context, allequal(mask.size(), ids.size(), keys.size(), codes.size(), values.size(), types.size()),
errors::InvalidArgument("all input vectors must be the same size."));
// Get the size of the output vectors by counting the numbers of trues.
int total_size = 0;
for (int i = 0; i < mask.size(); i++) {
if (mask(i))
total_size += 1;
}
// Shape is the number of Trues in the mask Eigen::Tensor
TensorShape shape_out = {total_size};
// Create the output tensors
Tensor* output_codes = nullptr;
Tensor* output_ids = nullptr;
Tensor* output_values = nullptr;
Tensor* output_types = nullptr;
Tensor* output_keys = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, shape_out, &output_ids));
OP_REQUIRES_OK(context, context->allocate_output(1, shape_out, &output_keys));
OP_REQUIRES_OK(context, context->allocate_output(2, shape_out, &output_values));
OP_REQUIRES_OK(context, context->allocate_output(3, shape_out, &output_codes));
OP_REQUIRES_OK(context, context->allocate_output(4, shape_out, &output_types));
auto output_ids_ = output_ids->flat<int64>();
auto output_keys_ = output_keys->flat<int64>();
auto output_codes_ = output_codes->flat<int64>();
auto output_values_ = output_values->flat<T>();
auto output_types_ = output_types->flat<int8>();
// Iterate through the mask and set values to output Eigen::Tensors
int j = 0;
for (int i = 0; i < mask.size(); i++) {
if (mask(i)) {
output_ids_(j) = ids(i);
output_keys_(j) = keys(i);
output_values_(j) = values(i);
output_codes_(j) = codes(i);
output_types_(j) = types(i);
++j;
}
}
}
};
#define REGISTER(Type) \
\
REGISTER_KERNEL_BUILDER( \
Name("FeatureExtractor") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
FeatureExtractor<Type>);
REGISTER(float);
REGISTER(double);
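A standalone sketch of the boolean-mask gather this kernel performs, shown for a single vector; the real op applies the same mask to all five parallel tensors in lockstep (illustration only):

#include <cstdio>
#include <vector>

int main() {
  const std::vector<bool> mask = {true, false, true};
  const std::vector<long> ids  = {10, 11, 12};

  std::vector<long> ids_out;                 // output sized by the number of trues
  for (size_t i = 0; i < mask.size(); ++i)
    if (mask[i]) ids_out.push_back(ids[i]);  // keeps indices 0 and 2

  std::printf("kept %zu of %zu ids\n", ids_out.size(), ids.size());  // kept 2 of 3
  return 0;
}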

View File

@ -0,0 +1,58 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
using namespace tensorflow;
REGISTER_OP("FeatureId")
.Attr("feature_names: list(string)")
.Output("output: int64")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that hashes a list of strings into int64. This is used for feature name hashing.
Attr
feature_names: a list of string feature names (list(string)).
Outputs
output: hashes corresponding to the string feature names (int64).
)doc");
class FeatureId : public OpKernel {
private:
std::vector<string> input_vector;
public:
explicit FeatureId(OpKernelConstruction* context) : OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("feature_names", &input_vector));
}
void Compute(OpKernelContext* context) override {
// Get size of the input_vector and create TensorShape shape
const int total_size = static_cast<int>(input_vector.size());
TensorShape shape = {total_size};
// Create an output tensor
Tensor* output_tensor = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, shape,
&output_tensor));
auto output_flat = output_tensor->flat<int64>();
// Hash each feature name into an int64.
for (int i = 0; i < total_size; i++) {
output_flat(i) = twml::featureId(input_vector[i]);
}
}
};
REGISTER_KERNEL_BUILDER(
Name("FeatureId")
.Device(DEVICE_CPU),
FeatureId);

View File

@ -0,0 +1,83 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
#include <map>
#include <vector>
#include <set>
using namespace tensorflow;
REGISTER_OP("FeatureMask")
.Attr("T: {int64, int8}")
.Input("keep: T")
.Attr("list_keep: list(int)")
.Output("mask: bool")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that creates a mask of the indices that should be kept.
Attr
list_keep: list of values which should be kept (list(int))
Input
keep: Tensor for which we will apply the mask (int64, int8)
Outputs
mask: boolean Tensor. (bool)
)doc");
template <typename T>
class FeatureMask : public OpKernel {
private:
std::set<int64> feature_set_keep;
public:
explicit FeatureMask(OpKernelConstruction* context)
: OpKernel(context) {
std::vector<int64> feature_list_keep;
OP_REQUIRES_OK(context, context->GetAttr("list_keep", &feature_list_keep));
// Create a set containing the contents of feature_list_keep, since tensorflow
// does not allow reading the contents of list_keep directly into a set.
feature_set_keep = std::set<int64>(feature_list_keep.begin(), feature_list_keep.end());
}
void Compute(OpKernelContext* context) override {
// Grab the input tensor.
const Tensor& input = context->input(0);
auto keep = input.flat<T>();
// Create an output tensor
Tensor* output_mask = nullptr;
// The output has the same number of elements as the input.
const int total_size_out = static_cast<int>(keep.size());
TensorShape shape_out = {total_size_out};
OP_REQUIRES_OK(context, context->allocate_output(0, shape_out, &output_mask));
auto output_mask_ = output_mask->flat<bool>();
// Check if value is in set, output is boolean
for (int j = 0; j < keep.size(); j++) {
output_mask_(j) = (feature_set_keep.count(keep(j)));
}
}
};
#define REGISTER(Type) \
\
REGISTER_KERNEL_BUILDER( \
Name("FeatureMask") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
  FeatureMask<Type>);
REGISTER(int64);
REGISTER(int8);
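A hedged usage sketch, again assuming the libtwml_tf.so library name; the key values are illustrative:

import tensorflow.compat.v1 as tf
ops = tf.load_op_library('libtwml_tf.so')  # assumed library name/path
keys = tf.constant([3, 7, 42], dtype=tf.int64)
# mask is True wherever the corresponding key appears in list_keep,
# so here it evaluates to [False, True, True].
mask = ops.feature_mask(keep=keys, list_keep=[7, 42])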

View File

@ -0,0 +1,190 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
#include "resource_utils.h"
#include <algorithm>
using std::string;
template<typename IndexType, typename ValueType, bool calc_batch_size>
void ComputeFixedLengthTensor(OpKernelContext *context, int64 max_length_) {
try {
const Tensor& segment_ids = context->input(0);
const Tensor& values = context->input(1);
const Tensor& pad_value = context->input(2);
auto indices_flat = segment_ids.flat<IndexType>();
auto values_flat = values.flat<ValueType>();
auto pad_value_scalar = pad_value.scalar<ValueType>()();
// Get maximum length from batch if user hasn't specified it.
int64 max_length = max_length_;
if (max_length < 0 && indices_flat.size() > 0) {
int64 current_id = indices_flat(0);
int64 current_length = 1;
for (int64 i = 1; i < indices_flat.size(); i++) {
if (current_id == indices_flat(i)) {
current_length++;
} else {
current_id = indices_flat(i);
max_length = std::max(max_length, current_length);
current_length = 1;
}
}
// This is needed if the last batch is the longest sequence.
max_length = std::max(max_length, current_length);
}
int64 batch_size = 0;
if (calc_batch_size) {
if (indices_flat.size() > 0) {
// The last value of segment_ids will be batch_size - 1.
batch_size = 1 + indices_flat(indices_flat.size() - 1);
} else {
batch_size = 0;
}
} else {
const Tensor& batch_size_tensor = context->input(3);
batch_size = batch_size_tensor.flat<int64>()(0);
}
TensorShape output_shape = {batch_size, max_length};
Tensor* fixed_length = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, output_shape, &fixed_length));
auto fixed_length_flat = fixed_length->flat<ValueType>();
int64 n = 0;
int64 offset = 0;
for (int64 i = 0; i < batch_size; i++) {
for (int64 j = 0; j < max_length; j++) {
if (n < indices_flat.size() && indices_flat(n) == i) {
// Copy from variable length tensor.
fixed_length_flat(offset + j) = values_flat(n);
n++;
} else {
// Pad to fixed length.
fixed_length_flat(offset + j) = pad_value_scalar;
}
}
// Corner case: truncate to max_length if user specified max_length < current length.
while (n < indices_flat.size() && i == indices_flat(n)) n++;
// Update output pointer
offset += max_length;
}
} catch (const std::exception &err) {
context->CtxFailureWithWarning(errors::InvalidArgument(err.what()));
}
}
REGISTER_OP("FixedLengthTensor")
.Attr("IndexType: {int64, int32}")
.Attr("ValueType: {int64, int32, string}")
.Attr("max_length: int")
.Input("segment_ids: IndexType")
.Input("values: ValueType")
.Input("pad_value: ValueType")
.Output("fixed_length: ValueType")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP to convert variable length segments into fixed length tensor.
Attr
max_length: The size of the innermost (i.e. last) dimension.
Input
segment_ids: 1D input tensor containing the sorted segment_ids.
values: 1D input tensor containing the values.
pad_value: The value used for padding the fixed length tensor.
Outputs
fixed_length: A fixed length tensor of size [batch_size, max_length].
)doc");
template<typename IndexType, typename ValueType>
class FixedLengthTensor: public OpKernel {
public:
explicit FixedLengthTensor(OpKernelConstruction *context) : OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("max_length", &max_length_));
}
private:
int64 max_length_;
void Compute(OpKernelContext *context) override {
ComputeFixedLengthTensor<IndexType, ValueType, true>(context, max_length_);
}
};
REGISTER_OP("FixedLengthTensorV2")
.Attr("IndexType: {int64, int32}")
.Attr("ValueType: {int64, int32, string}")
.Attr("max_length: int")
.Input("segment_ids: IndexType")
.Input("values: ValueType")
.Input("pad_value: ValueType")
.Input("batch_size: int64")
.Output("fixed_length: ValueType")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP to convert variable length segments into fixed length tensor.
Attr
max_length: The size of the innermost (i.e. last) dimension.
Input
segment_ids: 1D input tensor containing the sorted segment_ids.
values: 1D input tensor containing the values.
pad_value: The value used for padding the fixed length tensor.
batch_size: The batch size to use.
Outputs
fixed_length: A fixed length tensor of size [batch_size, max_length].
)doc");
template<typename IndexType, typename ValueType>
class FixedLengthTensorV2: public OpKernel {
public:
explicit FixedLengthTensorV2(OpKernelConstruction *context) : OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("max_length", &max_length_));
}
private:
int64 max_length_;
void Compute(OpKernelContext *context) override {
ComputeFixedLengthTensor<IndexType, ValueType, false>(context, max_length_);
}
};
#define REGISTER_SPARSE_TO_FIXED_LENGTH(IndexType, ValueType) \
REGISTER_KERNEL_BUILDER( \
Name("FixedLengthTensor") \
.Device(DEVICE_CPU) \
.TypeConstraint<IndexType>("IndexType") \
.TypeConstraint<ValueType>("ValueType"), \
FixedLengthTensor<IndexType, ValueType>); \
\
REGISTER_KERNEL_BUILDER( \
Name("FixedLengthTensorV2") \
.Device(DEVICE_CPU) \
.TypeConstraint<IndexType>("IndexType") \
.TypeConstraint<ValueType>("ValueType"), \
  FixedLengthTensorV2<IndexType, ValueType>);
REGISTER_SPARSE_TO_FIXED_LENGTH(int64, int64)
REGISTER_SPARSE_TO_FIXED_LENGTH(int64, int32)
REGISTER_SPARSE_TO_FIXED_LENGTH(int64, string)
REGISTER_SPARSE_TO_FIXED_LENGTH(int32, int64)
REGISTER_SPARSE_TO_FIXED_LENGTH(int32, int32)
REGISTER_SPARSE_TO_FIXED_LENGTH(int32, string)
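To illustrate the padding behavior, a small sketch (library name assumed as before); since the kernel infers the longest segment whenever max_length is negative, passing max_length=-1 pads to the longest segment in the batch:

import tensorflow.compat.v1 as tf
ops = tf.load_op_library('libtwml_tf.so')  # assumed library name/path
segment_ids = tf.constant([0, 0, 1], dtype=tf.int64)  # sorted record ids
values = tf.constant([5, 6, 7], dtype=tf.int64)
pad = tf.constant(0, dtype=tf.int64)
# max_length=-1 lets the kernel infer max_length=2 from the batch;
# the result is [[5, 6], [7, 0]] with batch_size = 1 + last segment id.
fixed = ops.fixed_length_tensor(segment_ids=segment_ids, values=values,
                                pad_value=pad, max_length=-1)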

View File

@ -0,0 +1,520 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
#include "resource_utils.h"
#include <functional>
REGISTER_OP("DecodeAndHashDataRecord")
.Attr("InputType: {uint8, string}")
.Input("input_bytes: InputType")
.Attr("keep_features: list(int)")
.Attr("keep_codes: list(int)")
.Attr("label_features: list(int)")
.Attr("weight_features: list(int) = []")
.Attr("decode_mode: int = 0")
.Output("hashed_data_record_handle: resource")
.SetShapeFn(shape_inference::ScalarShape)
.Doc(R"doc(
A tensorflow OP that creates a handle for the hashed data record.
Attr
keep_features: a list of int ids to keep.
keep_codes: their corresponding code.
label_features: list of feature ids representing the labels.
weight_features: list of feature ids representing the weights. Defaults to empty list.
decode_mode: integer, indicates which decoding method to use. Let a sparse continuous
feature have a feature_name and a dict of {name: value}. 0 indicates feature_ids are
computed as hash(name). 1 indicates feature_ids are computed as hash(feature_name, name).
shared_name: name used by the resource handle inside the resource manager.
container: name used by the container of the resources.
Input
input_bytes: Input tensor containing the serialized batch of HashedDataRecords.
Outputs
hashed_data_record_handle: A resource handle to batch of HashedDataRecords.
)doc");
template<typename InputType>
class DecodeAndHashDataRecord : public OpKernel {
public:
explicit DecodeAndHashDataRecord(OpKernelConstruction* context)
: OpKernel(context) {
std::vector<int64> keep_features;
std::vector<int64> keep_codes;
std::vector<int64> label_features;
std::vector<int64> weight_features;
OP_REQUIRES_OK(context, context->GetAttr("keep_features", &keep_features));
OP_REQUIRES_OK(context, context->GetAttr("keep_codes", &keep_codes));
OP_REQUIRES_OK(context, context->GetAttr("label_features", &label_features));
OP_REQUIRES_OK(context, context->GetAttr("weight_features", &weight_features));
OP_REQUIRES_OK(context, context->GetAttr("decode_mode", &m_decode_mode));
OP_REQUIRES(context, keep_features.size() == keep_codes.size(),
errors::InvalidArgument("keep keys and values must have same size."));
#ifdef USE_DENSE_HASH
m_keep_map.set_empty_key(0);
m_labels_map.set_empty_key(0);
m_weights_map.set_empty_key(0);
#endif // USE_DENSE_HASH
for (uint64_t i = 0; i < keep_features.size(); i++) {
m_keep_map[keep_features[i]] = keep_codes[i];
}
for (uint64_t i = 0; i < label_features.size(); i++) {
m_labels_map[label_features[i]] = i;
}
for (uint64_t i = 0; i < weight_features.size(); i++) {
m_weights_map[weight_features[i]] = i;
}
}
private:
twml::Map<int64_t, int64_t> m_keep_map;
twml::Map<int64_t, int64_t> m_labels_map;
twml::Map<int64_t, int64_t> m_weights_map;
int64 m_decode_mode;
void Compute(OpKernelContext* context) override {
try {
HashedDataRecordResource *resource = nullptr;
OP_REQUIRES_OK(context, makeResourceHandle<HashedDataRecordResource>(context, 0, &resource));
// Store the input bytes in the resource so they aren't freed before the resource.
// This is necessary because we are not copying the contents of the tensors.
resource->input = context->input(0);
int batch_size = getBatchSize<InputType>(resource->input);
int num_labels = static_cast<int>(m_labels_map.size());
int num_weights = static_cast<int>(m_weights_map.size());
twml::HashedDataRecordReader reader;
reader.setKeepMap(&m_keep_map);
reader.setLabelsMap(&m_labels_map);
reader.setDecodeMode(m_decode_mode);
// Do not set weight map if it is empty. This will take a faster path.
if (num_weights != 0) {
reader.setWeightsMap(&m_weights_map);
}
resource->records.clear();
resource->records.reserve(batch_size);
int64 total_size = 0;
for (int id = 0; id < batch_size; id++) {
const uint8_t *input_bytes = getInputBytes<InputType>(resource->input, id);
reader.setBuffer(input_bytes);
resource->records.emplace_back(num_labels, num_weights);
resource->records[id].decode(reader);
total_size += static_cast<int64>(resource->records[id].totalSize());
}
resource->total_size = total_size;
resource->num_labels = num_labels;
resource->num_weights = num_weights;
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_OP("GetIdsFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("ids: int64")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that returns unhashed ids from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
ids: for each key/value, the index of the record it belongs to within the batch (int64)
)doc");
// This Kernel is used for both training and serving once the resource is created.
class GetIdsFromHashedDataRecord : public OpKernel {
public:
explicit GetIdsFromHashedDataRecord(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
try {
auto handle = getHandle<HashedDataRecordResource>(context, 0);
const auto &records = handle->records;
const auto &common = handle->common;
const int64 common_size = static_cast<int64>(common.totalSize());
const int64 total_size = handle->total_size;
TensorShape shape = {total_size};
Tensor *ids;
OP_REQUIRES_OK(context, context->allocate_output(0, shape, &ids));
int id = 0;
int64 offset = 0;
auto ids_flat = ids->flat<int64>();
for (const auto &record : records) {
// Since common features are added to each input, add the common_size to the current size.
// For training common_size == 0, for serving it can be a non-zero value.
int64 curr_size = static_cast<int64>(record.totalSize()) + common_size;
std::fill(ids_flat.data() + offset, ids_flat.data() + offset + curr_size, id);
offset += curr_size;
id++;
}
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
// OutType: Output Tensor Type. FieldType: The storage type used inside HashedDatarecord.
template<typename OutType, typename FieldType>
class GetOutputFromHashedDataRecord : public OpKernel {
protected:
using Getter = std::function<const std::vector<FieldType>&(const twml::HashedDataRecord &)>;
Getter getter;
public:
explicit GetOutputFromHashedDataRecord(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
try {
auto handle = getHandle<HashedDataRecordResource>(context, 0);
const auto &records = handle->records;
const auto &common = handle->common;
const int64 total_size = handle->total_size;
TensorShape shape = {total_size};
Tensor *output;
OP_REQUIRES_OK(context, context->allocate_output(0, shape, &output));
const auto &common_output = getter(common);
auto output_data = output->flat<OutType>().data();
for (const auto &record : records) {
// This does not copy anything during training, as common_size == 0.
// It will copy the relevant common features coming from a batch prediction request.
output_data = std::copy(common_output.begin(), common_output.end(), output_data);
// Copy the current record to output.
const auto& rec_output = getter(record);
output_data = std::copy(rec_output.begin(), rec_output.end(), output_data);
}
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_OP("GetUKeysFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("ukeys: int64")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that returns unhashed keys from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
ukeys: unhashed keys / raw feature ids from the original request.
)doc");
class GetUKeysFromHashedDataRecord : public GetOutputFromHashedDataRecord<int64, int64_t> {
public:
explicit GetUKeysFromHashedDataRecord(OpKernelConstruction* context)
: GetOutputFromHashedDataRecord<int64, int64_t>(context){
getter = [](const twml::HashedDataRecord &record) -> const std::vector<int64_t> & {
return record.keys();
};
}
};
REGISTER_OP("GetKeysFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("keys: int64")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that returns keys from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
keys: keys after raw feature ids are hashed with values (int64)
)doc");
class GetKeysFromHashedDataRecord : public GetOutputFromHashedDataRecord<int64, int64_t> {
public:
explicit GetKeysFromHashedDataRecord(OpKernelConstruction* context)
: GetOutputFromHashedDataRecord<int64, int64_t>(context){
getter = [](const twml::HashedDataRecord &record) -> const std::vector<int64_t> & {
return record.transformed_keys();
};
}
};
REGISTER_OP("GetValuesFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("values: float")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that returns values from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
values: feature values.
)doc");
class GetValuesFromHashedDataRecord : public GetOutputFromHashedDataRecord<float, double> {
public:
explicit GetValuesFromHashedDataRecord(OpKernelConstruction* context)
: GetOutputFromHashedDataRecord<float, double>(context){
getter = [](const twml::HashedDataRecord &record) -> const std::vector<double> & {
return record.values();
};
}
};
REGISTER_OP("GetCodesFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("codes: int64")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that returns codes from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
codes: deepbird feature code, usually from A,B,C,D ... in the config.
)doc");
class GetCodesFromHashedDataRecord : public GetOutputFromHashedDataRecord<int64, int64_t> {
public:
explicit GetCodesFromHashedDataRecord(OpKernelConstruction* context)
: GetOutputFromHashedDataRecord<int64, int64_t>(context){
getter = [](const twml::HashedDataRecord &record) -> const std::vector<int64_t> & {
return record.codes();
};
}
};
REGISTER_OP("GetTypesFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("types: int8")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that returns types from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
types: feature types corresponding to BINARY, DISCRETE, etc.
)doc");
class GetTypesFromHashedDataRecord : public GetOutputFromHashedDataRecord<int8, uint8_t> {
public:
explicit GetTypesFromHashedDataRecord(OpKernelConstruction* context)
: GetOutputFromHashedDataRecord<int8, uint8_t>(context){
getter = [](const twml::HashedDataRecord &record) -> const std::vector<uint8_t> & {
return record.types();
};
}
};
REGISTER_OP("GetBatchSizeFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("batch_size: int64")
.SetShapeFn(shape_inference::ScalarShape)
.Doc(R"doc(
A tensorflow OP that returns batch size from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
batch_size: Number of records held in the handle.
)doc");
class GetBatchSizeFromHashedDataRecord : public OpKernel {
public:
explicit GetBatchSizeFromHashedDataRecord(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
try {
auto handle = getHandle<HashedDataRecordResource>(context, 0);
Tensor *output;
OP_REQUIRES_OK(context, context->allocate_output(0, TensorShape({}), &output));
output->scalar<int64>()() = handle->records.size();
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_OP("GetTotalSizeFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("total_size: int64")
.SetShapeFn(shape_inference::ScalarShape)
.Doc(R"doc(
A tensorflow OP that returns total size from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
total_size: Total number of keys / values in the batch.
)doc");
class GetTotalSizeFromHashedDataRecord : public OpKernel {
public:
explicit GetTotalSizeFromHashedDataRecord(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
try {
auto handle = getHandle<HashedDataRecordResource>(context, 0);
Tensor *output;
OP_REQUIRES_OK(context, context->allocate_output(0, TensorShape({}), &output));
output->scalar<int64>()() = handle->total_size;
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_OP("GetLabelsFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("labels: float")
.Attr("default_label: float")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that returns labels from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
labels: A 2D tensor of size [batch_size, num_labels] containing the label values.
)doc");
class GetLabelsFromHashedDataRecord : public OpKernel {
private:
float default_label;
public:
explicit GetLabelsFromHashedDataRecord(OpKernelConstruction* context)
: OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("default_label", &default_label));
}
void Compute(OpKernelContext* context) override {
try {
auto handle = getHandle<HashedDataRecordResource>(context, 0);
const auto &records = handle->records;
const int num_labels = static_cast<int>(handle->num_labels);
TensorShape shape = {static_cast<int64>(handle->records.size()), num_labels};
Tensor *labels;
OP_REQUIRES_OK(context, context->allocate_output(0, shape, &labels));
// The default value used when a label is not present in the data record is std::nanf.
// Replace such NaNs with default_label; labels that are present are kept as-is.
auto func = [this](float label) -> float {
return std::isnan(label) ? default_label : label;
};
auto labels_data = labels->flat<float>().data();
for (const auto &record : records) {
const auto& rec_labels = record.labels();
labels_data = std::transform(rec_labels.begin(), rec_labels.end(), labels_data, func);
}
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_OP("GetWeightsFromHashedDataRecord")
.Input("hashed_data_record_handle: resource")
.Output("weights: float")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that returns weights from the hashed data record.
Input
hashed_data_record_handle: Resource handle to DataRecord
Outputs
weights: A 2D tensor of size [batch_size, num_weights] containing the weight values.
)doc");
class GetWeightsFromHashedDataRecord : public OpKernel {
public:
explicit GetWeightsFromHashedDataRecord(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
try {
auto handle = getHandle<HashedDataRecordResource>(context, 0);
const auto &records = handle->records;
const int num_weights = static_cast<int>(handle->num_weights);
TensorShape shape = {static_cast<int64>(handle->records.size()), num_weights};
Tensor *weights;
OP_REQUIRES_OK(context, context->allocate_output(0, shape, &weights));
auto weights_data = weights->flat<float>().data();
for (const auto &record : records) {
const auto& rec_weights = record.weights();
weights_data = std::copy(rec_weights.begin(), rec_weights.end(), weights_data);
}
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
#define REGISTER_DECODE_AND_HASH(InputType) \
REGISTER_KERNEL_BUILDER( \
Name("DecodeAndHashDataRecord") \
.Device(DEVICE_CPU) \
.TypeConstraint<InputType>("InputType"), \
  DecodeAndHashDataRecord<InputType>);
REGISTER_DECODE_AND_HASH(uint8)
REGISTER_DECODE_AND_HASH(string)
#define REGISTER_GETTER(FIELD) \
REGISTER_KERNEL_BUILDER( \
Name("Get" #FIELD "FromHashedDataRecord") \
.Device(DEVICE_CPU), \
  Get##FIELD##FromHashedDataRecord);
REGISTER_GETTER(Ids)
REGISTER_GETTER(UKeys)
REGISTER_GETTER(Keys)
REGISTER_GETTER(Values)
REGISTER_GETTER(Codes)
REGISTER_GETTER(Types)
REGISTER_GETTER(BatchSize)
REGISTER_GETTER(TotalSize)
REGISTER_GETTER(Labels)
REGISTER_GETTER(Weights)
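A sketch of the intended dataflow: one decode op produces a per-step resource handle, and the getter ops all read from it. The library name and feature ids below are assumptions:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
ops = tf.load_op_library('libtwml_tf.so')  # assumed library name/path
# serialized holds Thrift-encoded HashedDataRecords, one string per record.
serialized = tf.placeholder(tf.string, shape=[None])
handle = ops.decode_and_hash_data_record(
    input_bytes=serialized,
    keep_features=[101, 102], keep_codes=[0, 1],  # illustrative ids/codes
    label_features=[200])
# Every getter below reads the same decoded batch via the handle.
keys = ops.get_keys_from_hashed_data_record(handle)
values = ops.get_values_from_hashed_data_record(handle)
labels = ops.get_labels_from_hashed_data_record(handle, default_label=0.0)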

View File

@ -0,0 +1,260 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/util/work_sharder.h"
#include <twml.h>
#include "tensorflow_utils.h"
using namespace tensorflow;
void ComputeHashingDiscretizer(
OpKernelContext*,
int64_t,
const twml::Map<int64_t, int64_t> &,
int64_t,
int64_t,
int64_t);
REGISTER_OP("HashingDiscretizer")
.Attr("T: {float, double}")
.Input("input_ids: int64")
.Input("input_vals: T")
.Input("bin_vals: T")
.Attr("feature_ids: tensor = { dtype: DT_INT64 }")
.Attr("n_bin: int")
.Attr("output_bits: int")
.Attr("cost_per_unit: int")
.Attr("options: int")
.Output("new_keys: int64")
.Output("new_vals: T")
.SetShapeFn(
[](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->input(0));
c->set_output(1, c->input(1));
return Status::OK();
}
)
.Doc(R"doc(
This operation discretizes a tensor containing continuous features (if calibrated).
- note - choice of float or double should be consistent among inputs/output
Input
input_ids(int64): A tensor containing input feature ids (direct from data record).
input_vals(float/double): A tensor containing input values at corresponding feature ids.
- i.e. input_ids[i] <-> input_vals[i] for each i
bin_vals(float/double): A tensor containing the bin boundaries for values of a given feature.
- float or double, matching input_vals
feature_ids(int64 attr): 1D TensorProto of feature IDs seen during calibration
-> hint: look up make_tensor_proto:
proto_init = np.array(values, dtype=np.int64)
tensor_attr = tf.make_tensor_proto(proto_init)
n_bin(int): The number of bin boundary values per feature
-> hence, n_bin + 1 buckets for each feature
output_bits(int): The maximum number of bits to use for the output IDs.
cost_per_unit(int): An estimate of the number of CPU cycles (or nanoseconds
if not CPU-bound) to complete a unit of work. Overestimating creates too
many shards and CPU time will be dominated by per-shard overhead, such as
Context creation. Underestimating may not fully make use of the specified
parallelism.
options(int): selects behavior of the op.
0x00 in bits{1:0} for std::lower_bound bucket search.
0x01 in bits{1:0} for linear bucket search.
0x02 in bits{1:0} for std::upper_bound bucket search.
0x00 in bits{4:2} for integer_multiplicative_hashing.
0x01 in bits{4:2} for integer64_multiplicative_hashing.
Higher bits/other values are reserved for future extensions.
Outputs
new_keys(int64): The discretized feature ids with same shape and size as keys.
new_vals(float or double): The discretized values with the same shape and size as vals.
Operation
Note that the discretization operation maps observation vectors to higher dimensional
observation vectors. Here, we describe this mapping.
Let a calibrated feature observation be given by (F,x), where F is the ID of the
feature, and x is some real value (i.e., continuous feature). This kind of
representation is useful for the representation of sparse vectors, where there
are many zeros.
For example, for a dense feature vector [1.2, 2.4, 3.6], we might have
(0, 1.2) (1, 2.4) and (2, 3.6), with feature IDs indicating the 0th, 1st, and 2nd
elements of the vector.
The discretizer performs the following operation:
(F,x) -> (map(x|F),1).
Hence, we have that map(x|F) is a new feature ID, and the value observed for that
feature is 1. We might read map(x|F) as 'the map of x for feature F'.
For each feature F, we associate a (discrete, finite) set of new feature IDs, newIDs(F).
We will then have that map(x|F) is in the set newIDs(F) for any value of x. Each
set member of newIDs(F) is associated with a 'bin', as defined by the bin
boundaries given in the bin_vals input array. For any two different feature IDs F
and G, we would ideally have that INTERSECT(newIDs(F),newIDs(G)) is the empty set.
However, this is not guaranteed for this discretizer.
In the case of this hashing discretizer, map(x|F) can actually be written as follows:
let bucket = bucket(x|F) be the bucket index for x, according to the
calibration on F. (This is an integer value in [0,n_bin], inclusive)
F is an integer ID. Here, we have that map(x|F) = hash_fn(F,bucket). This has
the desirable property that the new ID depends only on the calibration data
supplied for feature F, and not on any other features in the dataset (e.g.,
number of other features present in the calibration data, or order of features
in the dataset). Note that PercentileDiscretizer does NOT have this property.
This comes at the expense of the possibility of output ID collisions, which
we try to minimize through the design of hash_fn.
Example - consider input vector with a single element, i.e. [x].
Let's Discretize to one of 2 values, as follows:
Let F=0 for the ID of the single feature in the vector.
Let the bin boundary of feature F=0 be BNDRY(F) = BNDRY(0) since F=0
bucket = bucket(x|F=0) = 0 if x<=BNDRY(0) else 1
Let map(x|F) = hash_fn(F=0,bucket=0) if x<=BNDRY(0) else hash_fn(F=0,bucket=1)
If we had another element y in the vector, i.e. [x, y], then we might additionally
Let F=1 for element y.
Let the bin boundary be BNDRY(F) = BNDRY(1) since F=1
bucket = bucket(x|F=1) = 0 if x<=BNDRY(1) else 1
Let map(x|F) = hash_fn(F=1,bucket=0) if x<=BNDRY(1) else hash_fn(F=1,bucket=1)
Note how the construction of map(x|F=1) does not depend on whether map(x|F=0)
was constructed.
)doc");
template<typename T>
class HashingDiscretizer : public OpKernel {
public:
explicit HashingDiscretizer(OpKernelConstruction* context) : OpKernel(context) {
OP_REQUIRES_OK(context,
context->GetAttr("n_bin", &n_bin_));
OP_REQUIRES(context,
n_bin_ > 0,
errors::InvalidArgument("Must have n_bin_ > 0."));
OP_REQUIRES_OK(context,
context->GetAttr("output_bits", &output_bits_));
OP_REQUIRES(context,
output_bits_ > 0,
errors::InvalidArgument("Must have output_bits_ > 0."));
OP_REQUIRES_OK(context,
context->GetAttr("cost_per_unit", &cost_per_unit_));
OP_REQUIRES(context,
cost_per_unit_ >= 0,
errors::InvalidArgument("Must have cost_per_unit >= 0."));
OP_REQUIRES_OK(context,
context->GetAttr("options", &options_));
// construct the ID_to_index hash map
Tensor feature_IDs;
// extract the tensors
OP_REQUIRES_OK(context,
context->GetAttr("feature_ids", &feature_IDs));
// for access to the data
// int64_t data type is set in to_layer function of the calibrator objects in Python
auto feature_IDs_flat = feature_IDs.flat<int64>();
// verify proper dimension constraints
OP_REQUIRES(context,
feature_IDs.shape().dims() == 1,
errors::InvalidArgument("feature_ids must be 1D."));
// reserve space in the hash map and fill in the values
int64_t num_features = feature_IDs.shape().dim_size(0);
#ifdef USE_DENSE_HASH
ID_to_index_.set_empty_key(0);
ID_to_index_.resize(num_features);
#else
ID_to_index_.reserve(num_features);
#endif // USE_DENSE_HASH
for (int64_t i = 0 ; i < num_features ; i++) {
ID_to_index_[feature_IDs_flat(i)] = i;
}
}
void Compute(OpKernelContext* context) override {
ComputeHashingDiscretizer(
context,
output_bits_,
ID_to_index_,
n_bin_,
cost_per_unit_,
options_);
}
private:
twml::Map<int64_t, int64_t> ID_to_index_;
int n_bin_;
int output_bits_;
int cost_per_unit_;
int options_;
};
#define REGISTER(Type) \
REGISTER_KERNEL_BUILDER( \
Name("HashingDiscretizer") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
  HashingDiscretizer<Type>);
REGISTER(float);
REGISTER(double);
void ComputeHashingDiscretizer(
OpKernelContext* context,
int64_t output_bits,
const twml::Map<int64_t, int64_t> &ID_to_index,
int64_t n_bin,
int64_t cost_per_unit,
int64_t options) {
const Tensor& keys = context->input(0);
const Tensor& vals = context->input(1);
const Tensor& bin_vals = context->input(2);
const int64 output_size = keys.dim_size(0);
TensorShape output_shape;
OP_REQUIRES_OK(context, TensorShapeUtils::MakeShape(&output_size, 1, &output_shape));
Tensor* new_keys = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, output_shape, &new_keys));
Tensor* new_vals = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(1, output_shape, &new_vals));
try {
twml::Tensor out_keys_ = TFTensor_to_twml_tensor(*new_keys);
twml::Tensor out_vals_ = TFTensor_to_twml_tensor(*new_vals);
const twml::Tensor in_keys_ = TFTensor_to_twml_tensor(keys);
const twml::Tensor in_vals_ = TFTensor_to_twml_tensor(vals);
const twml::Tensor bin_vals_ = TFTensor_to_twml_tensor(bin_vals);
// retrieve the thread pool from the op context
auto worker_threads = *(context->device()->tensorflow_cpu_worker_threads());
// Definition of the computation thread
auto task = [&](int64 start, int64 limit) {
twml::hashDiscretizerInfer(out_keys_, out_vals_,
in_keys_, in_vals_,
n_bin,
bin_vals_,
output_bits,
ID_to_index,
start, limit,
options);
};
// let Tensorflow split up the work as it sees fit
Shard(worker_threads.num_threads,
worker_threads.workers,
output_size,
static_cast<int64>(cost_per_unit),
task);
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
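A sketch with a single calibrated feature, assuming the library name as before; with one bin boundary per feature (n_bin=1) there are two buckets, and the discretized ids are hash(feature_id, bucket) as described in the doc above:

import numpy as np
import tensorflow.compat.v1 as tf
ops = tf.load_op_library('libtwml_tf.so')  # assumed library name/path
# One calibrated feature (id 42) with one bin boundary at 0.0.
feature_ids = tf.make_tensor_proto(np.array([42], dtype=np.int64))
new_keys, new_vals = ops.hashing_discretizer(
    input_ids=tf.constant([42, 42], dtype=tf.int64),
    input_vals=tf.constant([-0.5, 0.5], dtype=tf.float32),
    bin_vals=tf.constant([0.0], dtype=tf.float32),
    feature_ids=feature_ids,
    n_bin=1, output_bits=22, cost_per_unit=500, options=0)
# new_vals is all ones; new_keys holds hash(42, bucket) per element.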

View File

@ -0,0 +1,84 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include <mutex>
using namespace tensorflow;
REGISTER_OP("Hashmap")
.Input("keys: int64")
.Input("hash_keys: int64")
.Input("hash_values: int64")
.Output("values: int64")
.Output("mask: int8")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
// TODO: check if the sizes are different in the input
c->set_output(0, c->input(0));
c->set_output(1, c->input(0));
return Status::OK();
});
class Hashmap : public OpKernel {
private:
twml::HashMap hmap;
std::once_flag flag;
public:
explicit Hashmap(OpKernelConstruction* context) : OpKernel(context) {}
void Compute(OpKernelContext* context) override {
try {
// Quick hack
const Tensor& keys = context->input(0);
std::call_once(this->flag, [this, context](){
const Tensor& hash_keys = context->input(1);
const Tensor& hash_values = context->input(2);
const auto hash_keys_flat = hash_keys.flat<int64>();
const auto hash_values_flat = hash_values.flat<int64>();
const int64 N = hash_keys_flat.size();
for (int64 i = 0; i < N; i++) {
hmap.insert(hash_keys_flat(i), hash_values_flat(i));
}
});
Tensor* values = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, keys.shape(),
&values));
Tensor* mask = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(1, keys.shape(),
&mask));
// initialize values from the keys, without sharing storage
values->flat<int64>() = keys.flat<int64>();
auto keys_flat = keys.flat<int64>();
auto values_flat = values->flat<int64>();
auto mask_flat = mask->flat<int8>();
// TODO: use twml tensor
const int64 N = keys_flat.size();
for (int64 i = 0; i < N; i++) {
// values_flat(i), keys_flat(i) return references to tensorflow::int64.
// Using them in hmap.get() was causing issues because of automatic casting.
int64_t val = values_flat(i);
int64_t key = keys_flat(i);
mask_flat(i) = hmap.get(val, key);
values_flat(i) = val;
}
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
REGISTER_KERNEL_BUILDER(
Name("Hashmap")
.Device(DEVICE_CPU),
Hashmap);
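An illustrative sketch (library name assumed); since the values output is initialized from keys before the lookup, the reading below assumes twml::HashMap::get writes the mapped value on a hit and leaves the key untouched on a miss:

import tensorflow.compat.v1 as tf
ops = tf.load_op_library('libtwml_tf.so')  # assumed library name/path
values, mask = ops.hashmap(
    keys=tf.constant([1, 5], dtype=tf.int64),
    hash_keys=tf.constant([1, 2], dtype=tf.int64),
    hash_values=tf.constant([10, 20], dtype=tf.int64))
# Expected: values == [10, 5] (miss passes the key through), mask == [1, 0].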

View File

@ -0,0 +1,81 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
using namespace tensorflow;
REGISTER_OP("IsotonicCalibration")
.Attr("T: {float, double}")
.Input("input: T")
.Input("xs: T")
.Input("ys: T")
.Output("output: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
// output shape should be the same as input shape.
c->set_output(0, c->input(0));
return Status::OK();
}).Doc(R"doc(
This operation calibrates probabilities by fitting to a piece-wise non-decreasing function.
Input
input: A tensor containing uncalibrated probabilities.
xs: A tensor containing the boundaries of the bins.
ys: A tensor containing calibrated values for the corresponding bins.
Expected Sizes:
input: [batch_size, num_labels].
xs, ys: [num_labels, num_bins].
Expected Types:
input: float or double.
xs, ys: same as input.
Outputs
output: A tensor containing calibrated probabilities with same shape and size as input.
)doc");
template<typename T>
class IsotonicCalibration : public OpKernel {
public:
explicit IsotonicCalibration(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
const Tensor& input = context->input(0);
const Tensor& xs = context->input(1);
const Tensor& ys = context->input(2);
Tensor* output = nullptr;
OP_REQUIRES_OK(
context,
context->allocate_output(0, input.shape(), &output));
try {
const twml::Tensor twml_input = TFTensor_to_twml_tensor(input);
const twml::Tensor twml_xs = TFTensor_to_twml_tensor(xs);
const twml::Tensor twml_ys = TFTensor_to_twml_tensor(ys);
twml::Tensor twml_output = TFTensor_to_twml_tensor(*output);
twml::linearInterpolation(twml_output, twml_input, twml_xs, twml_ys);
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
};
#define REGISTER(Type) \
\
REGISTER_KERNEL_BUILDER( \
Name("IsotonicCalibration") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
  IsotonicCalibration<Type>);
REGISTER(float);
REGISTER(double);
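A sketch for one label with three calibration points (library name and constants are illustrative); per the doc above, each input is linearly interpolated against its label's (xs, ys) curve:

import tensorflow.compat.v1 as tf
ops = tf.load_op_library('libtwml_tf.so')  # assumed library name/path
raw = tf.constant([[0.25]], dtype=tf.float32)           # [batch, num_labels]
xs = tf.constant([[0.0, 0.5, 1.0]], dtype=tf.float32)   # [num_labels, bins]
ys = tf.constant([[0.1, 0.4, 0.9]], dtype=tf.float32)
# 0.25 sits halfway between 0.0 and 0.5, so the output interpolates
# halfway along the (0.1, 0.4) segment, i.e. 0.25.
calibrated = ops.isotonic_calibration(input=raw, xs=xs, ys=ys)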

View File

@ -0,0 +1,39 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/common_shape_fns.h"
using namespace tensorflow;
REGISTER_OP("NumIntraOpThreads")
.Input("x: float32")
.Output("num_intra_op_threads: int32")
.SetShapeFn(tensorflow::shape_inference::ScalarShape)
.Doc(R"doc(
A tensorflow OP that returns the number of threads in the intra_op_parallelism pool.
This is not exposed through the Tensorflow API as of the date of writing this doc. Hence,
a tensorflow operation is the best available workaround.
Input
x: Dummy placeholder so that constant folding is not done by TF GraphOptimizer.
Please refer https://github.com/tensorflow/tensorflow/issues/22546 for more
details.
Output
num_intra_op_threads: A scalar tensor corresponding to the number of threads in
the intra_op_parallelism pool
)doc");
class NumIntraOpThreads : public OpKernel {
public:
explicit NumIntraOpThreads(OpKernelConstruction* context)
: OpKernel(context) {}
void Compute(OpKernelContext* context) override {
int num_intra_op_threads = context->device()->tensorflow_cpu_worker_threads()->num_threads;
Tensor* output_tensor = NULL;
OP_REQUIRES_OK(context, context->allocate_output(0, TensorShape({}), &output_tensor));
auto output_flat = output_tensor->flat<int32>();
output_flat(0) = num_intra_op_threads;
}
};
REGISTER_KERNEL_BUILDER(Name("NumIntraOpThreads").Device(DEVICE_CPU), NumIntraOpThreads);

View File

@ -0,0 +1,75 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/util/work_sharder.h"
#include "tensorflow/core/lib/core/threadpool.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/platform/mutex.h"
#include "tensorflow/core/platform/logging.h"
#include <iostream>
#include <vector>
using namespace tensorflow;
REGISTER_OP("ParAdd")
.Input("input_a: float")
.Input("input_b: float")
.Output("a_plus_b: float")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->input(0));
return Status::OK();
});
class ParAddOp : public OpKernel {
public:
explicit ParAddOp(OpKernelConstruction* context) : OpKernel(context) {
}
void Compute(OpKernelContext* context) override {
// Grab the input tensor
const Tensor& input_tensor0 = context->input(0);
auto input_flat0 = input_tensor0.flat<float>();
const Tensor& input_tensor1 = context->input(1);
auto input_flat1 = input_tensor1.flat<float>();
OP_REQUIRES(context, input_tensor0.shape() == input_tensor1.shape(),
errors::InvalidArgument("Input tensors must be identical shape."));
// Create an output tensor
Tensor* output_tensor = NULL;
OP_REQUIRES_OK(context,
context->allocate_output(0,
input_tensor0.shape(),
&output_tensor));
auto output_flat = output_tensor->flat<float>();
// PARALLEL ADD
const int N = input_flat0.size();
// retrieve the thread pool from the op context
auto worker_threads = *(context->device()->tensorflow_cpu_worker_threads());
// Definition of the computation thread
auto task = [=, &input_flat0, &input_flat1, &output_flat](int64 start, int64 limit) {
for (; start < limit; ++start) {
output_flat(start) = input_flat0(start) + input_flat1(start);
}
};
// This is a heuristic; a higher estimate makes the work more likely to be sharded into smaller pieces.
int64 cost_per_unit = 1;
// let Tensorflow split up the work as it sees fit
Shard(worker_threads.num_threads,
worker_threads.workers,
N,
cost_per_unit,
task);
}
};
REGISTER_KERNEL_BUILDER(Name("ParAdd").Device(DEVICE_CPU), ParAddOp);

View File

@ -0,0 +1,125 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <twml.h>
#include "tensorflow_utils.h"
using namespace tensorflow;
REGISTER_OP("PartitionSparseTensorMod")
.Attr("T: {float, double}")
.Input("indices: int64")
.Input("values: T")
.Output("result: output_types")
.Attr("num_partitions: int")
.Attr("output_types: list({int64, float, double})")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
return Status::OK();
}).Doc(R"doc(
A tensorflow OP that partitions an input batch represented as a sparse tensor
(indices are [ids, keys]) into separate sparse tensors to more optimally place
sparse computations in distributed training.
Inputs
indices: Indices from sparse tensor ([ids, keys] from the batch).
values: Batch values from the original features dict.
Attr
num_partitions: Number of partitions to generate.
output_types: A list of types for the output tensors like
[tf.int64, tf.float32, tf.int64, tf.float32, ...]
The length must be 2 * num_partitions (see Outputs below)
Outputs
List of dense tensors containing for each partition:
- partitioned indices tensor ([ids, keys] from partitioned batch)
- partitioned values tensor
The list length is 2 * num_partitions. Example:
[ [ids_1, keys_1], values_1, [ids_2, keys_2], values_2, ... ]
)doc");
template<typename T>
class PartitionSparseTensorMod : public OpKernel {
private:
int64 num_partitions;
public:
explicit PartitionSparseTensorMod(OpKernelConstruction* context) : OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("num_partitions", &num_partitions));
OP_REQUIRES(context, num_partitions > 0,
errors::InvalidArgument("Number of partitions must be positive"));
}
void Compute(OpKernelContext* context) override {
// grab input tensors
const Tensor& indices_tensor = context->input(0); // (ids, keys)
const Tensor& values_tensor = context->input(1);
// check sizes
int64 num_keys = indices_tensor.shape().dim_size(0);
OP_REQUIRES(context, indices_tensor.dims() == 2,
errors::InvalidArgument("Indices tensor must be 2D [ids, keys]"));
OP_REQUIRES(context, indices_tensor.shape().dim_size(1) == 2,
errors::InvalidArgument("Indices tensor must have 2 cols [ids, keys]"));
OP_REQUIRES(context, values_tensor.shape().dim_size(0) == num_keys,
errors::InvalidArgument("Number of values must match number of keys"));
// grab input vectors
auto indices = indices_tensor.flat<int64>();
auto values = values_tensor.flat<T>();
// count the number of features that fall in each partition
std::vector<int64> partition_counts(num_partitions);
for (int i = 0; i < num_keys; i++) {
int64 key = indices(2 * i + 1);
int64 partition_id = key % num_partitions;
partition_counts[partition_id]++;
}
// allocate outputs for each partition and keep references
std::vector<int64*> output_indices_partitions;
std::vector<T*> output_values_partitions;
output_indices_partitions.reserve(num_partitions);
output_values_partitions.reserve(num_partitions);
for (int i = 0; i < num_partitions; i++) {
Tensor *output_indices = nullptr, *output_values = nullptr;
TensorShape shape_indices = TensorShape({partition_counts[i], 2});
TensorShape shape_values = TensorShape({partition_counts[i]});
OP_REQUIRES_OK(context, context->allocate_output(2 * i, shape_indices, &output_indices));
OP_REQUIRES_OK(context, context->allocate_output(2 * i + 1, shape_values, &output_values));
output_indices_partitions.push_back(output_indices->flat<int64>().data());
output_values_partitions.push_back(output_values->flat<T>().data());
}
// assign a partition id to each feature
// populate tensors for each partition
std::vector<int64> partition_indices(num_partitions);
for (int i = 0; i < num_keys; i++) {
int64 key = indices(2 * i + 1);
int64 pid = key % num_partitions; // partition id
int64 idx = partition_indices[pid]++;
output_indices_partitions[pid][2 * idx] = indices(2 * i);
output_indices_partitions[pid][2 * idx + 1] = key / num_partitions;
output_values_partitions[pid][idx] = values(i);
}
}
};
#define REGISTER(Type) \
\
REGISTER_KERNEL_BUILDER( \
Name("PartitionSparseTensorMod") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
  PartitionSparseTensorMod<Type>);
REGISTER(float);
REGISTER(double);
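A sketch with two partitions (library name assumed). Per the kernel above, keys are routed by key % num_partitions and renumbered as key / num_partitions within their partition:

import tensorflow.compat.v1 as tf
ops = tf.load_op_library('libtwml_tf.so')  # assumed library name/path
indices = tf.constant([[0, 4], [0, 5], [1, 7]], dtype=tf.int64)  # [id, key]
values = tf.constant([1.0, 2.0, 3.0])
parts = ops.partition_sparse_tensor_mod(
    indices=indices, values=values, num_partitions=2,
    output_types=[tf.int64, tf.float32, tf.int64, tf.float32])
# parts = [indices_0, values_0, indices_1, values_1]; key 4 lands in
# partition 0 as key 2, keys 5 and 7 land in partition 1 as keys 2 and 3.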

View File

@ -0,0 +1,241 @@
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/util/work_sharder.h"
#include <twml.h>
#include "tensorflow_utils.h"
using namespace tensorflow;
void CombinedComputeDiscretizers(
OpKernelContext*,
int64_t,
const twml::Map<int64_t, int64_t>&,
int64_t);
REGISTER_OP("PercentileDiscretizerV2")
.Attr("T: {float, double}")
.Input("input_ids: int64")
.Input("input_vals: T")
.Input("bin_ids: int64")
.Input("bin_vals: T")
.Input("feature_offsets: int64")
.Input("start_compute: int64")
.Input("end_compute: int64")
.Attr("output_bits: int")
.Attr("feature_ids: tensor = { dtype: DT_INT64 }")
.Attr("feature_indices: tensor = { dtype: DT_INT64 }")
.Attr("cost_per_unit: int")
.Output("new_keys: int64")
.Output("new_vals: T")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
// TODO: check sizes
c->set_output(0, c->input(0));
c->set_output(1, c->input(0));
return Status::OK();
}).Doc(R"doc(
This operation discretizes a tensor containing continuous features (if calibrated).
- note - choice of float or double should be consistent among inputs/output
Input
input_ids(int64): A tensor containing input feature ids (direct from data record).
input_vals: A tensor containing input values at corresponding feature ids.
- i.e. input_ids[i] <-> input_vals[i] for each i
- float or double
bin_ids(int64): A tensor containing the discretized feature id for each bin.
bin_vals: A tensor containing the bin boundaries for values of a given feature.
- float or double
feature_offsets(int64): Specifies the starting location of bins for a given feature id.
start_compute(int64 scalar tensor): which index to start the computation at
end_compute(int64 scalar tensor): which index to end the computation right before
-> for example, (start_compute,end_compute)=(0,10) would compute on 0 thru 9
output_bits(int): The maximum number of bits to use for the output IDs.
-> 2**output_bits must be greater than bin_ids.size
feature_ids(int64): 1D TensorProto of feature IDs seen during calibration
feature_indices(int64): 1D TensorProto of feature indices corresponding with feature_IDs
-> hint: look up make_tensor_proto:
proto_init = np.array(values, dtype=np.int64)
tensor_attr = tf.make_tensor_proto(proto_init)
cost_per_unit(int): An estimate of the number of CPU cycles (or nanoseconds
if not CPU-bound) to complete a unit of work. Overestimating creates too
many shards and CPU time will be dominated by per-shard overhead, such as
Context creation. Underestimating may not fully make use of the specified
parallelism.
Outputs
new_keys(int64): The discretized feature ids with same shape and size as keys.
new_vals(float or double): The discretized values with the same shape and size as vals.
Operation
Note that the discretization operation maps observation vectors to higher dimensional
observation vectors. Here, we describe this mapping.
Let a calibrated feature observation be given by (F,x), where F is the ID of the
feature, and x is some real value (i.e., continuous feature). This kind of
representation is useful for the representation of sparse vectors, where there
are many zeros.
For example, for a dense feature vector [1.2, 2.4, 3.6], we might have
(0, 1.2) (1, 2.4) and (2, 3.6), with feature IDs indicating the 0th, 1st, and 2nd
elements of the vector.
The discretizer performs the following operation:
(F,x) -> (map(x|F),1).
Hence, we have that map(x|F) is a new feature ID, and the value observed for that
feature is 1. We might read map(x|F) as 'the map of x for feature F'.
For each feature F, we associate a (discrete, finite) set of new feature IDs, newIDs(F).
We will then have that map(x|F) is in the set newIDs(F) for any value of x. Each set member
of newIDs(F) is associated with a 'bin', as defined by the bin boundaries given in
the bin_vals input array. For any two different feature IDs F and G, we have that
INTERSECT(newIDs(F),newIDs(G)) is the empty set
Example - consider input vector with a single element, i.e. [x].
Let's Discretize to one of 2 values, as follows:
Let F=0 for the ID of the single feature in the vector.
Let the bin boundary of feature F=0 be BNDRY(F) = BNDRY(0) since F=0
Let newIDs(F) = newIDs(0) = {0,1}
Let map(x|F) = map(x|0) = 0 if x<=BNDRY else 1
If we had another element y in the vector, i.e. [x, y], then we might additionally
Let F=1 for element y.
Let the bin boundary be BNDRY(F) = BNDRY(1) since F=1
Let newIDs(F) = newIDs(1) = {2,3} (so as to have empty intersect with newIDs(0))
Let map(x|F) = map(x|1) = 2 if x<=BNDRY else 3
Consider vector observation [-0.1, 0.2]. We then represent this as [(0, -0.1), (1, 0.2)]
Let BNDRY(0) = BNDRY(1) = 0. When we discretize the vector, we get:
(0, -0.1) -> (map(-0.1|0), 1) = (0, 1)
(1, 0.2) -> (map( 0.2|1), 1) = (3, 1)
Our output vector is then represented sparsely as [(0, 1), (3, 1)], and the dense
representation of this could be [1, 0, 0, 1]
)doc");
template<typename T>
class PercentileDiscretizerV2 : public OpKernel {
public:
explicit PercentileDiscretizerV2(OpKernelConstruction* context) : OpKernel(context) {
// get the number of output bits
// for use with features that have not been calibrated
OP_REQUIRES_OK(context,
context->GetAttr("output_bits", &output_bits_));
OP_REQUIRES_OK(context,
context->GetAttr("cost_per_unit", &cost_per_unit_));
OP_REQUIRES(context, cost_per_unit_ >= 0,
errors::InvalidArgument("Must have cost_per_unit >= 0."));
// construct the ID_to_index hash map
Tensor feature_IDs;
Tensor feature_indices;
// extract the tensors
OP_REQUIRES_OK(context,
context->GetAttr("feature_ids", &feature_IDs));
OP_REQUIRES_OK(context,
context->GetAttr("feature_indices", &feature_indices));
// for access to the data
// int64_t data type is set in to_layer function of the calibrator objects in Python
auto feature_IDs_flat = feature_IDs.flat<int64>();
auto feature_indices_flat = feature_indices.flat<int64>();
// verify proper dimension constraints
OP_REQUIRES(context, feature_IDs.shape() == feature_indices.shape(),
errors::InvalidArgument("feature_ids and feature_indices must be identical shape."));
OP_REQUIRES(context, feature_IDs.shape().dims() == 1,
errors::InvalidArgument("feature_ids and feature_indices must be 1D."));
// reserve space in the hash map and fill in the values
int num_features = feature_IDs.shape().dim_size(0);
#ifdef USE_DENSE_HASH
ID_to_index_.set_empty_key(0);
ID_to_index_.resize(num_features);
#else
ID_to_index_.reserve(num_features);
#endif // USE_DENSE_HASH
for (int i = 0 ; i < num_features ; i++) {
ID_to_index_[feature_IDs_flat(i)] = feature_indices_flat(i);
}
}
void Compute(OpKernelContext* context) override {
CombinedComputeDiscretizers(
context,
output_bits_,
ID_to_index_,
cost_per_unit_);
}
private:
twml::Map<int64_t, int64_t> ID_to_index_;
int output_bits_;
int cost_per_unit_;
};
#define REGISTER(Type) \
REGISTER_KERNEL_BUILDER( \
Name("PercentileDiscretizerV2") \
.Device(DEVICE_CPU) \
.TypeConstraint<Type>("T"), \
  PercentileDiscretizerV2<Type>);
REGISTER(float);
REGISTER(double);
void CombinedComputeDiscretizers(
OpKernelContext* context,
int64_t output_bits,
const twml::Map<int64_t, int64_t> &ID_to_index,
int64_t cost_per_unit) {
const Tensor& keys = context->input(0);
const Tensor& vals = context->input(1);
const Tensor& bin_ids = context->input(2);
const Tensor& bin_vals = context->input(3);
const Tensor& feature_offsets = context->input(4);
const int64 full_size = keys.dim_size(0);
const int64 total_size = full_size;
TensorShape output_shape = {total_size};
Tensor* new_keys = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(0, output_shape, &new_keys));
Tensor* new_vals = nullptr;
OP_REQUIRES_OK(context, context->allocate_output(1, output_shape, &new_vals));
try {
twml::Tensor out_keys_ = TFTensor_to_twml_tensor(*new_keys);
twml::Tensor out_vals_ = TFTensor_to_twml_tensor(*new_vals);
const twml::Tensor in_keys_ = TFTensor_to_twml_tensor(keys);
const twml::Tensor in_vals_ = TFTensor_to_twml_tensor(vals);
const twml::Tensor bin_ids_ = TFTensor_to_twml_tensor(bin_ids);
const twml::Tensor bin_vals_ = TFTensor_to_twml_tensor(bin_vals);
const twml::Tensor feature_offsets_ = TFTensor_to_twml_tensor(feature_offsets);
// retrieve the thread pool from the op context
auto worker_threads = *(context->device()->tensorflow_cpu_worker_threads());
// Definition of the computation thread
auto task = [&](int64 start, int64 limit) {
twml::discretizerInfer(out_keys_, out_vals_,
in_keys_, in_vals_,
bin_ids_, bin_vals_,
feature_offsets_, output_bits,
ID_to_index,
start, limit,
start);
};
// let Tensorflow split up the work as it sees fit
Shard(worker_threads.num_threads,
worker_threads.workers,
full_size,
static_cast<int64>(cost_per_unit),
task);
} catch (const std::exception &e) {
context->CtxFailureWithWarning(errors::InvalidArgument(e.what()));
}
}
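A sketch with a single calibrated feature; in practice the bin_ids/bin_vals/feature_offsets layout is produced by the calibrator, so the shapes and constants below are illustrative assumptions only:

import numpy as np
import tensorflow.compat.v1 as tf
ops = tf.load_op_library('libtwml_tf.so')  # assumed library name/path
fids = tf.make_tensor_proto(np.array([42], dtype=np.int64))  # feature id
fidx = tf.make_tensor_proto(np.array([0], dtype=np.int64))   # its index
new_keys, new_vals = ops.percentile_discretizer_v2(
    input_ids=tf.constant([42], dtype=tf.int64),
    input_vals=tf.constant([0.3], dtype=tf.float32),
    bin_ids=tf.constant([0, 1], dtype=tf.int64),     # one id per bucket
    bin_vals=tf.constant([0.0], dtype=tf.float32),   # boundary at 0.0
    feature_offsets=tf.constant([0], dtype=tf.int64),
    start_compute=tf.constant(0, dtype=tf.int64),
    end_compute=tf.constant(1, dtype=tf.int64),
    output_bits=22, feature_ids=fids, feature_indices=fidx,
    cost_per_unit=500)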

View File

@ -0,0 +1,126 @@
#pragma once
#include <twml.h>
#include <atomic>
#include <string>
#include <vector>
// Add these to make gcc ignore the warnings from tensorflow.
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wsign-compare"
#include "tensorflow/core/framework/resource_mgr.h"
#include "tensorflow/core/framework/resource_op_kernel.h"
#pragma GCC diagnostic pop
#include <memory>
#include <functional>
template<typename T>
void unrefHandle(T *handle) {
handle->Unref();
}
template <typename T>
using unique_handle = std::unique_ptr<T, std::function<void(T *)> >;
// as std::type_index is not abi compatible, we bypass the hash_code checks.
// https://github.com/tensorflow/tensorflow/commit/15275d3a14c77e2244ae1155f93243256f08e3ed
#ifdef __APPLE__
template <typename T>
Status CreateTwmlResource(OpKernelContext* ctx, const ResourceHandle& p, T* value) {
return ctx->resource_manager()->Create(p.container(), p.name(), value);
}
template <typename T>
Status LookupTwmlResource(OpKernelContext* ctx, const ResourceHandle& p,
T** value) {
return ctx->resource_manager()->Lookup(p.container(), p.name(), value);
}
#endif // __APPLE__
template<typename T>
unique_handle<T> getHandle(tensorflow::OpKernelContext* context, int input_idx) {
using namespace tensorflow;
T *ptr = nullptr;
#ifdef __APPLE__
auto s = LookupTwmlResource(context, HandleFromInput(context, input_idx), &ptr);
#else
auto s = LookupResource(context, HandleFromInput(context, input_idx), &ptr);
#endif // __APPLE__
if (!s.ok()) {
throw std::runtime_error("Failed to get resource handle");
}
return unique_handle<T>(ptr, unrefHandle<T>);
}
template<typename InputType>
const uint8_t *getInputBytes(const Tensor &input, int id) {
return reinterpret_cast<const uint8_t *>(input.flat<InputType>().data());
}
template<>
inline const uint8_t *getInputBytes<string>(const Tensor &input, int id) {
return reinterpret_cast<const uint8_t *>(input.flat<string>()(id).c_str());
}
template<typename InputType>
const int getBatchSize(const Tensor &input) {
return 1;
}
template<>
inline const int getBatchSize<string>(const Tensor &input) {
return static_cast<int>(input.NumElements());
}
class DataRecordResource : public ResourceBase {
public:
Tensor input;
int64 num_labels;
int64 num_weights;
twml::DataRecord common;
std::vector<twml::DataRecord> records;
twml::Map<int64_t, int64_t> *keep_map;
string DebugString() const override { return "DataRecords resource"; }
};
// A thin layer around a batch of HashedDataRecords
class HashedDataRecordResource : public ResourceBase {
public:
Tensor input;
int64 total_size;
int64 num_labels;
int64 num_weights;
twml::HashedDataRecord common;
std::vector<twml::HashedDataRecord> records;
string DebugString() const override { return "HashedDataRecord Resource"; }
};
#define TF_CHECK_STATUS(fn) do { \
Status s = fn; \
if (!s.ok()) return s; \
} while (0)
template<typename ResourceType>
Status makeResourceHandle(OpKernelContext* context, int out_idx, ResourceType **resource_) {
static std::atomic<int64> id;
Tensor* handle_tensor;
TF_CHECK_STATUS(context->allocate_output(out_idx, TensorShape({}), &handle_tensor));
ResourceType *resource = new ResourceType();
const auto resource_name = typeid(ResourceType).name() + std::to_string(id++);
ResourceHandle handle = MakePerStepResourceHandle<ResourceType>(context, resource_name);
#ifdef __APPLE__
TF_CHECK_STATUS(CreateTwmlResource(context, handle, resource));
#else
TF_CHECK_STATUS(CreateResource(context, handle, resource));
#endif // __APPLE__
handle_tensor->scalar<ResourceHandle>()() = handle;
*resource_ = resource;
return Status::OK();
}

View File

@ -0,0 +1,5 @@
"""Gets the path of headers for the current Tensorflow library"""
import tensorflow.compat.v1 as tf
print(tf.sysconfig.get_include(), end='')

View File

@ -0,0 +1,2 @@
#!/bin/sh
PEX_INTERPRETER=1 "$PYTHON_ENV" "$LIBTWML_HOME"/src/ops/scripts/get_inc.py

Some files were not shown because too many files have changed in this diff.