nolambocoin new miner

Commit e0dd59f90e by w12, 2025-01-01 20:47:22 +01:00
7388 changed files with 1263698 additions and 0 deletions

.github/workflows/build.yml (new file, 35 lines)

@@ -0,0 +1,35 @@
name: Build CPU miner
on: [push]
jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Get static libcurl-dev package
        run: cd deps-linux64/ && ls && chmod +x ./deps-linux64.sh && ./deps-linux64.sh
      - name: Set-up autoconf
        run: chmod +x ./autogen.sh && ./autogen.sh
      - name: configure
        run: chmod +x ./configure && ./configure CFLAGS="-Wall -O2 -fomit-frame-pointer" LDFLAGS="-static" CXXFLAGS="$CFLAGS -std=gnu++11" --with-curl=/usr/local/ --with-crypto=/usr/local/ssl
      - name: make
        run: make
      - name: make check
        run: make check
      - name: CPU test
        run: ./sugarmaker --help
      - name: Zips
        run: zip --junk-paths cpuminer sugarmaker
      - name: Upload Release Asset
        id: upload-release-asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.get_release.outputs.upload_url }}
          asset_path: ./cpuminer.zip
          asset_name: cpuminer.zip
          asset_content_type: application/zip

.github/workflows/release.yml (new file, 42 lines)

@@ -0,0 +1,42 @@
name: Release CPU miner
on:
  release:
    types: [created]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Get release
        id: get_release
        uses: bruceadams/get-release@v1.2.3
        env:
          GITHUB_TOKEN: ${{ github.token }}
      - uses: actions/checkout@v1
      - name: Get libcurl-dev package
        run: sudo apt-get install libcurl4-openssl-dev
      - name: Set-up autoconf
        run: chmod +x ./autogen.sh && ./autogen.sh
      - name: configure
        run: chmod +x ./configure && ./configure --with-crypto --with-curl
      - name: make
        run: make
      - name: make check
        run: make check
      - name: CPU test
        run: ./sugarmaker --help
      - name: Zips
        run: zip --junk-paths cpuminer sugarmaker
      - name: Upload Release Asset
        id: upload-release-asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.get_release.outputs.upload_url }}
          asset_path: ./cpuminer.zip
          asset_name: cpuminer.zip
          asset_content_type: application/zip

.gitignore (new file, 47 lines)

@@ -0,0 +1,47 @@
sugarmaker
sugarmaker.exe
*.o
# old binary
minerd
minerd.exe
autom4te.cache
.deps
Makefile
Makefile.in
INSTALL
aclocal.m4
configure
configure.lineno
depcomp
missing
install-sh
stamp-h1
cpuminer-config.h*
compile
config.log
config.status
config.status.lineno
config.guess
config.sub
mingw32-config.cache
# yespower
yespower-1.0.1*/.dirstamp
# release
sugarmaker-v*/
# deps-win64
#deps-win64/curl-*
#deps-win64/pthread-*
#deps-win64/x86_64-w64-mingw32/
# deps-win32
#deps-win32/curl-*
#deps-win32/pthread-*
#deps-win32/i686-w64-mingw32/

AUTHORS (new file, 9 lines)

@@ -0,0 +1,9 @@
Jeff Garzik <jgarzik@pobox.com>
ArtForz
pooler <pooler@litecoinpool.org>
Alexander Peslyak <solar@openwall.com>
Kanon <60179867+decryp2kanon@users.noreply.github.com>

COPYING (new file, 340 lines)

@@ -0,0 +1,340 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.

ChangeLog (new file, 1 line)

@@ -0,0 +1 @@
See git repository ('git log') for full changelog.

Dockerfile (new file, 22 lines)

@@ -0,0 +1,22 @@
#
# Dockerfile for sugarmaker
# usage: docker run creack/cpuminer --url xxxx --user xxxx --pass xxxx
# ex: docker run creack/cpuminer --url stratum+tcp://ltc.pool.com:80 --user creack.worker1 --pass abcdef
#
#
FROM ubuntu:16.04
MAINTAINER kanon <60179867+decryp2kanon@users.noreply.github.com>
RUN apt-get update -qq && \
apt-get install -qqy automake libcurl4-openssl-dev git make gcc
RUN git clone https://github.com/decryp2kanon/sugarmaker
RUN cd sugarmaker && \
./autogen.sh && \
./configure CFLAGS='-O2 -fomit-frame-pointer' && \
make
WORKDIR /sugarmaker
ENTRYPOINT ["./sugarmaker"]

LICENSE (new file, 3 lines)

@@ -0,0 +1,3 @@
sugarmaker is available under the terms of the GNU General Public License version 2.
See COPYING for details.

Makefile.am (new file, 35 lines)

@@ -0,0 +1,35 @@
AUTOMAKE_OPTIONS = subdir-objects
if WANT_JANSSON
JANSSON_INCLUDES= -I$(top_srcdir)/compat/jansson
else
JANSSON_INCLUDES=
endif
EXTRA_DIST = example-cfg.json
SUBDIRS = compat
bin_PROGRAMS = sugarmaker
dist_man_MANS = sugarmaker.1
sugarmaker_SOURCES = elist.h miner.h compat.h \
cpu-miner.c util.c \
sha2.c \
yespower-1.0.1/sha256.c yespower-1.0.1/yespower-opt.c \
YespowerSugar.c \
YespowerIso.c \
YespowerNull.c \
YespowerUrx.c \
YespowerLitb.c \
YespowerIots.c \
YespowerItc.c \
YespowerYtn.c \
yespower-1.0.1-power2b/sha256-p2b.c yespower-1.0.1-power2b/yespower-opt-p2b.c yespower-1.0.1-power2b/blake2b.c \
YespowerMbc.c \
YespowerARM.c
sugarmaker_LDFLAGS = $(PTHREAD_FLAGS)
sugarmaker_LDADD = @LIBCURL@ @JANSSON_LIBS@ @PTHREAD_LIBS@ @WS2_LIBS@
sugarmaker_CPPFLAGS = $(PTHREAD_FLAGS) @CPPFLAGS@ $(JANSSON_INCLUDES)

NEWS (new file, 13 lines)

@@ -0,0 +1,13 @@
Version 1.0 - 01.01.2025
- Add YespowerARM for NoLamboCoin
* YespowerSugar: Sugarchain (default)
* YespowerIso: IsotopeC
* YespowerNull: CranePay, Bellcoin, Veco, SwampCoin
* YespowerUrx: UraniumX
* YespowerLitb: LightBit
* YespowerIots: IOTS
* YespowerItc: Intercoin
* YespowerMbc: power2b for MicroBitcoin
* YespowerARM: NoLamboCoin

README.md (new file, 90 lines)

@@ -0,0 +1,90 @@
# Yenten ARM miner (yespowerr16 algo)
Command to test Yenten mining:
```
sugarmaker.exe -a yespowerr16 -o stratum+tcp://cpu-pool.com:63368 -u wallet_address
```
![GitHub All Releases](https://img.shields.io/github/downloads/yentencoin/yenten-arm-miner-yespowerr16/total)
This is a multi-threaded CPU miner for ***Yenten Coin***, a fork of sugarmaker, which is in turn a fork of solardiz's (Resistance) fork of pooler's (Litecoin) fork of Jeff Garzik's (Bitcoin) reference cpuminer. This fork supports only Yespower-variant algorithms.
License: [GPLv2](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html). See COPYING for details.
Git tree: https://github.com/yentencoin/yenten-arm-miner-yespowerr16
### Build dependencies:
```
autoconf
automake
GNU make
gcc
libcurl https://curl.haxx.se/libcurl/
```
- On recent Debian and Ubuntu, these can be installed with:
```
sudo apt-get install build-essential libcurl4-openssl-dev autotools-dev automake libtool
```
### Basic Unix build instructions:
```
./autogen.sh
./configure CFLAGS="-Wall -O2 -fomit-frame-pointer" CXXFLAGS="$CFLAGS -std=gnu++11"
make
```
Notes for AIX users:
- To build a 64-bit binary, export `OBJECT_MODE=64`
- GNU-style long options are not supported, but the equivalent settings are accessible via the configuration file (see the sketch below)
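A hedged sketch of such a configuration file, passed with `-c`/`--config` (the pool URL and wallet address are the placeholders used elsewhere in this README; the canonical sample is `example-cfg.json` in the source tree, and the keys are assumed to mirror the long option names):
```
{
  "algo": "yespowerr16",
  "url": "stratum+tcp://cpu-pool.com:63368",
  "user": "wallet_address",
  "pass": "x",
  "threads": 1
}
```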
### Basic Windows build instructions, using MinGW:
- Install MinGW and the MSYS Developer Tool Kit (http://www.mingw.org/)
* Make sure you have `mstcpip.h` in `MinGW\include`
- If using MinGW-w64, install `pthreads-w64`
- Install `libcurl devel` (https://curl.haxx.se/download.html)
* Make sure you have `libcurl.m4` in `MinGW\share\aclocal`
* Make sure you have `curl-config` in `MinGW\bin`
- In the MSYS shell, run:
```
./autogen.sh
LIBCURL='-lcurldll' ./configure
make
```
### Usage instructions:
Run `sugarmaker --help` to see options. You can mine solo or on a pool using these options:
- Mainnet (Solo)
```
./sugarmaker -a yespowerr16 -o http://127.0.0.1:9982 -u user -p pass --coinbase-addr=wallet_address -t1
```
- Mainnet (Stratum Pool)
```
./sugarmaker -a yespowerr16 -o stratum+tcp://cpu-pool.com:63368 -u wallet_address -t1
```
(Omit the leading `./` if you're on Windows.) For solo mining to work, you need
a *fully-synced node* running locally with an RPC username/password configured,
e.g. with the following in your `.yenten/yenten.conf`:
```
rpcbind=127.0.0.1
rpcallowip=127.0.0.0/8
rpcuser=user
rpcpassword=pass
```
- Connecting through a proxy:
* Use the `--proxy` option (see the example after this list).
* To use a SOCKS proxy, add a `socks4://` or `socks5://` prefix to the proxy host.
* The protocols `socks4a` and `socks5h`, which allow remote name resolution, are also available since libcurl 7.18.0.
* If no protocol is specified, the proxy is assumed to be an HTTP proxy.
* When the `--proxy` option is not used, the program honors the `http_proxy` and `all_proxy` environment variables.
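For example, a hedged sketch of pointing the miner at a local SOCKS5 proxy (the proxy address `127.0.0.1:1080` is a placeholder, not something this repository configures):
```
./sugarmaker -a yespowerr16 -o stratum+tcp://cpu-pool.com:63368 -u wallet_address --proxy socks5://127.0.0.1:1080 -t1
```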
### Author
- Jeff Garzik <jeff@garzik.org>
- Pooler <pooler@litecoinpool.org>
- Alexander Peslyak <solar@openwall.com>
- Kanon <60179867+decryp2kanon@users.noreply.github.com>
- Yentencoin

YespowerARM.c (new file, 110 lines)

@@ -0,0 +1,110 @@
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1/yespower.h"
#include "yespower.h"
#include "sysendian.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
const yespower_params_t *select_yespower_params(const char *cpu_info) {
#ifdef __arm__
if (strstr(cpu_info, "BCM2837") || strstr(cpu_info, "BCM2711")) {
static const yespower_params_t params_rpi = {
.version = YESPOWER_1_0,
.N = 2048,
.r = 8,
.pers = (const uint8_t *)"Raspberry",
.perslen = 9 /* strlen("Raspberry") */
};
return &params_rpi;
} else if (strstr(cpu_info, "BCM2712")) {
static const yespower_params_t params_rpi5 = {
.version = YESPOWER_1_0,
.N = 3072,
.r = 12,
.pers = (const uint8_t *)"Raspberry5",
.perslen = 10 /* strlen("Raspberry5") */
};
return &params_rpi5;
} else {
// ARM-Server
static const yespower_params_t params_arm_server = {
.version = YESPOWER_1_0,
.N = 4096,
.r = 16,
.pers = (const uint8_t *)"ARMServer",
.perslen = 9 /* strlen("ARMServer") */
};
return &params_arm_server;
}
#else
static const yespower_params_t params_default = {
.version = YESPOWER_1_0,
.N = 4096,
.r = 16,
.pers = (const uint8_t *)"Default",
.perslen = 7
};
return &params_default;
#endif
}
static void get_cpu_info(char *cpu_info, size_t max_size) {
#ifdef __arm__
FILE *cpuinfo_file = fopen("/proc/cpuinfo", "r");
if (cpuinfo_file) {
fread(cpu_info, 1, max_size - 1, cpuinfo_file);
fclose(cpuinfo_file);
cpu_info[max_size - 1] = '\0';
} else {
strncpy(cpu_info, "Unknown ARM", max_size);
}
#else
strncpy(cpu_info, "x86/x64", max_size);
#endif
}
int yespower_hash(const char *input, char *output) {
char cpu_info[256] = {0};
get_cpu_info(cpu_info, sizeof(cpu_info));
const yespower_params_t *params = select_yespower_params(cpu_info);
return yespower_tls(input, 80, params, (yespower_binary_t *) output);
}
int scanhash_arm_yespower(int thr_id, uint32_t *data, uint32_t *target, uint32_t max_nonce, unsigned long *hashes_done) {
uint32_t nonce = data[19]; // the nonce is the 20th element of the data
unsigned char hash[32]; // buffer for the computed hash
int result = 0; // return value
*hashes_done = 0; // initialize the hash counter
// initialize the CPU info buffer
char cpu_info[256] = {0};
get_cpu_info(cpu_info, sizeof(cpu_info));
// select the yespower parameters based on the CPU information
const yespower_params_t *params = select_yespower_params(cpu_info);
for (; nonce < max_nonce; nonce++) {
data[19] = nonce; // update the nonce
// compute the hash with yespower
if (yespower_tls((const uint8_t *)data, 80, params, (yespower_binary_t *)hash) != 0) {
fprintf(stderr, "Thread %d: yespower computation failed.\n", thr_id);
break;
}
// check whether the computed hash is below the target
if (memcmp(hash, target, 32) <= 0) {
printf("Thread %d: Gültiger Hash gefunden! Nonce: %u\n", thr_id, nonce);
result = 1;
break;
}
(*hashes_done)++;
}
data[19] = nonce; // restore the nonce
return result;
}

YespowerIots.c (new file, 84 lines)

@@ -0,0 +1,84 @@
/*
* Copyright 2011 ArtForz, 2011-2014 pooler, 2018 The Resistance developers, 2020 The Sugarchain Yumekawa developers
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file is loosely based on a tiny portion of pooler's cpuminer scrypt.c.
*/
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1/yespower.h"
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int scanhash_iots_yespower(int thr_id, uint32_t *pdata,
const uint32_t *ptarget,
uint32_t max_nonce, unsigned long *hashes_done)
{
static const yespower_params_t params = {
.version = YESPOWER_1_0,
.N = 2048,
.r = 32,
.pers = (const uint8_t *)"Iots is committed to the development of IOT",
.perslen = 43
};
union {
uint8_t u8[8];
uint32_t u32[20];
} data;
union {
yespower_binary_t yb;
uint32_t u32[7];
} hash;
uint32_t n = pdata[19] - 1;
const uint32_t Htarg = ptarget[7];
int i;
for (i = 0; i < 19; i++)
be32enc(&data.u32[i], pdata[i]);
do {
be32enc(&data.u32[19], ++n);
if (yespower_tls(data.u8, 80, &params, &hash.yb))
abort();
if (le32dec(&hash.u32[7]) <= Htarg) {
for (i = 0; i < 7; i++)
hash.u32[i] = le32dec(&hash.u32[i]);
if (fulltest(hash.u32, ptarget)) {
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 1;
}
}
} while (n < max_nonce && !work_restart[thr_id].restart);
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 0;
}

YespowerIso.c (new file, 84 lines)

@@ -0,0 +1,84 @@
/*
* Copyright 2011 ArtForz, 2011-2014 pooler, 2018 The Resistance developers, 2020 The Sugarchain Yumekawa developers
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file is loosely based on a tiny portion of pooler's cpuminer scrypt.c.
*/
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1/yespower.h"
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int scanhash_iso_yespower(int thr_id, uint32_t *pdata,
const uint32_t *ptarget,
uint32_t max_nonce, unsigned long *hashes_done)
{
static const yespower_params_t params = {
.version = YESPOWER_1_0,
.N = 2048,
.r = 32,
.pers = (const uint8_t *)"IsotopeC",
.perslen = 8
};
union {
uint8_t u8[8];
uint32_t u32[20];
} data;
union {
yespower_binary_t yb;
uint32_t u32[7];
} hash;
uint32_t n = pdata[19] - 1;
const uint32_t Htarg = ptarget[7];
int i;
for (i = 0; i < 19; i++)
be32enc(&data.u32[i], pdata[i]);
do {
be32enc(&data.u32[19], ++n);
if (yespower_tls(data.u8, 80, &params, &hash.yb))
abort();
if (le32dec(&hash.u32[7]) <= Htarg) {
for (i = 0; i < 7; i++)
hash.u32[i] = le32dec(&hash.u32[i]);
if (fulltest(hash.u32, ptarget)) {
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 1;
}
}
} while (n < max_nonce && !work_restart[thr_id].restart);
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 0;
}

YespowerItc.c (new file, 84 lines)

@@ -0,0 +1,84 @@
/*
* Copyright 2011 ArtForz, 2011-2014 pooler, 2018 The Resistance developers, 2020 The Sugarchain Yumekawa developers
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file is loosely based on a tiny portion of pooler's cpuminer scrypt.c.
*/
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1/yespower.h"
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int scanhash_itc_yespower(int thr_id, uint32_t *pdata,
const uint32_t *ptarget,
uint32_t max_nonce, unsigned long *hashes_done)
{
static const yespower_params_t params = {
.version = YESPOWER_1_0,
.N = 2048,
.r = 32,
.pers = (const uint8_t *)"InterITC",
.perslen = 8
};
union {
uint8_t u8[8];
uint32_t u32[20];
} data;
union {
yespower_binary_t yb;
uint32_t u32[7];
} hash;
uint32_t n = pdata[19] - 1;
const uint32_t Htarg = ptarget[7];
int i;
for (i = 0; i < 19; i++)
be32enc(&data.u32[i], pdata[i]);
do {
be32enc(&data.u32[19], ++n);
if (yespower_tls(data.u8, 80, &params, &hash.yb))
abort();
if (le32dec(&hash.u32[7]) <= Htarg) {
for (i = 0; i < 7; i++)
hash.u32[i] = le32dec(&hash.u32[i]);
if (fulltest(hash.u32, ptarget)) {
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 1;
}
}
} while (n < max_nonce && !work_restart[thr_id].restart);
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 0;
}

YespowerLitb.c (new file, 84 lines)

@@ -0,0 +1,84 @@
/*
* Copyright 2011 ArtForz, 2011-2014 pooler, 2018 The Resistance developers, 2020 The Sugarchain Yumekawa developers
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file is loosely based on a tiny portion of pooler's cpuminer scrypt.c.
*/
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1/yespower.h"
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int scanhash_litb_yespower(int thr_id, uint32_t *pdata,
const uint32_t *ptarget,
uint32_t max_nonce, unsigned long *hashes_done)
{
static const yespower_params_t params = {
.version = YESPOWER_1_0,
.N = 2048,
.r = 32,
.pers = (const uint8_t *)"LITBpower: The number of LITB working or available for proof-of-work mining",
.perslen = 73
};
union {
uint8_t u8[8];
uint32_t u32[20];
} data;
union {
yespower_binary_t yb;
uint32_t u32[7];
} hash;
uint32_t n = pdata[19] - 1;
const uint32_t Htarg = ptarget[7];
int i;
for (i = 0; i < 19; i++)
be32enc(&data.u32[i], pdata[i]);
do {
be32enc(&data.u32[19], ++n);
if (yespower_tls(data.u8, 80, &params, &hash.yb))
abort();
if (le32dec(&hash.u32[7]) <= Htarg) {
for (i = 0; i < 7; i++)
hash.u32[i] = le32dec(&hash.u32[i]);
if (fulltest(hash.u32, ptarget)) {
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 1;
}
}
} while (n < max_nonce && !work_restart[thr_id].restart);
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 0;
}

YespowerMbc.c (new file, 84 lines)

@@ -0,0 +1,84 @@
/*
* Copyright 2011 ArtForz, 2011-2014 pooler, 2018 The Resistance developers, 2020 The Sugarchain Yumekawa developers
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file is loosely based on a tiny portion of pooler's cpuminer scrypt.c.
*/
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1-power2b/yespower-p2b.h"
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int scanhash_mbc_yespower(int thr_id, uint32_t *pdata,
const uint32_t *ptarget,
uint32_t max_nonce, unsigned long *hashes_done)
{
static const yespower_params_t params = {
.version = YESPOWER_1_0_BLAKE2B,
.N = 2048,
.r = 32,
.pers = (const uint8_t *)"Now I am become Death, the destroyer of worlds",
.perslen = 46
};
union {
uint8_t u8[8];
uint32_t u32[20];
} data;
union {
yespower_binary_t_p2b yb;
uint32_t u32[7];
} hash;
uint32_t n = pdata[19] - 1;
const uint32_t Htarg = ptarget[7];
int i;
for (i = 0; i < 19; i++)
be32enc(&data.u32[i], pdata[i]);
do {
be32enc(&data.u32[19], ++n);
if (yespower_tls_p2b(data.u8, 80, &params, &hash.yb))
abort();
if (le32dec(&hash.u32[7]) <= Htarg) {
for (i = 0; i < 7; i++)
hash.u32[i] = le32dec(&hash.u32[i]);
if (fulltest(hash.u32, ptarget)) {
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 1;
}
}
} while (n < max_nonce && !work_restart[thr_id].restart);
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 0;
}

YespowerNull.c (new file, 84 lines)

@@ -0,0 +1,84 @@
/*
* Copyright 2011 ArtForz, 2011-2014 pooler, 2018 The Resistance developers, 2020 The Sugarchain Yumekawa developers
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file is loosely based on a tiny portion of pooler's cpuminer scrypt.c.
*/
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1/yespower.h"
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int scanhash_null_yespower(int thr_id, uint32_t *pdata,
const uint32_t *ptarget,
uint32_t max_nonce, unsigned long *hashes_done)
{
static const yespower_params_t params = {
.version = YESPOWER_1_0,
.N = 2048,
.r = 32,
.pers = NULL,
.perslen = 0
};
union {
uint8_t u8[8];
uint32_t u32[20];
} data;
union {
yespower_binary_t yb;
uint32_t u32[7];
} hash;
uint32_t n = pdata[19] - 1;
const uint32_t Htarg = ptarget[7];
int i;
for (i = 0; i < 19; i++)
be32enc(&data.u32[i], pdata[i]);
do {
be32enc(&data.u32[19], ++n);
if (yespower_tls(data.u8, 80, &params, &hash.yb))
abort();
if (le32dec(&hash.u32[7]) <= Htarg) {
for (i = 0; i < 7; i++)
hash.u32[i] = le32dec(&hash.u32[i]);
if (fulltest(hash.u32, ptarget)) {
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 1;
}
}
} while (n < max_nonce && !work_restart[thr_id].restart);
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 0;
}

YespowerSugar.c (new file, 84 lines)

@@ -0,0 +1,84 @@
/*
* Copyright 2011 ArtForz, 2011-2014 pooler, 2018 The Resistance developers, 2020 The Sugarchain Yumekawa developers
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file is loosely based on a tiny portion of pooler's cpuminer scrypt.c.
*/
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1/yespower.h"
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int scanhash_sugar_yespower(int thr_id, uint32_t *pdata,
const uint32_t *ptarget,
uint32_t max_nonce, unsigned long *hashes_done)
{
static const yespower_params_t params = {
.version = YESPOWER_1_0,
.N = 2048,
.r = 32,
.pers = (const uint8_t *)"Satoshi Nakamoto 31/Oct/2008 Proof-of-work is essentially one-CPU-one-vote",
.perslen = 74
};
union {
uint8_t u8[8];
uint32_t u32[20];
} data;
union {
yespower_binary_t yb;
uint32_t u32[7];
} hash;
uint32_t n = pdata[19] - 1;
const uint32_t Htarg = ptarget[7];
int i;
for (i = 0; i < 19; i++)
be32enc(&data.u32[i], pdata[i]);
do {
be32enc(&data.u32[19], ++n);
if (yespower_tls(data.u8, 80, &params, &hash.yb))
abort();
if (le32dec(&hash.u32[7]) <= Htarg) {
for (i = 0; i < 7; i++)
hash.u32[i] = le32dec(&hash.u32[i]);
if (fulltest(hash.u32, ptarget)) {
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 1;
}
}
} while (n < max_nonce && !work_restart[thr_id].restart);
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 0;
}

YespowerUrx.c (new file, 84 lines)

@@ -0,0 +1,84 @@
/*
* Copyright 2011 ArtForz, 2011-2014 pooler, 2018 The Resistance developers, 2020 The Sugarchain Yumekawa developers
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file is loosely based on a tiny portion of pooler's cpuminer scrypt.c.
*/
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1/yespower.h"
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int scanhash_urx_yespower(int thr_id, uint32_t *pdata,
const uint32_t *ptarget,
uint32_t max_nonce, unsigned long *hashes_done)
{
static const yespower_params_t params = {
.version = YESPOWER_1_0,
.N = 2048,
.r = 32,
.pers = (const uint8_t *)"UraniumX",
.perslen = 8
};
union {
uint8_t u8[8];
uint32_t u32[20];
} data;
union {
yespower_binary_t yb;
uint32_t u32[7];
} hash;
uint32_t n = pdata[19] - 1;
const uint32_t Htarg = ptarget[7];
int i;
for (i = 0; i < 19; i++)
be32enc(&data.u32[i], pdata[i]);
do {
be32enc(&data.u32[19], ++n);
if (yespower_tls(data.u8, 80, &params, &hash.yb))
abort();
if (le32dec(&hash.u32[7]) <= Htarg) {
for (i = 0; i < 7; i++)
hash.u32[i] = le32dec(&hash.u32[i]);
if (fulltest(hash.u32, ptarget)) {
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 1;
}
}
} while (n < max_nonce && !work_restart[thr_id].restart);
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 0;
}

YespowerYtn.c (new file, 84 lines)

@@ -0,0 +1,84 @@
/*
* Copyright 2011 ArtForz, 2011-2014 pooler, 2018 The Resistance developers, 2020 The Sugarchain Yumekawa developers
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file is loosely based on a tiny portion of pooler's cpuminer scrypt.c.
*/
#include "cpuminer-config.h"
#include "miner.h"
#include "yespower-1.0.1/yespower.h"
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int scanhash_ytn_yespower(int thr_id, uint32_t *pdata,
const uint32_t *ptarget,
uint32_t max_nonce, unsigned long *hashes_done)
{
static const yespower_params_t params = {
.version = YESPOWER_1_0,
.N = 4096,
.r = 16,
.pers = NULL,
.perslen = 0
};
union {
uint8_t u8[8];
uint32_t u32[20];
} data;
union {
yespower_binary_t yb;
uint32_t u32[7];
} hash;
uint32_t n = pdata[19] - 1;
const uint32_t Htarg = ptarget[7];
int i;
for (i = 0; i < 19; i++)
be32enc(&data.u32[i], pdata[i]);
do {
be32enc(&data.u32[19], ++n);
if (yespower_tls(data.u8, 80, &params, &hash.yb))
abort();
if (le32dec(&hash.u32[7]) <= Htarg) {
for (i = 0; i < 7; i++)
hash.u32[i] = le32dec(&hash.u32[i]);
if (fulltest(hash.u32, ptarget)) {
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 1;
}
}
} while (n < max_nonce && !work_restart[thr_id].restart);
*hashes_done = n - pdata[19] + 1;
pdata[19] = n;
return 0;
}

autogen.sh (new file, 12 lines)

@@ -0,0 +1,12 @@
#!/bin/sh
# You need autoconf 2.5x, preferably 2.57 or later
# You need automake 1.7 or later. 1.6 might work.
set -e
aclocal
autoheader
automake --gnu --add-missing --copy
autoconf

build-aarch64.sh (new file, 51 lines)

@@ -0,0 +1,51 @@
# try on virtualbox ubuntu 16.04
# https://lxadm.com/Static_compilation_of_cpuminer
# CLEAN
make distclean || echo clean
rm -f config.status
# DEPENDS
## OPENSSL
# wget https://www.openssl.org/source/openssl-1.1.0g.tar.gz
# tar -xvzf openssl-1.1.0g.tar.gz
# cd openssl-1.1.0g/
# ./config no-shared
# make -j$(nproc)
# sudo make install
# cd ..
## CURL
# wget https://github.com/curl/curl/releases/download/curl-7_57_0/curl-7.57.0.tar.gz
# tar -xvzf curl-7.57.0.tar.gz
# cd curl-7.57.0/
# ./buildconf | grep "buildconf: OK"
# ./configure --disable-shared | grep "Static=yes"
# make -j$(nproc)
# sudo make install
# cd ..
# BUILD
./autogen.sh
# CFLAGS="-Wall -O2 -fomit-frame-pointer" ./configure
# ./configure --with-curl="/usr/local/" --with-crypto="/usr/local/" CFLAGS="-Wall -O2 -fomit-frame-pointer" LDFLAGS="-static -I/usr/local/lib/ -L/usr/local/lib/libcrypto.a" LIBS="-lssl -lcrypto -lz -lpthread -ldl" CFLAGS="-DCURL_STATICLIB" --with-crypto
./configure --with-curl="/usr/local/" --with-crypto="/usr/local/" CFLAGS="-Wall -O2 -fomit-frame-pointer" LDFLAGS="-static" LIBS="-ldl -lz"
make
strip -s sugarmaker
# CHECK STATIC
file sugarmaker | grep "statically linked"
# PACKAGE
RELEASE=sugarmaker-v2.5.0-sugar4-aarch64
rm -rf $RELEASE
mkdir $RELEASE
cp ./mining-script/sh/*.sh $RELEASE/
cp sugarmaker $RELEASE/
# SIGN
zip -X $RELEASE/$RELEASE.zip $RELEASE/*
sha256sum $RELEASE/$RELEASE.zip > $RELEASE/$RELEASE
gpg --digest-algo sha256 --clearsign $RELEASE/$RELEASE
rm $RELEASE/$RELEASE && cat $RELEASE/$RELEASE.asc

build-armv7l.sh (new file, 51 lines)

@@ -0,0 +1,51 @@
# try on virtualbox ubuntu 16.04
# https://lxadm.com/Static_compilation_of_cpuminer
# CLEAN
make distclean || echo clean
rm -f config.status
# DEPENDS
## OPENSSL
# wget https://www.openssl.org/source/openssl-1.1.0g.tar.gz
# tar -xvzf openssl-1.1.0g.tar.gz
# cd openssl-1.1.0g/
# ./config no-shared
# make -j$(nproc)
# sudo make install
# cd ..
## CURL
# wget https://github.com/curl/curl/releases/download/curl-7_57_0/curl-7.57.0.tar.gz
# tar -xvzf curl-7.57.0.tar.gz
# cd curl-7.57.0/
# ./buildconf | grep "buildconf: OK"
# ./configure --disable-shared | grep "Static=yes"
# make -j$(nproc)
# sudo make install
# cd ..
# BUILD
./autogen.sh
# CFLAGS="-Wall -O2 -fomit-frame-pointer" ./configure
# ./configure --with-curl="/usr/local/" --with-crypto="/usr/local/" CFLAGS="-Wall -O2 -fomit-frame-pointer" LDFLAGS="-static -I/usr/local/lib/ -L/usr/local/lib/libcrypto.a" LIBS="-lssl -lcrypto -lz -lpthread -ldl" CFLAGS="-DCURL_STATICLIB" --with-crypto
./configure --with-curl="/usr/local/" --with-crypto="/usr/local/" CFLAGS="-Wall -O2 -fomit-frame-pointer" LDFLAGS="-static" LIBS="-ldl -lz"
make
strip -s sugarmaker
# CHECK STATIC
file sugarmaker | grep "statically linked"
# PACKAGE
RELEASE=sugarmaker-v2.5.0-sugar4-armv7l
rm -rf $RELEASE
mkdir $RELEASE
cp ./mining-script/sh/*.sh $RELEASE/
cp sugarmaker $RELEASE/
# SIGN
zip -X $RELEASE/$RELEASE.zip $RELEASE/*
sha256sum $RELEASE/$RELEASE.zip > $RELEASE/$RELEASE
gpg --digest-algo sha256 --clearsign $RELEASE/$RELEASE
rm $RELEASE/$RELEASE && cat $RELEASE/$RELEASE.asc

build-linux32.sh (new file, 49 lines)

@@ -0,0 +1,49 @@
# try on virtualbox ubuntu 16.04
# https://lxadm.com/Static_compilation_of_cpuminer
# CLEAN
make distclean || echo clean
rm -f config.status
# DEPENDS
## OPENSSL
# wget https://www.openssl.org/source/openssl-1.1.0g.tar.gz
# tar -xvzf openssl-1.1.0g.tar.gz
# cd openssl-1.1.0g/
# ./config no-shared
# make -j$(nproc)
# sudo make install
# cd ..
## CURL
# wget https://github.com/curl/curl/releases/download/curl-7_57_0/curl-7.57.0.tar.gz
# tar -xvzf curl-7.57.0.tar.gz
# cd curl-7.57.0/
# ./buildconf | grep "buildconf: OK"
# ./configure --disable-shared | grep "Static=yes"
# make -j$(nproc)
# sudo make install
# cd ..
# BUILD
./autogen.sh
CFLAGS="-Wall -O2 -fomit-frame-pointer" CXXFLAGS="$CFLAGS -std=gnu++11" LDFLAGS="-static" ./configure --with-curl=/usr/local/
make
strip -s sugarmaker
# CHECK STATIC
file sugarmaker | grep "statically linked"
# PACKAGE
RELEASE=sugarmaker-v2.5.0-sugar4-linux32
rm -rf $RELEASE
mkdir $RELEASE
cp ./mining-script/sh/*.sh $RELEASE/
cp sugarmaker $RELEASE/
# SIGN
zip -X $RELEASE/$RELEASE.zip $RELEASE/*
sha256sum $RELEASE/$RELEASE.zip > $RELEASE/$RELEASE
gpg --digest-algo sha256 --clearsign $RELEASE/$RELEASE
rm $RELEASE/$RELEASE && cat $RELEASE/$RELEASE.asc

33
build-linux64.sh Normal file
View File

@@ -0,0 +1,33 @@
# try on virtualbox ubuntu 16.04
# https://lxadm.com/Static_compilation_of_cpuminer
# CLEAN
make distclean || echo clean
rm -f config.status
# DEPENDS
# cd deps-linux64/
# ./deps-linux64.sh
# cd ..
# BUILD
./autogen.sh
./configure CFLAGS="-Wall -O2 -fomit-frame-pointer" LDFLAGS="-static" CXXFLAGS="$CFLAGS -std=gnu++11" --with-curl=/usr/local/
make
strip -s sugarmaker
# CHECK STATIC
file sugarmaker | grep "statically linked"
# PACKAGE
RELEASE=sugarmaker-v2.5.0-sugar4-linux64
rm -rf $RELEASE
mkdir $RELEASE
cp ./mining-script/sh/*.sh $RELEASE/
cp sugarmaker $RELEASE/
# SIGN
zip -X $RELEASE/$RELEASE.zip $RELEASE/*
sha256sum $RELEASE/$RELEASE.zip > $RELEASE/$RELEASE
gpg --digest-algo sha256 --clearsign $RELEASE/$RELEASE
rm $RELEASE/$RELEASE && cat $RELEASE/$RELEASE.asc

34
build-osx.sh Normal file
View File

@@ -0,0 +1,34 @@
# try on virtualbox ubuntu 16.04
# https://gist.github.com/quagliero/90f493f123c7b1ddba5428ba0146329a
# CLEAN
make distclean || echo clean
rm -f config.status
# HOTFIX for OSX
./autogen.sh
sed -i '' '/LIBCURL_CHECK_CONFIG/d' ./configure
sed -i '' '/AC_MSG_ERROR/d' ./configure
# BUILD (configure was already generated and patched above; re-running ./autogen.sh here would regenerate configure and undo the hotfix)
./configure CFLAGS="-Wall -O2 -fomit-frame-pointer" --with-crypto=/usr/local/opt/openssl --with-curl
make
strip sugarmaker
# CHECK STATIC
file sugarmaker | grep "statically linked"
# PACKAGE
RELEASE=sugarmaker-v2.5.0-sugar4-osx
rm -rf $RELEASE
mkdir $RELEASE
cp ./mining-script/sh/*.sh $RELEASE/
cp sugarmaker $RELEASE/
# SIGN
zip -r $RELEASE/$RELEASE.zip $RELEASE/*
# sha256sum $RELEASE/$RELEASE.zip > $RELEASE/$RELEASE
shasum -a 256 $RELEASE/$RELEASE.zip > $RELEASE/$RELEASE
gpg --digest-algo sha256 --clearsign $RELEASE/$RELEASE
rm $RELEASE/$RELEASE && cat $RELEASE/$RELEASE.asc

29
build-w32.sh Normal file
View File

@@ -0,0 +1,29 @@
# CLEAN
make distclean || echo clean
rm -f config.status
# DEPS
# cd deps-win32
# ./build_win_x86_deps.sh
# cd ..
# BUILD
autoreconf -fi -I./deps-win32/i686-w64-mingw32/share/aclocal
./autogen.sh
./configure --host=i686-w64-mingw32 LDFLAGS="-L./deps-win32/i686-w64-mingw32/lib -static" CFLAGS="-Wall -O2 -fomit-frame-pointer -I./deps-win32/i686-w64-mingw32/include -std=c99 -DWIN32 -DCURL_STATICLIB -DPTW32_STATIC_LIB" --with-libcurl=deps-win32/i686-w64-mingw32
make
strip -p --strip-debug --strip-unneeded sugarmaker.exe
# PACKAGE
RELEASE=sugarmaker-v2.5.0-sugar4-w32
rm -rf $RELEASE
mkdir $RELEASE
cp ./mining-script/bat/*.bat $RELEASE/
mv sugarmaker.exe $RELEASE/
# SIGN
zip -X $RELEASE/$RELEASE.zip $RELEASE/*
sha256sum $RELEASE/$RELEASE.zip > $RELEASE/$RELEASE
gpg --digest-algo sha256 --clearsign $RELEASE/$RELEASE
rm $RELEASE/$RELEASE && cat $RELEASE/$RELEASE.asc

29
build-w64.sh Normal file
View File

@@ -0,0 +1,29 @@
# CLEAN
make distclean || echo clean
rm -f config.status
# DEPS
# cd deps-win64
# ./build_win_x64_deps.sh
# cd ..
# BUILD
autoreconf -fi -I./deps-win64/x86_64-w64-mingw32/share/aclocal
./autogen.sh
./configure --host=x86_64-w64-mingw32 LDFLAGS="-L./deps-win64/x86_64-w64-mingw32/lib -static" CFLAGS="-Wall -O2 -fomit-frame-pointer -I./deps-win64/x86_64-w64-mingw32/include -std=c99 -DWIN32 -DCURL_STATICLIB -DPTW32_STATIC_LIB" --with-libcurl=deps-win64/x86_64-w64-mingw32
make
strip -p --strip-debug --strip-unneeded sugarmaker.exe
# PACKAGE
RELEASE=sugarmaker-v2.5.0-sugar4-w64
rm -rf $RELEASE
mkdir $RELEASE
cp ./mining-script/bat/*.bat $RELEASE/
mv sugarmaker.exe $RELEASE/
# SIGN
zip -X $RELEASE/$RELEASE.zip $RELEASE/*
sha256sum $RELEASE/$RELEASE.zip > $RELEASE/$RELEASE
gpg --digest-algo sha256 --clearsign $RELEASE/$RELEASE
rm $RELEASE/$RELEASE && cat $RELEASE/$RELEASE.asc

12
build.sh Normal file
View File

@@ -0,0 +1,12 @@
# CLEAN
make distclean || echo clean
rm -f config.status
# BUILD
./autogen.sh
./configure CFLAGS="-Wall -O2 -fomit-frame-pointer"
make -j$(nproc)
strip -s sugarmaker
# CHECK LINKAGE (this default build is dynamically linked)
file sugarmaker | grep "dynamically linked"

21
compat.h Normal file
View File

@@ -0,0 +1,21 @@
#ifndef __COMPAT_H__
#define __COMPAT_H__
#ifdef WIN32
#include <windows.h>
#define sleep(secs) Sleep((secs) * 1000)
enum {
PRIO_PROCESS = 0,
};
static inline int setpriority(int which, int who, int prio)
{
return -!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_IDLE);
}
#endif /* WIN32 */
#endif /* __COMPAT_H__ */
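
The shim above only has to cover Windows; a POSIX build gets the real calls from the system headers. A minimal, illustrative sketch of code that works unchanged on both sides (the file name and values are made up):

/* compat_demo.c — illustrative only */
#include "compat.h"
#ifndef WIN32
#include <unistd.h>        /* sleep() on POSIX */
#include <sys/time.h>
#include <sys/resource.h>  /* setpriority(), PRIO_PROCESS on POSIX */
#endif

int main(void)
{
    /* On Windows the shim in compat.h is used: sleep() maps to Sleep() in
       milliseconds and setpriority() drops the current thread to idle
       priority. On POSIX the real system calls are used instead. */
    setpriority(PRIO_PROCESS, 0, 19);
    sleep(1);
    return 0;
}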

7
compat/Makefile.am Normal file
View File

@@ -0,0 +1,7 @@
if WANT_JANSSON
SUBDIRS = jansson
else
SUBDIRS =
endif

3
compat/jansson/.gitignore vendored Normal file
View File

@@ -0,0 +1,3 @@
libjansson.a

19
compat/jansson/LICENSE Normal file
View File

@@ -0,0 +1,19 @@
Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

18
compat/jansson/Makefile.am Normal file
View File

@@ -0,0 +1,18 @@
noinst_LIBRARIES = libjansson.a
libjansson_a_SOURCES = \
config.h \
dump.c \
hashtable.c \
hashtable.h \
jansson.h \
jansson_private.h \
load.c \
strbuffer.c \
strbuffer.h \
utf.c \
utf.h \
util.h \
value.c

73
compat/jansson/config.h Normal file
View File

@@ -0,0 +1,73 @@
/* config.h. Generated from config.h.in by configure. */
/* config.h.in. Generated from configure.ac by autoheader. */
/* Define to 1 if you have the <dlfcn.h> header file. */
#define HAVE_DLFCN_H 1
/* Define to 1 if you have the <inttypes.h> header file. */
#define HAVE_INTTYPES_H 1
/* Define to 1 if you have the <memory.h> header file. */
#define HAVE_MEMORY_H 1
/* Define to 1 if you have the <stdint.h> header file. */
#define HAVE_STDINT_H 1
/* Define to 1 if you have the <stdlib.h> header file. */
#define HAVE_STDLIB_H 1
/* Define to 1 if you have the <strings.h> header file. */
#define HAVE_STRINGS_H 1
/* Define to 1 if you have the <string.h> header file. */
#define HAVE_STRING_H 1
/* Define to 1 if you have the <sys/stat.h> header file. */
#define HAVE_SYS_STAT_H 1
/* Define to 1 if you have the <sys/types.h> header file. */
#define HAVE_SYS_TYPES_H 1
/* Define to 1 if you have the <unistd.h> header file. */
#define HAVE_UNISTD_H 1
/* Define to the sub-directory in which libtool stores uninstalled libraries.
*/
#define LT_OBJDIR ".libs/"
/* Name of package */
#define PACKAGE "jansson"
/* Define to the address where bug reports for this package should be sent. */
#define PACKAGE_BUGREPORT "petri@digip.org"
/* Define to the full name of this package. */
#define PACKAGE_NAME "jansson"
/* Define to the full name and version of this package. */
#define PACKAGE_STRING "jansson 1.3"
/* Define to the one symbol short name of this package. */
#define PACKAGE_TARNAME "jansson"
/* Define to the home page for this package. */
#define PACKAGE_URL ""
/* Define to the version of this package. */
#define PACKAGE_VERSION "1.3"
/* Define to 1 if you have the ANSI C header files. */
#define STDC_HEADERS 1
/* Version number of package */
#define VERSION "1.3"
/* Define to `__inline__' or `__inline' if that's what the C compiler
calls it, or to nothing if 'inline' is not supported under any name. */
#ifndef __cplusplus
/* #undef inline */
#endif
/* Define to the type of a signed integer type of width exactly 32 bits if
such a type exists and the standard includes do not define it. */
/* #undef int32_t */

460
compat/jansson/dump.c Normal file
View File

@@ -0,0 +1,460 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <jansson.h>
#include "jansson_private.h"
#include "strbuffer.h"
#include "utf.h"
#define MAX_INTEGER_STR_LENGTH 100
#define MAX_REAL_STR_LENGTH 100
typedef int (*dump_func)(const char *buffer, int size, void *data);
struct string
{
char *buffer;
int length;
int size;
};
static int dump_to_strbuffer(const char *buffer, int size, void *data)
{
return strbuffer_append_bytes((strbuffer_t *)data, buffer, size);
}
static int dump_to_file(const char *buffer, int size, void *data)
{
FILE *dest = (FILE *)data;
if(fwrite(buffer, size, 1, dest) != 1)
return -1;
return 0;
}
/* 256 spaces (the maximum indentation size) */
static char whitespace[] =
    "                                                                "
    "                                                                "
    "                                                                "
    "                                                                ";
static int dump_indent(unsigned long flags, int depth, int space, dump_func dump, void *data)
{
if(JSON_INDENT(flags) > 0)
{
int i, ws_count = JSON_INDENT(flags);
if(dump("\n", 1, data))
return -1;
for(i = 0; i < depth; i++)
{
if(dump(whitespace, ws_count, data))
return -1;
}
}
else if(space && !(flags & JSON_COMPACT))
{
return dump(" ", 1, data);
}
return 0;
}
static int dump_string(const char *str, int ascii, dump_func dump, void *data)
{
const char *pos, *end;
int32_t codepoint;
if(dump("\"", 1, data))
return -1;
end = pos = str;
while(1)
{
const char *text;
char seq[13];
int length;
while(*end)
{
end = utf8_iterate(pos, &codepoint);
if(!end)
return -1;
/* mandatory escape or control char */
if(codepoint == '\\' || codepoint == '"' || codepoint < 0x20)
break;
/* non-ASCII */
if(ascii && codepoint > 0x7F)
break;
pos = end;
}
if(pos != str) {
if(dump(str, pos - str, data))
return -1;
}
if(end == pos)
break;
/* handle \, ", and control codes */
length = 2;
switch(codepoint)
{
case '\\': text = "\\\\"; break;
case '\"': text = "\\\""; break;
case '\b': text = "\\b"; break;
case '\f': text = "\\f"; break;
case '\n': text = "\\n"; break;
case '\r': text = "\\r"; break;
case '\t': text = "\\t"; break;
default:
{
/* codepoint is in BMP */
if(codepoint < 0x10000)
{
sprintf(seq, "\\u%04x", codepoint);
length = 6;
}
/* not in BMP -> construct a UTF-16 surrogate pair */
else
{
int32_t first, last;
codepoint -= 0x10000;
first = 0xD800 | ((codepoint & 0xffc00) >> 10);
last = 0xDC00 | (codepoint & 0x003ff);
sprintf(seq, "\\u%04x\\u%04x", first, last);
length = 12;
}
text = seq;
break;
}
}
if(dump(text, length, data))
return -1;
str = pos = end;
}
return dump("\"", 1, data);
}
static int object_key_compare_keys(const void *key1, const void *key2)
{
return strcmp((*(const object_key_t **)key1)->key,
(*(const object_key_t **)key2)->key);
}
static int object_key_compare_serials(const void *key1, const void *key2)
{
return (*(const object_key_t **)key1)->serial -
(*(const object_key_t **)key2)->serial;
}
static int do_dump(const json_t *json, unsigned long flags, int depth,
dump_func dump, void *data)
{
int ascii = flags & JSON_ENSURE_ASCII ? 1 : 0;
switch(json_typeof(json)) {
case JSON_NULL:
return dump("null", 4, data);
case JSON_TRUE:
return dump("true", 4, data);
case JSON_FALSE:
return dump("false", 5, data);
case JSON_INTEGER:
{
char buffer[MAX_INTEGER_STR_LENGTH];
int size;
size = snprintf(buffer, MAX_INTEGER_STR_LENGTH, "%d", json_integer_value(json));
if(size >= MAX_INTEGER_STR_LENGTH)
return -1;
return dump(buffer, size, data);
}
case JSON_REAL:
{
char buffer[MAX_REAL_STR_LENGTH];
int size;
size = snprintf(buffer, MAX_REAL_STR_LENGTH, "%.17g",
json_real_value(json));
if(size >= MAX_REAL_STR_LENGTH)
return -1;
/* Make sure there's a dot or 'e' in the output. Otherwise
a real is converted to an integer when decoding */
if(strchr(buffer, '.') == NULL &&
strchr(buffer, 'e') == NULL)
{
if(size + 2 >= MAX_REAL_STR_LENGTH) {
/* No space to append ".0" */
return -1;
}
buffer[size] = '.';
buffer[size + 1] = '0';
size += 2;
}
return dump(buffer, size, data);
}
case JSON_STRING:
return dump_string(json_string_value(json), ascii, dump, data);
case JSON_ARRAY:
{
int i;
int n;
json_array_t *array;
/* detect circular references */
array = json_to_array(json);
if(array->visited)
goto array_error;
array->visited = 1;
n = json_array_size(json);
if(dump("[", 1, data))
goto array_error;
if(n == 0) {
array->visited = 0;
return dump("]", 1, data);
}
if(dump_indent(flags, depth + 1, 0, dump, data))
goto array_error;
for(i = 0; i < n; ++i) {
if(do_dump(json_array_get(json, i), flags, depth + 1,
dump, data))
goto array_error;
if(i < n - 1)
{
if(dump(",", 1, data) ||
dump_indent(flags, depth + 1, 1, dump, data))
goto array_error;
}
else
{
if(dump_indent(flags, depth, 0, dump, data))
goto array_error;
}
}
array->visited = 0;
return dump("]", 1, data);
array_error:
array->visited = 0;
return -1;
}
case JSON_OBJECT:
{
json_object_t *object;
void *iter;
const char *separator;
int separator_length;
if(flags & JSON_COMPACT) {
separator = ":";
separator_length = 1;
}
else {
separator = ": ";
separator_length = 2;
}
/* detect circular references */
object = json_to_object(json);
if(object->visited)
goto object_error;
object->visited = 1;
iter = json_object_iter((json_t *)json);
if(dump("{", 1, data))
goto object_error;
if(!iter) {
object->visited = 0;
return dump("}", 1, data);
}
if(dump_indent(flags, depth + 1, 0, dump, data))
goto object_error;
if(flags & JSON_SORT_KEYS || flags & JSON_PRESERVE_ORDER)
{
const object_key_t **keys;
unsigned int size;
unsigned int i;
int (*cmp_func)(const void *, const void *);
size = json_object_size(json);
keys = malloc(size * sizeof(object_key_t *));
if(!keys)
goto object_error;
i = 0;
while(iter)
{
keys[i] = jsonp_object_iter_fullkey(iter);
iter = json_object_iter_next((json_t *)json, iter);
i++;
}
assert(i == size);
if(flags & JSON_SORT_KEYS)
cmp_func = object_key_compare_keys;
else
cmp_func = object_key_compare_serials;
qsort(keys, size, sizeof(object_key_t *), cmp_func);
for(i = 0; i < size; i++)
{
const char *key;
json_t *value;
key = keys[i]->key;
value = json_object_get(json, key);
assert(value);
dump_string(key, ascii, dump, data);
if(dump(separator, separator_length, data) ||
do_dump(value, flags, depth + 1, dump, data))
{
free(keys);
goto object_error;
}
if(i < size - 1)
{
if(dump(",", 1, data) ||
dump_indent(flags, depth + 1, 1, dump, data))
{
free(keys);
goto object_error;
}
}
else
{
if(dump_indent(flags, depth, 0, dump, data))
{
free(keys);
goto object_error;
}
}
}
free(keys);
}
else
{
/* Don't sort keys */
while(iter)
{
void *next = json_object_iter_next((json_t *)json, iter);
dump_string(json_object_iter_key(iter), ascii, dump, data);
if(dump(separator, separator_length, data) ||
do_dump(json_object_iter_value(iter), flags, depth + 1,
dump, data))
goto object_error;
if(next)
{
if(dump(",", 1, data) ||
dump_indent(flags, depth + 1, 1, dump, data))
goto object_error;
}
else
{
if(dump_indent(flags, depth, 0, dump, data))
goto object_error;
}
iter = next;
}
}
object->visited = 0;
return dump("}", 1, data);
object_error:
object->visited = 0;
return -1;
}
default:
/* not reached */
return -1;
}
}
char *json_dumps(const json_t *json, unsigned long flags)
{
strbuffer_t strbuff;
char *result;
if(!json_is_array(json) && !json_is_object(json))
return NULL;
if(strbuffer_init(&strbuff))
return NULL;
if(do_dump(json, flags, 0, dump_to_strbuffer, (void *)&strbuff)) {
strbuffer_close(&strbuff);
return NULL;
}
result = strdup(strbuffer_value(&strbuff));
strbuffer_close(&strbuff);
return result;
}
int json_dumpf(const json_t *json, FILE *output, unsigned long flags)
{
if(!json_is_array(json) && !json_is_object(json))
return -1;
return do_dump(json, flags, 0, dump_to_file, (void *)output);
}
int json_dump_file(const json_t *json, const char *path, unsigned long flags)
{
int result;
FILE *output = fopen(path, "w");
if(!output)
return -1;
result = json_dumpf(json, output, flags);
fclose(output);
return result;
}

374
compat/jansson/hashtable.c Normal file
View File

@@ -0,0 +1,374 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* This library is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#include <config.h>
#include <stdlib.h>
#include "hashtable.h"
typedef struct hashtable_list list_t;
typedef struct hashtable_pair pair_t;
typedef struct hashtable_bucket bucket_t;
#define container_of(ptr_, type_, member_) \
((type_ *)((char *)ptr_ - (size_t)&((type_ *)0)->member_))
#define list_to_pair(list_) container_of(list_, pair_t, list)
static inline void list_init(list_t *list)
{
list->next = list;
list->prev = list;
}
static inline void list_insert(list_t *list, list_t *node)
{
node->next = list;
node->prev = list->prev;
list->prev->next = node;
list->prev = node;
}
static inline void list_remove(list_t *list)
{
list->prev->next = list->next;
list->next->prev = list->prev;
}
static inline int bucket_is_empty(hashtable_t *hashtable, bucket_t *bucket)
{
return bucket->first == &hashtable->list && bucket->first == bucket->last;
}
static void insert_to_bucket(hashtable_t *hashtable, bucket_t *bucket,
list_t *list)
{
if(bucket_is_empty(hashtable, bucket))
{
list_insert(&hashtable->list, list);
bucket->first = bucket->last = list;
}
else
{
list_insert(bucket->first, list);
bucket->first = list;
}
}
static unsigned int primes[] = {
5, 13, 23, 53, 97, 193, 389, 769, 1543, 3079, 6151, 12289, 24593,
49157, 98317, 196613, 393241, 786433, 1572869, 3145739, 6291469,
12582917, 25165843, 50331653, 100663319, 201326611, 402653189,
805306457, 1610612741
};
static inline unsigned int num_buckets(hashtable_t *hashtable)
{
return primes[hashtable->num_buckets];
}
static pair_t *hashtable_find_pair(hashtable_t *hashtable, bucket_t *bucket,
const void *key, unsigned int hash)
{
list_t *list;
pair_t *pair;
if(bucket_is_empty(hashtable, bucket))
return NULL;
list = bucket->first;
while(1)
{
pair = list_to_pair(list);
if(pair->hash == hash && hashtable->cmp_keys(pair->key, key))
return pair;
if(list == bucket->last)
break;
list = list->next;
}
return NULL;
}
/* returns 0 on success, -1 if key was not found */
static int hashtable_do_del(hashtable_t *hashtable,
const void *key, unsigned int hash)
{
pair_t *pair;
bucket_t *bucket;
unsigned int index;
index = hash % num_buckets(hashtable);
bucket = &hashtable->buckets[index];
pair = hashtable_find_pair(hashtable, bucket, key, hash);
if(!pair)
return -1;
if(&pair->list == bucket->first && &pair->list == bucket->last)
bucket->first = bucket->last = &hashtable->list;
else if(&pair->list == bucket->first)
bucket->first = pair->list.next;
else if(&pair->list == bucket->last)
bucket->last = pair->list.prev;
list_remove(&pair->list);
if(hashtable->free_key)
hashtable->free_key(pair->key);
if(hashtable->free_value)
hashtable->free_value(pair->value);
free(pair);
hashtable->size--;
return 0;
}
static void hashtable_do_clear(hashtable_t *hashtable)
{
list_t *list, *next;
pair_t *pair;
for(list = hashtable->list.next; list != &hashtable->list; list = next)
{
next = list->next;
pair = list_to_pair(list);
if(hashtable->free_key)
hashtable->free_key(pair->key);
if(hashtable->free_value)
hashtable->free_value(pair->value);
free(pair);
}
}
static int hashtable_do_rehash(hashtable_t *hashtable)
{
list_t *list, *next;
pair_t *pair;
unsigned int i, index, new_size;
free(hashtable->buckets);
hashtable->num_buckets++;
new_size = num_buckets(hashtable);
hashtable->buckets = malloc(new_size * sizeof(bucket_t));
if(!hashtable->buckets)
return -1;
for(i = 0; i < num_buckets(hashtable); i++)
{
hashtable->buckets[i].first = hashtable->buckets[i].last =
&hashtable->list;
}
list = hashtable->list.next;
list_init(&hashtable->list);
for(; list != &hashtable->list; list = next) {
next = list->next;
pair = list_to_pair(list);
index = pair->hash % new_size;
insert_to_bucket(hashtable, &hashtable->buckets[index], &pair->list);
}
return 0;
}
hashtable_t *hashtable_create(key_hash_fn hash_key, key_cmp_fn cmp_keys,
free_fn free_key, free_fn free_value)
{
hashtable_t *hashtable = malloc(sizeof(hashtable_t));
if(!hashtable)
return NULL;
if(hashtable_init(hashtable, hash_key, cmp_keys, free_key, free_value))
{
free(hashtable);
return NULL;
}
return hashtable;
}
void hashtable_destroy(hashtable_t *hashtable)
{
hashtable_close(hashtable);
free(hashtable);
}
int hashtable_init(hashtable_t *hashtable,
key_hash_fn hash_key, key_cmp_fn cmp_keys,
free_fn free_key, free_fn free_value)
{
unsigned int i;
hashtable->size = 0;
hashtable->num_buckets = 0; /* index to primes[] */
hashtable->buckets = malloc(num_buckets(hashtable) * sizeof(bucket_t));
if(!hashtable->buckets)
return -1;
list_init(&hashtable->list);
hashtable->hash_key = hash_key;
hashtable->cmp_keys = cmp_keys;
hashtable->free_key = free_key;
hashtable->free_value = free_value;
for(i = 0; i < num_buckets(hashtable); i++)
{
hashtable->buckets[i].first = hashtable->buckets[i].last =
&hashtable->list;
}
return 0;
}
void hashtable_close(hashtable_t *hashtable)
{
hashtable_do_clear(hashtable);
free(hashtable->buckets);
}
int hashtable_set(hashtable_t *hashtable, void *key, void *value)
{
pair_t *pair;
bucket_t *bucket;
unsigned int hash, index;
/* rehash if the load ratio exceeds 1 */
if(hashtable->size >= num_buckets(hashtable))
if(hashtable_do_rehash(hashtable))
return -1;
hash = hashtable->hash_key(key);
index = hash % num_buckets(hashtable);
bucket = &hashtable->buckets[index];
pair = hashtable_find_pair(hashtable, bucket, key, hash);
if(pair)
{
if(hashtable->free_key)
hashtable->free_key(key);
if(hashtable->free_value)
hashtable->free_value(pair->value);
pair->value = value;
}
else
{
pair = malloc(sizeof(pair_t));
if(!pair)
return -1;
pair->key = key;
pair->value = value;
pair->hash = hash;
list_init(&pair->list);
insert_to_bucket(hashtable, bucket, &pair->list);
hashtable->size++;
}
return 0;
}
void *hashtable_get(hashtable_t *hashtable, const void *key)
{
pair_t *pair;
unsigned int hash;
bucket_t *bucket;
hash = hashtable->hash_key(key);
bucket = &hashtable->buckets[hash % num_buckets(hashtable)];
pair = hashtable_find_pair(hashtable, bucket, key, hash);
if(!pair)
return NULL;
return pair->value;
}
int hashtable_del(hashtable_t *hashtable, const void *key)
{
unsigned int hash = hashtable->hash_key(key);
return hashtable_do_del(hashtable, key, hash);
}
void hashtable_clear(hashtable_t *hashtable)
{
unsigned int i;
hashtable_do_clear(hashtable);
for(i = 0; i < num_buckets(hashtable); i++)
{
hashtable->buckets[i].first = hashtable->buckets[i].last =
&hashtable->list;
}
list_init(&hashtable->list);
hashtable->size = 0;
}
void *hashtable_iter(hashtable_t *hashtable)
{
return hashtable_iter_next(hashtable, &hashtable->list);
}
void *hashtable_iter_at(hashtable_t *hashtable, const void *key)
{
pair_t *pair;
unsigned int hash;
bucket_t *bucket;
hash = hashtable->hash_key(key);
bucket = &hashtable->buckets[hash % num_buckets(hashtable)];
pair = hashtable_find_pair(hashtable, bucket, key, hash);
if(!pair)
return NULL;
return &pair->list;
}
void *hashtable_iter_next(hashtable_t *hashtable, void *iter)
{
list_t *list = (list_t *)iter;
if(list->next == &hashtable->list)
return NULL;
return list->next;
}
void *hashtable_iter_key(void *iter)
{
pair_t *pair = list_to_pair((list_t *)iter);
return pair->key;
}
void *hashtable_iter_value(void *iter)
{
pair_t *pair = list_to_pair((list_t *)iter);
return pair->value;
}
void hashtable_iter_set(hashtable_t *hashtable, void *iter, void *value)
{
pair_t *pair = list_to_pair((list_t *)iter);
if(hashtable->free_value)
hashtable->free_value(pair->value);
pair->value = value;
}

207
compat/jansson/hashtable.h Normal file
View File

@@ -0,0 +1,207 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* This library is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#ifndef HASHTABLE_H
#define HASHTABLE_H
typedef unsigned int (*key_hash_fn)(const void *key);
typedef int (*key_cmp_fn)(const void *key1, const void *key2);
typedef void (*free_fn)(void *key);
struct hashtable_list {
struct hashtable_list *prev;
struct hashtable_list *next;
};
struct hashtable_pair {
void *key;
void *value;
unsigned int hash;
struct hashtable_list list;
};
struct hashtable_bucket {
struct hashtable_list *first;
struct hashtable_list *last;
};
typedef struct hashtable {
unsigned int size;
struct hashtable_bucket *buckets;
unsigned int num_buckets; /* index to primes[] */
struct hashtable_list list;
key_hash_fn hash_key;
key_cmp_fn cmp_keys; /* returns non-zero for equal keys */
free_fn free_key;
free_fn free_value;
} hashtable_t;
/**
* hashtable_create - Create a hashtable object
*
* @hash_key: The key hashing function
* @cmp_keys: The key compare function. Returns non-zero for equal and
* zero for unequal keys
* @free_key: If non-NULL, called for a key that is no longer referenced.
* @free_value: If non-NULL, called for a value that is no longer referenced.
*
* Returns a new hashtable object that should be freed with
* hashtable_destroy when it's no longer used, or NULL on failure (out
* of memory).
*/
hashtable_t *hashtable_create(key_hash_fn hash_key, key_cmp_fn cmp_keys,
free_fn free_key, free_fn free_value);
/**
* hashtable_destroy - Destroy a hashtable object
*
* @hashtable: The hashtable
*
* Destroys a hashtable created with hashtable_create().
*/
void hashtable_destroy(hashtable_t *hashtable);
/**
* hashtable_init - Initialize a hashtable object
*
* @hashtable: The (statically allocated) hashtable object
* @hash_key: The key hashing function
* @cmp_keys: The key compare function. Returns non-zero for equal and
* zero for unequal keys
* @free_key: If non-NULL, called for a key that is no longer referenced.
* @free_value: If non-NULL, called for a value that is no longer referenced.
*
* Initializes a statically allocated hashtable object. The object
* should be cleared with hashtable_close when it's no longer used.
*
* Returns 0 on success, -1 on error (out of memory).
*/
int hashtable_init(hashtable_t *hashtable,
key_hash_fn hash_key, key_cmp_fn cmp_keys,
free_fn free_key, free_fn free_value);
/**
* hashtable_close - Release all resources used by a hashtable object
*
* @hashtable: The hashtable
*
* Destroys a statically allocated hashtable object.
*/
void hashtable_close(hashtable_t *hashtable);
/**
* hashtable_set - Add/modify value in hashtable
*
* @hashtable: The hashtable object
* @key: The key
* @value: The value
*
* If a value with the given key already exists, its value is replaced
* with the new value.
*
* Key and value are "stolen" in the sense that the hashtable frees them
* automatically when they are no longer used. The freeing is
* accomplished by calling the free_key and free_value functions that were
* supplied to hashtable_create. In case one or both of the free
* functions is NULL, the corresponding item is not "stolen".
*
* Returns 0 on success, -1 on failure (out of memory).
*/
int hashtable_set(hashtable_t *hashtable, void *key, void *value);
/**
* hashtable_get - Get a value associated with a key
*
* @hashtable: The hashtable object
* @key: The key
*
* Returns value if it is found, or NULL otherwise.
*/
void *hashtable_get(hashtable_t *hashtable, const void *key);
/**
* hashtable_del - Remove a value from the hashtable
*
* @hashtable: The hashtable object
* @key: The key
*
* Returns 0 on success, or -1 if the key was not found.
*/
int hashtable_del(hashtable_t *hashtable, const void *key);
/**
* hashtable_clear - Clear hashtable
*
* @hashtable: The hashtable object
*
* Removes all items from the hashtable.
*/
void hashtable_clear(hashtable_t *hashtable);
/**
* hashtable_iter - Iterate over hashtable
*
* @hashtable: The hashtable object
*
* Returns an opaque iterator to the first element in the hashtable.
* The iterator should be passed to hashtable_iter_* functions.
* The hashtable items are not iterated over in any particular order.
*
* There's no need to free the iterator in any way. The iterator is
* valid as long as the item that is referenced by the iterator is not
* deleted. Other values may be added or deleted. In particular,
* hashtable_iter_next() may be called on an iterator, and after that
* the key/value pair pointed by the old iterator may be deleted.
*/
void *hashtable_iter(hashtable_t *hashtable);
/**
* hashtable_iter_at - Return an iterator at a specific key
*
* @hashtable: The hashtable object
* @key: The key that the iterator should point to
*
* Like hashtable_iter() but returns an iterator pointing to a
* specific key.
*/
void *hashtable_iter_at(hashtable_t *hashtable, const void *key);
/**
* hashtable_iter_next - Advance an iterator
*
* @hashtable: The hashtable object
* @iter: The iterator
*
* Returns a new iterator pointing to the next element in the
* hashtable or NULL if the whole hashtable has been iterated over.
*/
void *hashtable_iter_next(hashtable_t *hashtable, void *iter);
/**
* hashtable_iter_key - Retrieve the key pointed by an iterator
*
* @iter: The iterator
*/
void *hashtable_iter_key(void *iter);
/**
* hashtable_iter_value - Retrieve the value pointed by an iterator
*
* @iter: The iterator
*/
void *hashtable_iter_value(void *iter);
/**
* hashtable_iter_set - Set the value pointed by an iterator
*
* @iter: The iterator
* @value: The value to set
*/
void hashtable_iter_set(hashtable_t *hashtable, void *iter, void *value);
#endif
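
The comments above describe a generic key/value API that is not tied to JSON values. A self-contained sketch of driving it directly with plain C strings; the hash and compare callbacks below are illustrative, not part of the library:

/* hashtable_demo.c — illustrative only; compile together with hashtable.c */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "hashtable.h"

/* djb2-style hash over a NUL-terminated string key */
static unsigned int hash_str(const void *key)
{
    const unsigned char *s = key;
    unsigned int h = 5381;
    while (*s)
        h = ((h << 5) + h) + *s++;
    return h;
}

/* key compare callback: must return non-zero for equal keys */
static int cmp_str(const void *a, const void *b)
{
    return strcmp(a, b) == 0;
}

int main(void)
{
    /* keys and values are "stolen": the table frees them with free() */
    hashtable_t *t = hashtable_create(hash_str, cmp_str, free, free);
    if (!t)
        return 1;

    hashtable_set(t, strdup("algo"), strdup("yespower"));
    hashtable_set(t, strdup("threads"), strdup("4"));

    printf("algo = %s\n", (char *)hashtable_get(t, "algo"));

    hashtable_destroy(t);
    return 0;
}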

191
compat/jansson/jansson.h Normal file
View File

@@ -0,0 +1,191 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#ifndef JANSSON_H
#define JANSSON_H
#include <stdio.h>
#ifndef __cplusplus
#define JSON_INLINE inline
#else
#define JSON_INLINE inline
extern "C" {
#endif
/* types */
typedef enum {
JSON_OBJECT,
JSON_ARRAY,
JSON_STRING,
JSON_INTEGER,
JSON_REAL,
JSON_TRUE,
JSON_FALSE,
JSON_NULL
} json_type;
typedef struct {
json_type type;
unsigned long refcount;
} json_t;
#define json_typeof(json) ((json)->type)
#define json_is_object(json) (json && json_typeof(json) == JSON_OBJECT)
#define json_is_array(json) (json && json_typeof(json) == JSON_ARRAY)
#define json_is_string(json) (json && json_typeof(json) == JSON_STRING)
#define json_is_integer(json) (json && json_typeof(json) == JSON_INTEGER)
#define json_is_real(json) (json && json_typeof(json) == JSON_REAL)
#define json_is_number(json) (json_is_integer(json) || json_is_real(json))
#define json_is_true(json) (json && json_typeof(json) == JSON_TRUE)
#define json_is_false(json) (json && json_typeof(json) == JSON_FALSE)
#define json_is_boolean(json) (json_is_true(json) || json_is_false(json))
#define json_is_null(json) (json && json_typeof(json) == JSON_NULL)
/* construction, destruction, reference counting */
json_t *json_object(void);
json_t *json_array(void);
json_t *json_string(const char *value);
json_t *json_string_nocheck(const char *value);
json_t *json_integer(int value);
json_t *json_real(double value);
json_t *json_true(void);
json_t *json_false(void);
json_t *json_null(void);
static JSON_INLINE
json_t *json_incref(json_t *json)
{
if(json && json->refcount != (unsigned int)-1)
++json->refcount;
return json;
}
/* do not call json_delete directly */
void json_delete(json_t *json);
static JSON_INLINE
void json_decref(json_t *json)
{
if(json && json->refcount != (unsigned int)-1 && --json->refcount == 0)
json_delete(json);
}
/* getters, setters, manipulation */
unsigned int json_object_size(const json_t *object);
json_t *json_object_get(const json_t *object, const char *key);
int json_object_set_new(json_t *object, const char *key, json_t *value);
int json_object_set_new_nocheck(json_t *object, const char *key, json_t *value);
int json_object_del(json_t *object, const char *key);
int json_object_clear(json_t *object);
int json_object_update(json_t *object, json_t *other);
void *json_object_iter(json_t *object);
void *json_object_iter_at(json_t *object, const char *key);
void *json_object_iter_next(json_t *object, void *iter);
const char *json_object_iter_key(void *iter);
json_t *json_object_iter_value(void *iter);
int json_object_iter_set_new(json_t *object, void *iter, json_t *value);
static JSON_INLINE
int json_object_set(json_t *object, const char *key, json_t *value)
{
return json_object_set_new(object, key, json_incref(value));
}
static JSON_INLINE
int json_object_set_nocheck(json_t *object, const char *key, json_t *value)
{
return json_object_set_new_nocheck(object, key, json_incref(value));
}
static inline
int json_object_iter_set(json_t *object, void *iter, json_t *value)
{
return json_object_iter_set_new(object, iter, json_incref(value));
}
unsigned int json_array_size(const json_t *array);
json_t *json_array_get(const json_t *array, unsigned int index);
int json_array_set_new(json_t *array, unsigned int index, json_t *value);
int json_array_append_new(json_t *array, json_t *value);
int json_array_insert_new(json_t *array, unsigned int index, json_t *value);
int json_array_remove(json_t *array, unsigned int index);
int json_array_clear(json_t *array);
int json_array_extend(json_t *array, json_t *other);
static JSON_INLINE
int json_array_set(json_t *array, unsigned int index, json_t *value)
{
return json_array_set_new(array, index, json_incref(value));
}
static JSON_INLINE
int json_array_append(json_t *array, json_t *value)
{
return json_array_append_new(array, json_incref(value));
}
static JSON_INLINE
int json_array_insert(json_t *array, unsigned int index, json_t *value)
{
return json_array_insert_new(array, index, json_incref(value));
}
const char *json_string_value(const json_t *string);
int json_integer_value(const json_t *integer);
double json_real_value(const json_t *real);
double json_number_value(const json_t *json);
int json_string_set(json_t *string, const char *value);
int json_string_set_nocheck(json_t *string, const char *value);
int json_integer_set(json_t *integer, int value);
int json_real_set(json_t *real, double value);
/* equality */
int json_equal(json_t *value1, json_t *value2);
/* copying */
json_t *json_copy(json_t *value);
json_t *json_deep_copy(json_t *value);
/* loading, printing */
#define JSON_ERROR_TEXT_LENGTH 160
typedef struct {
char text[JSON_ERROR_TEXT_LENGTH];
int line;
} json_error_t;
json_t *json_loads(const char *input, json_error_t *error);
json_t *json_loadf(FILE *input, json_error_t *error);
json_t *json_load_file(const char *path, json_error_t *error);
#define JSON_INDENT(n) (n & 0xFF)
#define JSON_COMPACT 0x100
#define JSON_ENSURE_ASCII 0x200
#define JSON_SORT_KEYS 0x400
#define JSON_PRESERVE_ORDER 0x800
char *json_dumps(const json_t *json, unsigned long flags);
int json_dumpf(const json_t *json, FILE *output, unsigned long flags);
int json_dump_file(const json_t *json, const char *path, unsigned long flags);
#ifdef __cplusplus
}
#endif
#endif
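
This header is the entire public surface of the bundled Jansson 1.3. A hedged sketch of the usual parse/inspect/serialize cycle, assuming the bundled headers and libjansson.a are on the include and link paths (the JSON payload is made up):

/* jansson_demo.c — illustrative only */
#include <stdio.h>
#include <stdlib.h>
#include <jansson.h>

int main(void)
{
    json_error_t error;
    json_t *root = json_loads("{\"method\": \"getwork\", \"id\": 1}", &error);
    if (!root) {
        fprintf(stderr, "parse error on line %d: %s\n", error.line, error.text);
        return 1;
    }

    json_t *method = json_object_get(root, "method");
    if (json_is_string(method))
        printf("method = %s\n", json_string_value(method));

    char *text = json_dumps(root, JSON_INDENT(2) | JSON_SORT_KEYS);
    if (text) {
        printf("%s\n", text);
        free(text);   /* json_dumps() returns a malloc'd buffer */
    }

    json_decref(root);
    return 0;
}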

60
compat/jansson/jansson_private.h Normal file
View File

@@ -0,0 +1,60 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#ifndef JANSSON_PRIVATE_H
#define JANSSON_PRIVATE_H
#include "jansson.h"
#include "hashtable.h"
#define container_of(ptr_, type_, member_) \
((type_ *)((char *)ptr_ - (size_t)&((type_ *)0)->member_))
typedef struct {
json_t json;
hashtable_t hashtable;
unsigned long serial;
int visited;
} json_object_t;
typedef struct {
json_t json;
unsigned int size;
unsigned int entries;
json_t **table;
int visited;
} json_array_t;
typedef struct {
json_t json;
char *value;
} json_string_t;
typedef struct {
json_t json;
double value;
} json_real_t;
typedef struct {
json_t json;
int value;
} json_integer_t;
#define json_to_object(json_) container_of(json_, json_object_t, json)
#define json_to_array(json_) container_of(json_, json_array_t, json)
#define json_to_string(json_) container_of(json_, json_string_t, json)
#define json_to_real(json_) container_of(json_, json_real_t, json)
#define json_to_integer(json_) container_of(json_, json_integer_t, json)
typedef struct {
unsigned long serial;
char key[];
} object_key_t;
const object_key_t *jsonp_object_iter_fullkey(void *iter);
#endif

883
compat/jansson/load.c Normal file
View File

@@ -0,0 +1,883 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#if __GNUC__ >= 8
#pragma GCC diagnostic ignored "-Wformat-truncation"
#endif
#define _GNU_SOURCE
#include <ctype.h>
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdarg.h>
#include <assert.h>
#include <jansson.h>
#include "jansson_private.h"
#include "strbuffer.h"
#include "utf.h"
#define TOKEN_INVALID -1
#define TOKEN_EOF 0
#define TOKEN_STRING 256
#define TOKEN_INTEGER 257
#define TOKEN_REAL 258
#define TOKEN_TRUE 259
#define TOKEN_FALSE 260
#define TOKEN_NULL 261
/* read one byte from stream, return EOF on end of file */
typedef int (*get_func)(void *data);
/* return non-zero if end of file has been reached */
typedef int (*eof_func)(void *data);
typedef struct {
get_func get;
eof_func eof;
void *data;
int stream_pos;
char buffer[5];
int buffer_pos;
} stream_t;
typedef struct {
stream_t stream;
strbuffer_t saved_text;
int token;
int line, column;
union {
char *string;
int integer;
double real;
} value;
} lex_t;
/*** error reporting ***/
static void error_init(json_error_t *error)
{
if(error)
{
error->text[0] = '\0';
error->line = -1;
}
}
static void error_set(json_error_t *error, const lex_t *lex,
const char *msg, ...)
{
va_list ap;
char text[JSON_ERROR_TEXT_LENGTH];
if(!error || error->text[0] != '\0') {
/* error already set */
return;
}
va_start(ap, msg);
vsnprintf(text, JSON_ERROR_TEXT_LENGTH, msg, ap);
va_end(ap);
if(lex)
{
const char *saved_text = strbuffer_value(&lex->saved_text);
error->line = lex->line;
if(saved_text && saved_text[0])
{
if(lex->saved_text.length <= 20) {
snprintf(error->text, JSON_ERROR_TEXT_LENGTH,
"%s near '%s'", text, saved_text);
}
else
snprintf(error->text, JSON_ERROR_TEXT_LENGTH, "%s", text);
}
else
{
snprintf(error->text, JSON_ERROR_TEXT_LENGTH,
"%s near end of file", text);
}
}
else
{
error->line = -1;
snprintf(error->text, JSON_ERROR_TEXT_LENGTH, "%s", text);
}
}
/*** lexical analyzer ***/
static void
stream_init(stream_t *stream, get_func get, eof_func eof, void *data)
{
stream->get = get;
stream->eof = eof;
stream->data = data;
stream->stream_pos = 0;
stream->buffer[0] = '\0';
stream->buffer_pos = 0;
}
static char stream_get(stream_t *stream, json_error_t *error)
{
char c;
if(!stream->buffer[stream->buffer_pos])
{
stream->buffer[0] = stream->get(stream->data);
stream->buffer_pos = 0;
c = stream->buffer[0];
if((unsigned char)c >= 0x80 && c != (char)EOF)
{
/* multi-byte UTF-8 sequence */
int i, count;
count = utf8_check_first(c);
if(!count)
goto out;
assert(count >= 2);
for(i = 1; i < count; i++)
stream->buffer[i] = stream->get(stream->data);
if(!utf8_check_full(stream->buffer, count, NULL))
goto out;
stream->stream_pos += count;
stream->buffer[count] = '\0';
}
else {
stream->buffer[1] = '\0';
stream->stream_pos++;
}
}
return stream->buffer[stream->buffer_pos++];
out:
error_set(error, NULL, "unable to decode byte 0x%x at position %d",
(unsigned char)c, stream->stream_pos);
stream->buffer[0] = EOF;
stream->buffer[1] = '\0';
stream->buffer_pos = 1;
return EOF;
}
static void stream_unget(stream_t *stream, char c)
{
assert(stream->buffer_pos > 0);
stream->buffer_pos--;
assert(stream->buffer[stream->buffer_pos] == c);
}
static int lex_get(lex_t *lex, json_error_t *error)
{
return stream_get(&lex->stream, error);
}
static int lex_eof(lex_t *lex)
{
return lex->stream.eof(lex->stream.data);
}
static void lex_save(lex_t *lex, char c)
{
strbuffer_append_byte(&lex->saved_text, c);
}
static int lex_get_save(lex_t *lex, json_error_t *error)
{
char c = stream_get(&lex->stream, error);
lex_save(lex, c);
return c;
}
static void lex_unget_unsave(lex_t *lex, char c)
{
char d;
stream_unget(&lex->stream, c);
d = strbuffer_pop(&lex->saved_text);
assert(c == d);
}
static void lex_save_cached(lex_t *lex)
{
while(lex->stream.buffer[lex->stream.buffer_pos] != '\0')
{
lex_save(lex, lex->stream.buffer[lex->stream.buffer_pos]);
lex->stream.buffer_pos++;
}
}
/* assumes that str points to 'u' plus at least 4 valid hex digits */
static int32_t decode_unicode_escape(const char *str)
{
int i;
int32_t value = 0;
assert(str[0] == 'u');
for(i = 1; i <= 4; i++) {
char c = str[i];
value <<= 4;
if(isdigit(c))
value += c - '0';
else if(islower(c))
value += c - 'a' + 10;
else if(isupper(c))
value += c - 'A' + 10;
else
assert(0);
}
return value;
}
static void lex_scan_string(lex_t *lex, json_error_t *error)
{
char c;
const char *p;
char *t;
int i;
lex->value.string = NULL;
lex->token = TOKEN_INVALID;
c = lex_get_save(lex, error);
while(c != '"') {
if(c == (char)EOF) {
lex_unget_unsave(lex, c);
if(lex_eof(lex))
error_set(error, lex, "premature end of input");
goto out;
}
else if((unsigned char)c <= 0x1F) {
/* control character */
lex_unget_unsave(lex, c);
if(c == '\n')
error_set(error, lex, "unexpected newline", c);
else
error_set(error, lex, "control character 0x%x", c);
goto out;
}
else if(c == '\\') {
c = lex_get_save(lex, error);
if(c == 'u') {
c = lex_get_save(lex, error);
for(i = 0; i < 4; i++) {
if(!isxdigit(c)) {
lex_unget_unsave(lex, c);
error_set(error, lex, "invalid escape");
goto out;
}
c = lex_get_save(lex, error);
}
}
else if(c == '"' || c == '\\' || c == '/' || c == 'b' ||
c == 'f' || c == 'n' || c == 'r' || c == 't')
c = lex_get_save(lex, error);
else {
lex_unget_unsave(lex, c);
error_set(error, lex, "invalid escape");
goto out;
}
}
else
c = lex_get_save(lex, error);
}
/* the actual value is at most of the same length as the source
string, because:
- shortcut escapes (e.g. "\t") (length 2) are converted to 1 byte
- a single \uXXXX escape (length 6) is converted to at most 3 bytes
- two \uXXXX escapes (length 12) forming an UTF-16 surrogate pair
are converted to 4 bytes
*/
lex->value.string = malloc(lex->saved_text.length + 1);
if(!lex->value.string) {
/* this is not very nice, since TOKEN_INVALID is returned */
goto out;
}
/* the target */
t = lex->value.string;
/* + 1 to skip the " */
p = strbuffer_value(&lex->saved_text) + 1;
while(*p != '"') {
if(*p == '\\') {
p++;
if(*p == 'u') {
char buffer[4];
int length;
int32_t value;
value = decode_unicode_escape(p);
p += 5;
if(0xD800 <= value && value <= 0xDBFF) {
/* surrogate pair */
if(*p == '\\' && *(p + 1) == 'u') {
int32_t value2 = decode_unicode_escape(++p);
p += 5;
if(0xDC00 <= value2 && value2 <= 0xDFFF) {
/* valid second surrogate */
value =
((value - 0xD800) << 10) +
(value2 - 0xDC00) +
0x10000;
}
else {
/* invalid second surrogate */
error_set(error, lex,
"invalid Unicode '\\u%04X\\u%04X'",
value, value2);
goto out;
}
}
else {
/* no second surrogate */
error_set(error, lex, "invalid Unicode '\\u%04X'",
value);
goto out;
}
}
else if(0xDC00 <= value && value <= 0xDFFF) {
error_set(error, lex, "invalid Unicode '\\u%04X'", value);
goto out;
}
else if(value == 0)
{
error_set(error, lex, "\\u0000 is not allowed");
goto out;
}
if(utf8_encode(value, buffer, &length))
assert(0);
memcpy(t, buffer, length);
t += length;
}
else {
switch(*p) {
case '"': case '\\': case '/':
*t = *p; break;
case 'b': *t = '\b'; break;
case 'f': *t = '\f'; break;
case 'n': *t = '\n'; break;
case 'r': *t = '\r'; break;
case 't': *t = '\t'; break;
default: assert(0);
}
t++;
p++;
}
}
else
*(t++) = *(p++);
}
*t = '\0';
lex->token = TOKEN_STRING;
return;
out:
free(lex->value.string);
}
static int lex_scan_number(lex_t *lex, char c, json_error_t *error)
{
const char *saved_text;
char *end;
double value;
lex->token = TOKEN_INVALID;
if(c == '-')
c = lex_get_save(lex, error);
if(c == '0') {
c = lex_get_save(lex, error);
if(isdigit(c)) {
lex_unget_unsave(lex, c);
goto out;
}
}
else if(isdigit(c)) {
c = lex_get_save(lex, error);
while(isdigit(c))
c = lex_get_save(lex, error);
}
else {
lex_unget_unsave(lex, c);
goto out;
}
if(c != '.' && c != 'E' && c != 'e') {
long value;
lex_unget_unsave(lex, c);
saved_text = strbuffer_value(&lex->saved_text);
value = strtol(saved_text, &end, 10);
assert(end == saved_text + lex->saved_text.length);
if((value == LONG_MAX && errno == ERANGE) || value > INT_MAX) {
error_set(error, lex, "too big integer");
goto out;
}
else if((value == LONG_MIN && errno == ERANGE) || value < INT_MIN) {
error_set(error, lex, "too big negative integer");
goto out;
}
lex->token = TOKEN_INTEGER;
lex->value.integer = (int)value;
return 0;
}
if(c == '.') {
c = lex_get(lex, error);
if(!isdigit(c))
goto out;
lex_save(lex, c);
c = lex_get_save(lex, error);
while(isdigit(c))
c = lex_get_save(lex, error);
}
if(c == 'E' || c == 'e') {
c = lex_get_save(lex, error);
if(c == '+' || c == '-')
c = lex_get_save(lex, error);
if(!isdigit(c)) {
lex_unget_unsave(lex, c);
goto out;
}
c = lex_get_save(lex, error);
while(isdigit(c))
c = lex_get_save(lex, error);
}
lex_unget_unsave(lex, c);
saved_text = strbuffer_value(&lex->saved_text);
value = strtod(saved_text, &end);
assert(end == saved_text + lex->saved_text.length);
if(errno == ERANGE && value != 0) {
error_set(error, lex, "real number overflow");
goto out;
}
lex->token = TOKEN_REAL;
lex->value.real = value;
return 0;
out:
return -1;
}
static int lex_scan(lex_t *lex, json_error_t *error)
{
char c;
strbuffer_clear(&lex->saved_text);
if(lex->token == TOKEN_STRING) {
free(lex->value.string);
lex->value.string = NULL;
}
c = lex_get(lex, error);
while(c == ' ' || c == '\t' || c == '\n' || c == '\r')
{
if(c == '\n')
lex->line++;
c = lex_get(lex, error);
}
if(c == (char)EOF) {
if(lex_eof(lex))
lex->token = TOKEN_EOF;
else
lex->token = TOKEN_INVALID;
goto out;
}
lex_save(lex, c);
if(c == '{' || c == '}' || c == '[' || c == ']' || c == ':' || c == ',')
lex->token = c;
else if(c == '"')
lex_scan_string(lex, error);
else if(isdigit(c) || c == '-') {
if(lex_scan_number(lex, c, error))
goto out;
}
else if(isupper(c) || islower(c)) {
/* eat up the whole identifier for clearer error messages */
const char *saved_text;
c = lex_get_save(lex, error);
while(isupper(c) || islower(c))
c = lex_get_save(lex, error);
lex_unget_unsave(lex, c);
saved_text = strbuffer_value(&lex->saved_text);
if(strcmp(saved_text, "true") == 0)
lex->token = TOKEN_TRUE;
else if(strcmp(saved_text, "false") == 0)
lex->token = TOKEN_FALSE;
else if(strcmp(saved_text, "null") == 0)
lex->token = TOKEN_NULL;
else
lex->token = TOKEN_INVALID;
}
else {
/* save the rest of the input UTF-8 sequence to get an error
message of valid UTF-8 */
lex_save_cached(lex);
lex->token = TOKEN_INVALID;
}
out:
return lex->token;
}
static char *lex_steal_string(lex_t *lex)
{
char *result = NULL;
if(lex->token == TOKEN_STRING)
{
result = lex->value.string;
lex->value.string = NULL;
}
return result;
}
static int lex_init(lex_t *lex, get_func get, eof_func eof, void *data)
{
stream_init(&lex->stream, get, eof, data);
if(strbuffer_init(&lex->saved_text))
return -1;
lex->token = TOKEN_INVALID;
lex->line = 1;
return 0;
}
static void lex_close(lex_t *lex)
{
if(lex->token == TOKEN_STRING)
free(lex->value.string);
strbuffer_close(&lex->saved_text);
}
/*** parser ***/
static json_t *parse_value(lex_t *lex, json_error_t *error);
static json_t *parse_object(lex_t *lex, json_error_t *error)
{
json_t *object = json_object();
if(!object)
return NULL;
lex_scan(lex, error);
if(lex->token == '}')
return object;
while(1) {
char *key;
json_t *value;
if(lex->token != TOKEN_STRING) {
error_set(error, lex, "string or '}' expected");
goto error;
}
key = lex_steal_string(lex);
if(!key)
return NULL;
lex_scan(lex, error);
if(lex->token != ':') {
free(key);
error_set(error, lex, "':' expected");
goto error;
}
lex_scan(lex, error);
value = parse_value(lex, error);
if(!value) {
free(key);
goto error;
}
if(json_object_set_nocheck(object, key, value)) {
free(key);
json_decref(value);
goto error;
}
json_decref(value);
free(key);
lex_scan(lex, error);
if(lex->token != ',')
break;
lex_scan(lex, error);
}
if(lex->token != '}') {
error_set(error, lex, "'}' expected");
goto error;
}
return object;
error:
json_decref(object);
return NULL;
}
static json_t *parse_array(lex_t *lex, json_error_t *error)
{
json_t *array = json_array();
if(!array)
return NULL;
lex_scan(lex, error);
if(lex->token == ']')
return array;
while(lex->token) {
json_t *elem = parse_value(lex, error);
if(!elem)
goto error;
if(json_array_append(array, elem)) {
json_decref(elem);
goto error;
}
json_decref(elem);
lex_scan(lex, error);
if(lex->token != ',')
break;
lex_scan(lex, error);
}
if(lex->token != ']') {
error_set(error, lex, "']' expected");
goto error;
}
return array;
error:
json_decref(array);
return NULL;
}
static json_t *parse_value(lex_t *lex, json_error_t *error)
{
json_t *json;
switch(lex->token) {
case TOKEN_STRING: {
json = json_string_nocheck(lex->value.string);
break;
}
case TOKEN_INTEGER: {
json = json_integer(lex->value.integer);
break;
}
case TOKEN_REAL: {
json = json_real(lex->value.real);
break;
}
case TOKEN_TRUE:
json = json_true();
break;
case TOKEN_FALSE:
json = json_false();
break;
case TOKEN_NULL:
json = json_null();
break;
case '{':
json = parse_object(lex, error);
break;
case '[':
json = parse_array(lex, error);
break;
case TOKEN_INVALID:
error_set(error, lex, "invalid token");
return NULL;
default:
error_set(error, lex, "unexpected token");
return NULL;
}
if(!json)
return NULL;
return json;
}
static json_t *parse_json(lex_t *lex, json_error_t *error)
{
error_init(error);
lex_scan(lex, error);
if(lex->token != '[' && lex->token != '{') {
error_set(error, lex, "'[' or '{' expected");
return NULL;
}
return parse_value(lex, error);
}
typedef struct
{
const char *data;
int pos;
} string_data_t;
static int string_get(void *data)
{
char c;
string_data_t *stream = (string_data_t *)data;
c = stream->data[stream->pos];
if(c == '\0')
return EOF;
else
{
stream->pos++;
return c;
}
}
static int string_eof(void *data)
{
string_data_t *stream = (string_data_t *)data;
return (stream->data[stream->pos] == '\0');
}
json_t *json_loads(const char *string, json_error_t *error)
{
lex_t lex;
json_t *result;
string_data_t stream_data = {
.data = string,
.pos = 0
};
if(lex_init(&lex, string_get, string_eof, (void *)&stream_data))
return NULL;
result = parse_json(&lex, error);
if(!result)
goto out;
lex_scan(&lex, error);
if(lex.token != TOKEN_EOF) {
error_set(error, &lex, "end of file expected");
json_decref(result);
result = NULL;
}
out:
lex_close(&lex);
return result;
}
json_t *json_loadf(FILE *input, json_error_t *error)
{
lex_t lex;
json_t *result;
if(lex_init(&lex, (get_func)fgetc, (eof_func)feof, input))
return NULL;
result = parse_json(&lex, error);
if(!result)
goto out;
lex_scan(&lex, error);
if(lex.token != TOKEN_EOF) {
error_set(error, &lex, "end of file expected");
json_decref(result);
result = NULL;
}
out:
lex_close(&lex);
return result;
}
json_t *json_load_file(const char *path, json_error_t *error)
{
json_t *result;
FILE *fp;
error_init(error);
fp = fopen(path, "r");
if(!fp)
{
error_set(error, NULL, "unable to open %s: %s",
path, strerror(errno));
return NULL;
}
result = json_loadf(fp, error);
fclose(fp);
return result;
}
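
The same lexer/parser also backs json_load_file(). A small illustrative round trip through load.c and dump.c; the file names below are hypothetical:

/* jansson_file_demo.c — illustrative only */
#include <stdio.h>
#include <jansson.h>

int main(void)
{
    json_error_t error;
    json_t *cfg = json_load_file("miner.json", &error);   /* hypothetical input file */
    if (!cfg) {
        fprintf(stderr, "miner.json:%d: %s\n", error.line, error.text);
        return 1;
    }

    /* write it back, pretty-printed and in original key order */
    if (json_dump_file(cfg, "miner.out.json", JSON_INDENT(4) | JSON_PRESERVE_ORDER))
        fprintf(stderr, "failed to write miner.out.json\n");

    json_decref(cfg);
    return 0;
}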

95
compat/jansson/strbuffer.c Normal file
View File

@@ -0,0 +1,95 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#define _GNU_SOURCE
#include <stdlib.h>
#include <string.h>
#include "strbuffer.h"
#include "util.h"
#define STRBUFFER_MIN_SIZE 16
#define STRBUFFER_FACTOR 2
int strbuffer_init(strbuffer_t *strbuff)
{
strbuff->size = STRBUFFER_MIN_SIZE;
strbuff->length = 0;
strbuff->value = malloc(strbuff->size);
if(!strbuff->value)
return -1;
/* initialize to empty */
strbuff->value[0] = '\0';
return 0;
}
void strbuffer_close(strbuffer_t *strbuff)
{
free(strbuff->value);
strbuff->size = 0;
strbuff->length = 0;
strbuff->value = NULL;
}
void strbuffer_clear(strbuffer_t *strbuff)
{
strbuff->length = 0;
strbuff->value[0] = '\0';
}
const char *strbuffer_value(const strbuffer_t *strbuff)
{
return strbuff->value;
}
char *strbuffer_steal_value(strbuffer_t *strbuff)
{
char *result = strbuff->value;
strbuffer_init(strbuff);
return result;
}
int strbuffer_append(strbuffer_t *strbuff, const char *string)
{
return strbuffer_append_bytes(strbuff, string, strlen(string));
}
int strbuffer_append_byte(strbuffer_t *strbuff, char byte)
{
return strbuffer_append_bytes(strbuff, &byte, 1);
}
int strbuffer_append_bytes(strbuffer_t *strbuff, const char *data, int size)
{
if(strbuff->length + size >= strbuff->size)
{
strbuff->size = max(strbuff->size * STRBUFFER_FACTOR,
strbuff->length + size + 1);
strbuff->value = realloc(strbuff->value, strbuff->size);
if(!strbuff->value)
return -1;
}
memcpy(strbuff->value + strbuff->length, data, size);
strbuff->length += size;
strbuff->value[strbuff->length] = '\0';
return 0;
}
char strbuffer_pop(strbuffer_t *strbuff)
{
if(strbuff->length > 0) {
char c = strbuff->value[--strbuff->length];
strbuff->value[strbuff->length] = '\0';
return c;
}
else
return '\0';
}

31
compat/jansson/strbuffer.h Normal file
View File

@@ -0,0 +1,31 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#ifndef STRBUFFER_H
#define STRBUFFER_H
typedef struct {
char *value;
int length; /* bytes used */
int size; /* bytes allocated */
} strbuffer_t;
int strbuffer_init(strbuffer_t *strbuff);
void strbuffer_close(strbuffer_t *strbuff);
void strbuffer_clear(strbuffer_t *strbuff);
const char *strbuffer_value(const strbuffer_t *strbuff);
char *strbuffer_steal_value(strbuffer_t *strbuff);
int strbuffer_append(strbuffer_t *strbuff, const char *string);
int strbuffer_append_byte(strbuffer_t *strbuff, char byte);
int strbuffer_append_bytes(strbuffer_t *strbuff, const char *data, int size);
char strbuffer_pop(strbuffer_t *strbuff);
#endif
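
A minimal sketch of the growable string buffer declared above. It is an internal helper of the lexer, but it is self-contained enough to use on its own when compiled together with strbuffer.c (and util.h for the max macro):

/* strbuffer_demo.c — illustrative only */
#include <stdio.h>
#include "strbuffer.h"

int main(void)
{
    strbuffer_t buf;

    if (strbuffer_init(&buf))
        return 1;                      /* out of memory */

    strbuffer_append(&buf, "accepted ");
    strbuffer_append(&buf, "42 shares");
    strbuffer_append_byte(&buf, '\n');

    printf("%s", strbuffer_value(&buf));

    strbuffer_close(&buf);
    return 0;
}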

190
compat/jansson/utf.c Normal file
View File

@@ -0,0 +1,190 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#include <string.h>
#include "utf.h"
int utf8_encode(int32_t codepoint, char *buffer, int *size)
{
if(codepoint < 0)
return -1;
else if(codepoint < 0x80)
{
buffer[0] = (char)codepoint;
*size = 1;
}
else if(codepoint < 0x800)
{
buffer[0] = 0xC0 + ((codepoint & 0x7C0) >> 6);
buffer[1] = 0x80 + ((codepoint & 0x03F));
*size = 2;
}
else if(codepoint < 0x10000)
{
buffer[0] = 0xE0 + ((codepoint & 0xF000) >> 12);
buffer[1] = 0x80 + ((codepoint & 0x0FC0) >> 6);
buffer[2] = 0x80 + ((codepoint & 0x003F));
*size = 3;
}
else if(codepoint <= 0x10FFFF)
{
buffer[0] = 0xF0 + ((codepoint & 0x1C0000) >> 18);
buffer[1] = 0x80 + ((codepoint & 0x03F000) >> 12);
buffer[2] = 0x80 + ((codepoint & 0x000FC0) >> 6);
buffer[3] = 0x80 + ((codepoint & 0x00003F));
*size = 4;
}
else
return -1;
return 0;
}
int utf8_check_first(char byte)
{
unsigned char u = (unsigned char)byte;
if(u < 0x80)
return 1;
if(0x80 <= u && u <= 0xBF) {
/* second, third or fourth byte of a multi-byte
sequence, i.e. a "continuation byte" */
return 0;
}
else if(u == 0xC0 || u == 0xC1) {
/* overlong encoding of an ASCII byte */
return 0;
}
else if(0xC2 <= u && u <= 0xDF) {
/* 2-byte sequence */
return 2;
}
else if(0xE0 <= u && u <= 0xEF) {
/* 3-byte sequence */
return 3;
}
else if(0xF0 <= u && u <= 0xF4) {
/* 4-byte sequence */
return 4;
}
else { /* u >= 0xF5 */
/* Restricted (start of 4-, 5- or 6-byte sequence) or invalid
UTF-8 */
return 0;
}
}
int utf8_check_full(const char *buffer, int size, int32_t *codepoint)
{
int i;
int32_t value = 0;
unsigned char u = (unsigned char)buffer[0];
if(size == 2)
{
value = u & 0x1F;
}
else if(size == 3)
{
value = u & 0xF;
}
else if(size == 4)
{
value = u & 0x7;
}
else
return 0;
for(i = 1; i < size; i++)
{
u = (unsigned char)buffer[i];
if(u < 0x80 || u > 0xBF) {
/* not a continuation byte */
return 0;
}
value = (value << 6) + (u & 0x3F);
}
if(value > 0x10FFFF) {
/* not in Unicode range */
return 0;
}
else if(0xD800 <= value && value <= 0xDFFF) {
/* invalid code point (UTF-16 surrogate halves) */
return 0;
}
else if((size == 2 && value < 0x80) ||
(size == 3 && value < 0x800) ||
(size == 4 && value < 0x10000)) {
/* overlong encoding */
return 0;
}
if(codepoint)
*codepoint = value;
return 1;
}
const char *utf8_iterate(const char *buffer, int32_t *codepoint)
{
int count;
int32_t value;
if(!*buffer)
return buffer;
count = utf8_check_first(buffer[0]);
if(count <= 0)
return NULL;
if(count == 1)
value = (unsigned char)buffer[0];
else
{
if(!utf8_check_full(buffer, count, &value))
return NULL;
}
if(codepoint)
*codepoint = value;
return buffer + count;
}
int utf8_check_string(const char *string, int length)
{
int i;
if(length == -1)
length = strlen(string);
for(i = 0; i < length; i++)
{
int count = utf8_check_first(string[i]);
if(count == 0)
return 0;
else if(count > 1)
{
if(i + count > length)
return 0;
if(!utf8_check_full(&string[i], count, NULL))
return 0;
i += count - 1;
}
}
return 1;
}
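
A short sketch of how the pieces above fit together (a hypothetical caller, not part of Jansson): utf8_encode() turns a code point into bytes, utf8_check_string() validates the buffer, and utf8_iterate() walks it back one code point at a time.

#include <stdio.h>
#include "utf.h"

int main(void)
{
    char buf[5] = {0};
    int size = 0;
    const char *p;
    int32_t cp;

    if(utf8_encode(0x20AC, buf, &size))   /* U+20AC encodes to 3 bytes */
        return 1;
    if(!utf8_check_string(buf, size))     /* validate the encoded bytes */
        return 1;

    for(p = buf; *p; ) {                  /* decode one code point per step */
        p = utf8_iterate(p, &cp);
        if(!p)
            return 1;
        printf("U+%04X\n", (unsigned)cp);
    }
    return 0;
}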

28
compat/jansson/utf.h Normal file
View File

@@ -0,0 +1,28 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#ifndef UTF_H
#define UTF_H
#include <config.h>
#ifdef HAVE_INTTYPES_H
/* inttypes.h includes stdint.h in a standard environment, so there's
no need to include stdint.h separately. If inttypes.h doesn't define
int32_t, it's defined in config.h. */
#include <inttypes.h>
#endif
int utf8_encode(int32_t codepoint, char *buffer, int *size);
int utf8_check_first(char byte);
int utf8_check_full(const char *buffer, int size, int32_t *codepoint);
const char *utf8_iterate(const char *buffer, int32_t *codepoint);
int utf8_check_string(const char *string, int length);
#endif

13
compat/jansson/util.h Normal file
View File

@@ -0,0 +1,13 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#ifndef UTIL_H
#define UTIL_H
#define max(a, b) ((a) > (b) ? (a) : (b))
#endif

976
compat/jansson/value.c Normal file
View File

@@ -0,0 +1,976 @@
/*
* Copyright (c) 2009, 2010 Petri Lehtinen <petri@digip.org>
*
* Jansson is free software; you can redistribute it and/or modify
* it under the terms of the MIT license. See LICENSE for details.
*/
#define _GNU_SOURCE
#include <config.h>
#include <stdlib.h>
#include <string.h>
#include <jansson.h>
#include "hashtable.h"
#include "jansson_private.h"
#include "utf.h"
#include "util.h"
static inline void json_init(json_t *json, json_type type)
{
json->type = type;
json->refcount = 1;
}
/*** object ***/
/* This macro just returns a pointer that's a few bytes backwards from
string. This makes it possible to pass a pointer to object_key_t
when only the string inside it is used, without actually creating
an object_key_t instance. */
#define string_to_key(string) container_of(string, object_key_t, key)
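/* Illustration (hypothetical offsets, not part of the library): with a layout
   such as { unsigned long serial; char key[]; }, container_of() subtracts
   offsetof(object_key_t, key) from the string pointer, so string_to_key(str)
   recovers the enclosing object_key_t. That is what lets the hashtable
   lookups below pass a bare C string to hash_key() and key_equal(). */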
static unsigned int hash_key(const void *ptr)
{
const char *str = ((const object_key_t *)ptr)->key;
unsigned int hash = 5381;
unsigned int c;
while((c = (unsigned int)*str))
{
hash = ((hash << 5) + hash) + c;
str++;
}
return hash;
}
static int key_equal(const void *ptr1, const void *ptr2)
{
return strcmp(((const object_key_t *)ptr1)->key,
((const object_key_t *)ptr2)->key) == 0;
}
static void value_decref(void *value)
{
json_decref((json_t *)value);
}
json_t *json_object(void)
{
json_object_t *object = malloc(sizeof(json_object_t));
if(!object)
return NULL;
json_init(&object->json, JSON_OBJECT);
if(hashtable_init(&object->hashtable, hash_key, key_equal,
free, value_decref))
{
free(object);
return NULL;
}
object->serial = 0;
object->visited = 0;
return &object->json;
}
static void json_delete_object(json_object_t *object)
{
hashtable_close(&object->hashtable);
free(object);
}
unsigned int json_object_size(const json_t *json)
{
json_object_t *object;
if(!json_is_object(json))
return -1;
object = json_to_object(json);
return object->hashtable.size;
}
json_t *json_object_get(const json_t *json, const char *key)
{
json_object_t *object;
if(!json_is_object(json))
return NULL;
object = json_to_object(json);
return hashtable_get(&object->hashtable, string_to_key(key));
}
int json_object_set_new_nocheck(json_t *json, const char *key, json_t *value)
{
json_object_t *object;
object_key_t *k;
if(!key || !value)
return -1;
if(!json_is_object(json) || json == value)
{
json_decref(value);
return -1;
}
object = json_to_object(json);
k = malloc(sizeof(object_key_t) + strlen(key) + 1);
if(!k)
return -1;
k->serial = object->serial++;
strcpy(k->key, key);
if(hashtable_set(&object->hashtable, k, value))
{
json_decref(value);
return -1;
}
return 0;
}
int json_object_set_new(json_t *json, const char *key, json_t *value)
{
if(!key || !utf8_check_string(key, -1))
{
json_decref(value);
return -1;
}
return json_object_set_new_nocheck(json, key, value);
}
int json_object_del(json_t *json, const char *key)
{
json_object_t *object;
if(!json_is_object(json))
return -1;
object = json_to_object(json);
return hashtable_del(&object->hashtable, string_to_key(key));
}
int json_object_clear(json_t *json)
{
json_object_t *object;
if(!json_is_object(json))
return -1;
object = json_to_object(json);
hashtable_clear(&object->hashtable);
return 0;
}
int json_object_update(json_t *object, json_t *other)
{
void *iter;
if(!json_is_object(object) || !json_is_object(other))
return -1;
iter = json_object_iter(other);
while(iter) {
const char *key;
json_t *value;
key = json_object_iter_key(iter);
value = json_object_iter_value(iter);
if(json_object_set_nocheck(object, key, value))
return -1;
iter = json_object_iter_next(other, iter);
}
return 0;
}
void *json_object_iter(json_t *json)
{
json_object_t *object;
if(!json_is_object(json))
return NULL;
object = json_to_object(json);
return hashtable_iter(&object->hashtable);
}
void *json_object_iter_at(json_t *json, const char *key)
{
json_object_t *object;
if(!key || !json_is_object(json))
return NULL;
object = json_to_object(json);
return hashtable_iter_at(&object->hashtable, string_to_key(key));
}
void *json_object_iter_next(json_t *json, void *iter)
{
json_object_t *object;
if(!json_is_object(json) || iter == NULL)
return NULL;
object = json_to_object(json);
return hashtable_iter_next(&object->hashtable, iter);
}
const object_key_t *jsonp_object_iter_fullkey(void *iter)
{
if(!iter)
return NULL;
return hashtable_iter_key(iter);
}
const char *json_object_iter_key(void *iter)
{
if(!iter)
return NULL;
return jsonp_object_iter_fullkey(iter)->key;
}
json_t *json_object_iter_value(void *iter)
{
if(!iter)
return NULL;
return (json_t *)hashtable_iter_value(iter);
}
int json_object_iter_set_new(json_t *json, void *iter, json_t *value)
{
json_object_t *object;
if(!json_is_object(json) || !iter || !value)
return -1;
object = json_to_object(json);
hashtable_iter_set(&object->hashtable, iter, value);
return 0;
}
static int json_object_equal(json_t *object1, json_t *object2)
{
void *iter;
if(json_object_size(object1) != json_object_size(object2))
return 0;
iter = json_object_iter(object1);
while(iter)
{
const char *key;
json_t *value1, *value2;
key = json_object_iter_key(iter);
value1 = json_object_iter_value(iter);
value2 = json_object_get(object2, key);
if(!json_equal(value1, value2))
return 0;
iter = json_object_iter_next(object1, iter);
}
return 1;
}
static json_t *json_object_copy(json_t *object)
{
json_t *result;
void *iter;
result = json_object();
if(!result)
return NULL;
iter = json_object_iter(object);
while(iter)
{
const char *key;
json_t *value;
key = json_object_iter_key(iter);
value = json_object_iter_value(iter);
json_object_set_nocheck(result, key, value);
iter = json_object_iter_next(object, iter);
}
return result;
}
static json_t *json_object_deep_copy(json_t *object)
{
json_t *result;
void *iter;
result = json_object();
if(!result)
return NULL;
iter = json_object_iter(object);
while(iter)
{
const char *key;
json_t *value;
key = json_object_iter_key(iter);
value = json_object_iter_value(iter);
json_object_set_new_nocheck(result, key, json_deep_copy(value));
iter = json_object_iter_next(object, iter);
}
return result;
}
/*** array ***/
json_t *json_array(void)
{
json_array_t *array = malloc(sizeof(json_array_t));
if(!array)
return NULL;
json_init(&array->json, JSON_ARRAY);
array->entries = 0;
array->size = 8;
array->table = malloc(array->size * sizeof(json_t *));
if(!array->table) {
free(array);
return NULL;
}
array->visited = 0;
return &array->json;
}
static void json_delete_array(json_array_t *array)
{
unsigned int i;
for(i = 0; i < array->entries; i++)
json_decref(array->table[i]);
free(array->table);
free(array);
}
unsigned int json_array_size(const json_t *json)
{
if(!json_is_array(json))
return 0;
return json_to_array(json)->entries;
}
json_t *json_array_get(const json_t *json, unsigned int index)
{
json_array_t *array;
if(!json_is_array(json))
return NULL;
array = json_to_array(json);
if(index >= array->entries)
return NULL;
return array->table[index];
}
int json_array_set_new(json_t *json, unsigned int index, json_t *value)
{
json_array_t *array;
if(!value)
return -1;
if(!json_is_array(json) || json == value)
{
json_decref(value);
return -1;
}
array = json_to_array(json);
if(index >= array->entries)
{
json_decref(value);
return -1;
}
json_decref(array->table[index]);
array->table[index] = value;
return 0;
}
static void array_move(json_array_t *array, unsigned int dest,
unsigned int src, unsigned int count)
{
memmove(&array->table[dest], &array->table[src], count * sizeof(json_t *));
}
static void array_copy(json_t **dest, unsigned int dpos,
json_t **src, unsigned int spos,
unsigned int count)
{
memcpy(&dest[dpos], &src[spos], count * sizeof(json_t *));
}
static json_t **json_array_grow(json_array_t *array,
unsigned int amount,
int copy)
{
unsigned int new_size;
json_t **old_table, **new_table;
if(array->entries + amount <= array->size)
return array->table;
old_table = array->table;
new_size = max(array->size + amount, array->size * 2);
new_table = malloc(new_size * sizeof(json_t *));
if(!new_table)
return NULL;
array->size = new_size;
array->table = new_table;
if(copy) {
array_copy(array->table, 0, old_table, 0, array->entries);
free(old_table);
return array->table;
}
return old_table;
}
int json_array_append_new(json_t *json, json_t *value)
{
json_array_t *array;
if(!value)
return -1;
if(!json_is_array(json) || json == value)
{
json_decref(value);
return -1;
}
array = json_to_array(json);
if(!json_array_grow(array, 1, 1)) {
json_decref(value);
return -1;
}
array->table[array->entries] = value;
array->entries++;
return 0;
}
int json_array_insert_new(json_t *json, unsigned int index, json_t *value)
{
json_array_t *array;
json_t **old_table;
if(!value)
return -1;
if(!json_is_array(json) || json == value) {
json_decref(value);
return -1;
}
array = json_to_array(json);
if(index > array->entries) {
json_decref(value);
return -1;
}
old_table = json_array_grow(array, 1, 0);
if(!old_table) {
json_decref(value);
return -1;
}
if(old_table != array->table) {
array_copy(array->table, 0, old_table, 0, index);
array_copy(array->table, index + 1, old_table, index,
array->entries - index);
free(old_table);
}
else
array_move(array, index + 1, index, array->entries - index);
array->table[index] = value;
array->entries++;
return 0;
}
int json_array_remove(json_t *json, unsigned int index)
{
json_array_t *array;
if(!json_is_array(json))
return -1;
array = json_to_array(json);
if(index >= array->entries)
return -1;
json_decref(array->table[index]);
array_move(array, index, index + 1, array->entries - index);
array->entries--;
return 0;
}
int json_array_clear(json_t *json)
{
json_array_t *array;
unsigned int i;
if(!json_is_array(json))
return -1;
array = json_to_array(json);
for(i = 0; i < array->entries; i++)
json_decref(array->table[i]);
array->entries = 0;
return 0;
}
int json_array_extend(json_t *json, json_t *other_json)
{
json_array_t *array, *other;
unsigned int i;
if(!json_is_array(json) || !json_is_array(other_json))
return -1;
array = json_to_array(json);
other = json_to_array(other_json);
if(!json_array_grow(array, other->entries, 1))
return -1;
for(i = 0; i < other->entries; i++)
json_incref(other->table[i]);
array_copy(array->table, array->entries, other->table, 0, other->entries);
array->entries += other->entries;
return 0;
}
static int json_array_equal(json_t *array1, json_t *array2)
{
unsigned int i, size;
size = json_array_size(array1);
if(size != json_array_size(array2))
return 0;
for(i = 0; i < size; i++)
{
json_t *value1, *value2;
value1 = json_array_get(array1, i);
value2 = json_array_get(array2, i);
if(!json_equal(value1, value2))
return 0;
}
return 1;
}
static json_t *json_array_copy(json_t *array)
{
json_t *result;
unsigned int i;
result = json_array();
if(!result)
return NULL;
for(i = 0; i < json_array_size(array); i++)
json_array_append(result, json_array_get(array, i));
return result;
}
static json_t *json_array_deep_copy(json_t *array)
{
json_t *result;
unsigned int i;
result = json_array();
if(!result)
return NULL;
for(i = 0; i < json_array_size(array); i++)
json_array_append_new(result, json_deep_copy(json_array_get(array, i)));
return result;
}
/*** string ***/
json_t *json_string_nocheck(const char *value)
{
json_string_t *string;
if(!value)
return NULL;
string = malloc(sizeof(json_string_t));
if(!string)
return NULL;
json_init(&string->json, JSON_STRING);
string->value = strdup(value);
if(!string->value) {
free(string);
return NULL;
}
return &string->json;
}
json_t *json_string(const char *value)
{
if(!value || !utf8_check_string(value, -1))
return NULL;
return json_string_nocheck(value);
}
const char *json_string_value(const json_t *json)
{
if(!json_is_string(json))
return NULL;
return json_to_string(json)->value;
}
int json_string_set_nocheck(json_t *json, const char *value)
{
char *dup;
json_string_t *string;
dup = strdup(value);
if(!dup)
return -1;
string = json_to_string(json);
free(string->value);
string->value = dup;
return 0;
}
int json_string_set(json_t *json, const char *value)
{
if(!value || !utf8_check_string(value, -1))
return -1;
return json_string_set_nocheck(json, value);
}
static void json_delete_string(json_string_t *string)
{
free(string->value);
free(string);
}
static int json_string_equal(json_t *string1, json_t *string2)
{
return strcmp(json_string_value(string1), json_string_value(string2)) == 0;
}
static json_t *json_string_copy(json_t *string)
{
return json_string_nocheck(json_string_value(string));
}
/*** integer ***/
json_t *json_integer(int value)
{
json_integer_t *integer = malloc(sizeof(json_integer_t));
if(!integer)
return NULL;
json_init(&integer->json, JSON_INTEGER);
integer->value = value;
return &integer->json;
}
int json_integer_value(const json_t *json)
{
if(!json_is_integer(json))
return 0;
return json_to_integer(json)->value;
}
int json_integer_set(json_t *json, int value)
{
if(!json_is_integer(json))
return -1;
json_to_integer(json)->value = value;
return 0;
}
static void json_delete_integer(json_integer_t *integer)
{
free(integer);
}
static int json_integer_equal(json_t *integer1, json_t *integer2)
{
return json_integer_value(integer1) == json_integer_value(integer2);
}
static json_t *json_integer_copy(json_t *integer)
{
return json_integer(json_integer_value(integer));
}
/*** real ***/
json_t *json_real(double value)
{
json_real_t *real = malloc(sizeof(json_real_t));
if(!real)
return NULL;
json_init(&real->json, JSON_REAL);
real->value = value;
return &real->json;
}
double json_real_value(const json_t *json)
{
if(!json_is_real(json))
return 0;
return json_to_real(json)->value;
}
int json_real_set(json_t *json, double value)
{
if(!json_is_real(json))
return 0;
json_to_real(json)->value = value;
return 0;
}
static void json_delete_real(json_real_t *real)
{
free(real);
}
static int json_real_equal(json_t *real1, json_t *real2)
{
return json_real_value(real1) == json_real_value(real2);
}
static json_t *json_real_copy(json_t *real)
{
return json_real(json_real_value(real));
}
/*** number ***/
double json_number_value(const json_t *json)
{
if(json_is_integer(json))
return json_integer_value(json);
else if(json_is_real(json))
return json_real_value(json);
else
return 0.0;
}
/*** simple values ***/
json_t *json_true(void)
{
static json_t the_true = {
.type = JSON_TRUE,
.refcount = (unsigned int)-1
};
return &the_true;
}
json_t *json_false(void)
{
static json_t the_false = {
.type = JSON_FALSE,
.refcount = (unsigned int)-1
};
return &the_false;
}
json_t *json_null(void)
{
static json_t the_null = {
.type = JSON_NULL,
.refcount = (unsigned int)-1
};
return &the_null;
}
/*** deletion ***/
void json_delete(json_t *json)
{
if(json_is_object(json))
json_delete_object(json_to_object(json));
else if(json_is_array(json))
json_delete_array(json_to_array(json));
else if(json_is_string(json))
json_delete_string(json_to_string(json));
else if(json_is_integer(json))
json_delete_integer(json_to_integer(json));
else if(json_is_real(json))
json_delete_real(json_to_real(json));
/* json_delete is not called for true, false or null */
}
/*** equality ***/
int json_equal(json_t *json1, json_t *json2)
{
if(!json1 || !json2)
return 0;
if(json_typeof(json1) != json_typeof(json2))
return 0;
/* this covers true, false and null as they are singletons */
if(json1 == json2)
return 1;
if(json_is_object(json1))
return json_object_equal(json1, json2);
if(json_is_array(json1))
return json_array_equal(json1, json2);
if(json_is_string(json1))
return json_string_equal(json1, json2);
if(json_is_integer(json1))
return json_integer_equal(json1, json2);
if(json_is_real(json1))
return json_real_equal(json1, json2);
return 0;
}
/*** copying ***/
json_t *json_copy(json_t *json)
{
if(!json)
return NULL;
if(json_is_object(json))
return json_object_copy(json);
if(json_is_array(json))
return json_array_copy(json);
if(json_is_string(json))
return json_string_copy(json);
if(json_is_integer(json))
return json_integer_copy(json);
if(json_is_real(json))
return json_real_copy(json);
if(json_is_true(json) || json_is_false(json) || json_is_null(json))
return json;
return NULL;
}
json_t *json_deep_copy(json_t *json)
{
if(!json)
return NULL;
if(json_is_object(json))
return json_object_deep_copy(json);
if(json_is_array(json))
return json_array_deep_copy(json);
/* for the rest of the types, deep copying doesn't differ from
shallow copying */
if(json_is_string(json))
return json_string_copy(json);
if(json_is_integer(json))
return json_integer_copy(json);
if(json_is_real(json))
return json_real_copy(json);
if(json_is_true(json) || json_is_false(json) || json_is_null(json))
return json;
return NULL;
}
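
A minimal sketch of the value API defined above (a hypothetical caller, not part of the library): the *_new setters transfer ownership of the child to the container, so a single json_decref() on the root releases the whole tree.

#include <stdio.h>
#include <jansson.h>

int main(void)
{
    json_t *root = json_object();
    json_t *tags = json_array();

    json_array_append_new(tags, json_string("cpu"));
    json_array_append_new(tags, json_string("miner"));

    json_object_set_new(root, "name", json_string("sugarmaker"));
    json_object_set_new(root, "threads", json_integer(4));
    json_object_set_new(root, "tags", tags);       /* root now owns the array */

    printf("threads = %d\n",
           json_integer_value(json_object_get(root, "threads")));

    json_decref(root);                              /* frees the whole tree */
    return 0;
}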

167
configure.ac Normal file
View File

@@ -0,0 +1,167 @@
AC_INIT([nolambocoin], [1.0])
AC_PREREQ([2.59c])
AC_CANONICAL_SYSTEM
AC_CONFIG_SRCDIR([cpu-miner.c])
AM_INIT_AUTOMAKE([foreign])
AC_CONFIG_HEADERS([cpuminer-config.h])
dnl Make sure anyone changing configure.ac/Makefile.am has a clue
AM_MAINTAINER_MODE
EXTERNAL_CFLAGS="$CFLAGS"
dnl Checks for programs
AC_PROG_CC_C99
AC_PROG_GCC_TRADITIONAL
AM_PROG_CC_C_O
AM_PROG_AS
AC_PROG_RANLIB
if test -n "$EXTERNAL_CFLAGS"; then
CFLAGS="$EXTERNAL_CFLAGS"
else
CFLAGS='-Wall -O2 -fomit-frame-pointer'
fi
dnl Checks for header files
AC_HEADER_STDC
AC_CHECK_HEADERS([sys/endian.h sys/param.h syslog.h])
# sys/sysctl.h requires sys/types.h on FreeBSD
# sys/sysctl.h requires sys/param.h on OpenBSD
AC_CHECK_HEADERS([sys/sysctl.h], [], [],
[#include <sys/types.h>
#ifdef HAVE_SYS_PARAM_H
#include <sys/param.h>
#endif
])
AC_CHECK_DECLS([be32dec, le32dec, be32enc, le32enc], [], [],
[AC_INCLUDES_DEFAULT
#ifdef HAVE_SYS_ENDIAN_H
#include <sys/endian.h>
#endif
])
AC_FUNC_ALLOCA
AC_CHECK_FUNCS([getopt_long])
PTHREAD_FLAGS="-pthread"
WS2_LIBS=""
# conditional builds for all platforms
case $target in
*-*-mingw*)
have_win32=true
PTHREAD_FLAGS=""
WS2_LIBS="-lws2_32"
;;
*linux*)
if test -z "$LIBCURL"; then
LIBCURL="-lcurl"
fi
# libcurl install path (for mingw : --with-curl=/usr/local)
AC_ARG_WITH([curl],
[ --with-curl=PATH prefix where curl is installed [default=/usr]])
if test -n "$with_curl"; then
LIBCURL_CFLAGS="$LIBCURL_CFLAGS -I$with_curl/include"
LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS -I$with_curl/include"
LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS -L$with_curl/lib"
fi
# SSL install path (for mingw : --with-crypto=/usr/local/ssl)
AC_ARG_WITH([crypto],
[ --with-crypto=PATH prefix where openssl crypto is installed [default=/usr]])
if test -n "$with_crypto" ; then
LIBCURL_CFLAGS="$LIBCURL_CFLAGS -I$with_crypto/include"
LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS -I$with_crypto/include"
LIBCURL_LDFLAGS="-L$with_crypto/lib $LIBCURL_LDFLAGS"
LIBCURL="$LIBCURL -lssl -lcrypto"
fi
CFLAGS="$CFLAGS $LIBCURL_CFLAGS"
CPPFLAGS="$CPPFLAGS $LIBCURL_CPPFLAGS"
LDFLAGS="$LDFLAGS $LIBCURL_LDFLAGS"
AC_SUBST(LIBCURL)
# AC_SUBST(LIBCURL_CFLAGS)
# AC_SUBST(LIBCURL_CPPFLAGS)
# AC_SUBST(LIBCURL_LDFLAGS)
;;
*-apple-*)
if test -z "$LIBCURL"; then
LIBCURL="-lcurl"
fi
# libcurl install path (for mingw : --with-curl=/usr/local)
AC_ARG_WITH([curl],
[ --with-curl=PATH prefix where curl is installed [default=/usr]])
if test -n "$with_curl"; then
LIBCURL_CFLAGS="$LIBCURL_CFLAGS -I$with_curl/include"
LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS -I$with_curl/include"
LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS -L$with_curl/lib"
fi
# SSL install path (for mingw : --with-crypto=/usr/local/ssl)
AC_ARG_WITH([crypto],
[ --with-crypto=PATH prefix where openssl crypto is installed [default=/usr]])
if test -n "$with_crypto" ; then
LIBCURL_CFLAGS="$LIBCURL_CFLAGS -I$with_crypto/include"
LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS -I$with_crypto/include"
LIBCURL_LDFLAGS="-L$with_crypto/lib $LIBCURL_LDFLAGS"
LIBCURL="$LIBCURL -lssl -lcrypto"
fi
CFLAGS="$CFLAGS $LIBCURL_CFLAGS"
CPPFLAGS="$CPPFLAGS $LIBCURL_CPPFLAGS"
LDFLAGS="$LDFLAGS $LIBCURL_LDFLAGS"
AC_SUBST(LIBCURL)
# AC_SUBST(LIBCURL_CFLAGS)
# AC_SUBST(LIBCURL_CPPFLAGS)
# AC_SUBST(LIBCURL_LDFLAGS)
;;
esac
case $target in
*-*-mingw*)
# LIBCURL_CHECK_CONFIG(, 7.15.2, ,
# [AC_MSG_ERROR([Missing required libcurl >= 7.15.2])])
;;
esac
AC_CHECK_LIB(jansson, json_loads, request_jansson=false, request_jansson=true)
AC_CHECK_LIB([pthread], [pthread_create], PTHREAD_LIBS="-lpthread",
AC_CHECK_LIB([pthreadGC2], [pthread_create], PTHREAD_LIBS="-lpthreadGC2",
AC_CHECK_LIB([pthreadGC1], [pthread_create], PTHREAD_LIBS="-lpthreadGC1",
AC_CHECK_LIB([pthreadGC], [pthread_create], PTHREAD_LIBS="-lpthreadGC"
))))
AM_CONDITIONAL([WANT_JANSSON], [test x$request_jansson = xtrue])
AM_CONDITIONAL([HAVE_WINDOWS], [test x$have_win32 = xtrue])
if test x$request_jansson = xtrue
then
JANSSON_LIBS="compat/jansson/libjansson.a"
else
JANSSON_LIBS=-ljansson
fi
AC_SUBST(JANSSON_LIBS)
AC_SUBST(PTHREAD_FLAGS)
AC_SUBST(PTHREAD_LIBS)
AC_SUBST(WS2_LIBS)
AC_CONFIG_FILES([
Makefile
compat/Makefile
compat/jansson/Makefile
])
AC_OUTPUT

2102
cpu-miner.c Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,24 @@
# WARNING
# Try on Virtual Machine (Ubuntu 16.04)
# https://lxadm.com/Static_compilation_of_cpuminer
# DEPENDS
## OPENSSL
wget https://www.openssl.org/source/openssl-1.1.0g.tar.gz
tar -xvzf openssl-1.1.0g.tar.gz
cd openssl-1.1.0g/
./config no-shared
make -j$(nproc)
sudo make install
cd ..
## CURL
wget https://github.com/curl/curl/releases/download/curl-7_57_0/curl-7.57.0.tar.gz
tar -xvzf curl-7.57.0.tar.gz
cd curl-7.57.0/
./buildconf | grep "buildconf: OK"
./configure --disable-shared | grep "Static=yes"
make -j$(nproc)
sudo make install
cd ../..

View File

@@ -0,0 +1,24 @@
# WARNING
# Try on Virtual Machine (Ubuntu 16.04)
# https://lxadm.com/Static_compilation_of_cpuminer
# DEPENDS
## OPENSSL
wget https://www.openssl.org/source/openssl-1.1.0g.tar.gz
tar -xvzf openssl-1.1.0g.tar.gz
cd openssl-1.1.0g/
./config no-shared
make -j$(nproc)
sudo make install
cd ..
## CURL
wget https://github.com/curl/curl/releases/download/curl-7_57_0/curl-7.57.0.tar.gz
tar -xvzf curl-7.57.0.tar.gz
cd curl-7.57.0/
./buildconf | grep "buildconf: OK"
./configure --disable-shared | grep "Static=yes"
make -j$(nproc)
sudo make install
cd ../..

View File

@@ -0,0 +1,24 @@
# WARNING
# Try on Virtual Machine (Ubuntu 16.04)
# https://lxadm.com/Static_compilation_of_cpuminer
# DEPENDS
## OPENSSL
wget https://www.openssl.org/source/openssl-1.1.0g.tar.gz
tar -xvzf openssl-1.1.0g.tar.gz
cd openssl-1.1.0g/
./config no-shared
make -j$(nproc)
sudo make install
cd ..
## CURL
wget https://github.com/curl/curl/releases/download/curl-7_57_0/curl-7.57.0.tar.gz
tar -xvzf curl-7.57.0.tar.gz
cd curl-7.57.0/
./buildconf | grep "buildconf: OK"
./configure --disable-shared | grep "Static=yes"
make -j$(nproc)
sudo make install
cd ../..

View File

@@ -0,0 +1,24 @@
# WARNING
# Try on Virtual Machine (Ubuntu 16.04)
# https://lxadm.com/Static_compilation_of_cpuminer
# DEPENDS
## OPENSSL
wget https://www.openssl.org/source/openssl-1.1.0g.tar.gz
tar -xvzf openssl-1.1.0g.tar.gz
cd openssl-1.1.0g/
./config no-shared
make -j$(nproc)
sudo make install
cd ..
## CURL
wget https://github.com/curl/curl/releases/download/curl-7_57_0/curl-7.57.0.tar.gz
tar -xvzf curl-7.57.0.tar.gz
cd curl-7.57.0/
./buildconf | grep "buildconf: OK"
./configure --disable-shared | grep "Static=yes"
make -j$(nproc)
sudo make install
cd ../..

3
deps-osx/deps-osx.sh Normal file
View File

@@ -0,0 +1,3 @@
# https://gist.github.com/quagliero/90f493f123c7b1ddba5428ba0146329a
brew install automake openssl zlib curl jansson make

View File

@@ -0,0 +1,28 @@
#!/bin/bash
set -e
PREFIX=${PWD}/i686-w64-mingw32
CURL_PACKAGE=curl-7.54.1
CURL_PACKAGE_FILE=${CURL_PACKAGE}.tar.gz
CURL_PACKAGE_FILE_SHA256=cd404b808b253512dafec4fed0fb2cc98370d818a7991826c3021984fc27f9d0
CURL_CHECKSUM_FILE=${CURL_PACKAGE_FILE}.sha256
wget https://curl.haxx.se/download/$CURL_PACKAGE_FILE -O $CURL_PACKAGE_FILE
echo "${CURL_PACKAGE_FILE_SHA256} ${CURL_PACKAGE_FILE}" > $CURL_CHECKSUM_FILE
sha256sum -c $CURL_CHECKSUM_FILE
rm $CURL_CHECKSUM_FILE
rm -rf pthread-win32
git clone https://github.com/GerHobbelt/pthread-win32.git
tar zxvf $CURL_PACKAGE_FILE
cd $CURL_PACKAGE
./configure --host=i686-w64-mingw32 --disable-shared --enable-static --with-winssl --prefix=$PREFIX
make install
cd ../pthread-win32/
cp config.h pthreads_win32_config.h
make -f GNUmakefile CROSS="i686-w64-mingw32-" clean GC-static
cp libpthreadGC2.a ${PREFIX}/lib/libpthread.a
cp pthread.h semaphore.h sched.h ${PREFIX}/include

Binary file not shown.

File diff suppressed because it is too large

View File

@@ -0,0 +1,2 @@
@CMAKE_CONFIGURABLE_FILE_CONTENT@

View File

@@ -0,0 +1,61 @@
include(CheckCSourceCompiles)
option(CURL_HIDDEN_SYMBOLS "Set to ON to hide libcurl internal symbols (=hide all symbols that aren't officially external)." ON)
mark_as_advanced(CURL_HIDDEN_SYMBOLS)
if(CURL_HIDDEN_SYMBOLS)
set(SUPPORTS_SYMBOL_HIDING FALSE)
if(CMAKE_C_COMPILER_ID MATCHES "Clang")
set(SUPPORTS_SYMBOL_HIDING TRUE)
set(_SYMBOL_EXTERN "__attribute__ ((__visibility__ (\"default\")))")
set(_CFLAG_SYMBOLS_HIDE "-fvisibility=hidden")
elseif(CMAKE_COMPILER_IS_GNUCC)
if(NOT CMAKE_VERSION VERSION_LESS 2.8.10)
set(GCC_VERSION ${CMAKE_C_COMPILER_VERSION})
else()
execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpversion
OUTPUT_VARIABLE GCC_VERSION)
endif()
if(NOT GCC_VERSION VERSION_LESS 3.4)
# note: this is considered buggy prior to 4.0 but the autotools don't care, so let's ignore that fact
set(SUPPORTS_SYMBOL_HIDING TRUE)
set(_SYMBOL_EXTERN "__attribute__ ((__visibility__ (\"default\")))")
set(_CFLAG_SYMBOLS_HIDE "-fvisibility=hidden")
endif()
elseif(CMAKE_C_COMPILER_ID MATCHES "SunPro" AND NOT CMAKE_C_COMPILER_VERSION VERSION_LESS 8.0)
set(SUPPORTS_SYMBOL_HIDING TRUE)
set(_SYMBOL_EXTERN "__global")
set(_CFLAG_SYMBOLS_HIDE "-xldscope=hidden")
elseif(CMAKE_C_COMPILER_ID MATCHES "Intel" AND NOT CMAKE_C_COMPILER_VERSION VERSION_LESS 9.0)
# note: this should probably just check for version 9.1.045 but I'm not 100% sure
# so let's do it the same way autotools do.
set(SUPPORTS_SYMBOL_HIDING TRUE)
set(_SYMBOL_EXTERN "__attribute__ ((__visibility__ (\"default\")))")
set(_CFLAG_SYMBOLS_HIDE "-fvisibility=hidden")
check_c_source_compiles("#include <stdio.h>
int main (void) { printf(\"icc fvisibility bug test\"); return 0; }" _no_bug)
if(NOT _no_bug)
set(SUPPORTS_SYMBOL_HIDING FALSE)
set(_SYMBOL_EXTERN "")
set(_CFLAG_SYMBOLS_HIDE "")
endif()
elseif(MSVC)
set(SUPPORTS_SYMBOL_HIDING TRUE)
endif()
set(HIDES_CURL_PRIVATE_SYMBOLS ${SUPPORTS_SYMBOL_HIDING})
elseif(MSVC)
if(NOT CMAKE_VERSION VERSION_LESS 3.7)
set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS TRUE) #present since 3.4.3 but broken
set(HIDES_CURL_PRIVATE_SYMBOLS FALSE)
else()
message(WARNING "Hiding private symbols regardless CURL_HIDDEN_SYMBOLS being disabled.")
set(HIDES_CURL_PRIVATE_SYMBOLS TRUE)
endif()
elseif()
set(HIDES_CURL_PRIVATE_SYMBOLS FALSE)
endif()
set(CURL_CFLAG_SYMBOLS_HIDE ${_CFLAG_SYMBOLS_HIDE})
set(CURL_EXTERN_SYMBOL ${_SYMBOL_EXTERN})
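
For context, a small C sketch of the effect of the settings chosen above (illustrative only; the macro name here is made up, but its value mirrors _SYMBOL_EXTERN): when the library is built with -fvisibility=hidden, every symbol defaults to hidden and only functions tagged with the default-visibility attribute remain exported from the shared object.

/* build with: gcc -shared -fPIC -fvisibility=hidden example.c -o libexample.so */

#define EXAMPLE_EXTERN __attribute__ ((__visibility__ ("default")))

EXAMPLE_EXTERN int public_entry(void)   /* stays in the export table */
{
    return 42;
}

int internal_helper(void)               /* hidden from other shared objects */
{
    return 7;
}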

View File

@@ -0,0 +1,551 @@
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2014, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at https://curl.haxx.se/docs/copyright.html.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
***************************************************************************/
#ifdef TIME_WITH_SYS_TIME
/* Time with sys/time test */
#include <sys/types.h>
#include <sys/time.h>
#include <time.h>
int
main ()
{
if ((struct tm *) 0)
return 0;
;
return 0;
}
#endif
#ifdef HAVE_FCNTL_O_NONBLOCK
/* headers for FCNTL_O_NONBLOCK test */
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
/* */
#if defined(sun) || defined(__sun__) || \
defined(__SUNPRO_C) || defined(__SUNPRO_CC)
# if defined(__SVR4) || defined(__srv4__)
# define PLATFORM_SOLARIS
# else
# define PLATFORM_SUNOS4
# endif
#endif
#if (defined(_AIX) || defined(__xlC__)) && !defined(_AIX41)
# define PLATFORM_AIX_V3
#endif
/* */
#if defined(PLATFORM_SUNOS4) || defined(PLATFORM_AIX_V3) || defined(__BEOS__)
#error "O_NONBLOCK does not work on this platform"
#endif
int
main ()
{
/* O_NONBLOCK source test */
int flags = 0;
if(0 != fcntl(0, F_SETFL, flags | O_NONBLOCK))
return 1;
return 0;
}
#endif
/* tests for gethostbyaddr_r or gethostbyname_r */
#if defined(HAVE_GETHOSTBYADDR_R_5_REENTRANT) || \
defined(HAVE_GETHOSTBYADDR_R_7_REENTRANT) || \
defined(HAVE_GETHOSTBYADDR_R_8_REENTRANT) || \
defined(HAVE_GETHOSTBYNAME_R_3_REENTRANT) || \
defined(HAVE_GETHOSTBYNAME_R_5_REENTRANT) || \
defined(HAVE_GETHOSTBYNAME_R_6_REENTRANT)
# define _REENTRANT
/* no idea whether _REENTRANT is always set, just invent a new flag */
# define TEST_GETHOSTBYFOO_REENTRANT
#endif
#if defined(HAVE_GETHOSTBYADDR_R_5) || \
defined(HAVE_GETHOSTBYADDR_R_7) || \
defined(HAVE_GETHOSTBYADDR_R_8) || \
defined(HAVE_GETHOSTBYNAME_R_3) || \
defined(HAVE_GETHOSTBYNAME_R_5) || \
defined(HAVE_GETHOSTBYNAME_R_6) || \
defined(TEST_GETHOSTBYFOO_REENTRANT)
#include <sys/types.h>
#include <netdb.h>
int main(void)
{
char *address = "example.com";
int length = 0;
int type = 0;
struct hostent h;
int rc = 0;
#if defined(HAVE_GETHOSTBYADDR_R_5) || \
defined(HAVE_GETHOSTBYADDR_R_5_REENTRANT) || \
\
defined(HAVE_GETHOSTBYNAME_R_3) || \
defined(HAVE_GETHOSTBYNAME_R_3_REENTRANT)
struct hostent_data hdata;
#elif defined(HAVE_GETHOSTBYADDR_R_7) || \
defined(HAVE_GETHOSTBYADDR_R_7_REENTRANT) || \
defined(HAVE_GETHOSTBYADDR_R_8) || \
defined(HAVE_GETHOSTBYADDR_R_8_REENTRANT) || \
\
defined(HAVE_GETHOSTBYNAME_R_5) || \
defined(HAVE_GETHOSTBYNAME_R_5_REENTRANT) || \
defined(HAVE_GETHOSTBYNAME_R_6) || \
defined(HAVE_GETHOSTBYNAME_R_6_REENTRANT)
char buffer[8192];
int h_errnop;
struct hostent *hp;
#endif
#ifndef gethostbyaddr_r
(void)gethostbyaddr_r;
#endif
#if defined(HAVE_GETHOSTBYADDR_R_5) || \
defined(HAVE_GETHOSTBYADDR_R_5_REENTRANT)
rc = gethostbyaddr_r(address, length, type, &h, &hdata);
#elif defined(HAVE_GETHOSTBYADDR_R_7) || \
defined(HAVE_GETHOSTBYADDR_R_7_REENTRANT)
hp = gethostbyaddr_r(address, length, type, &h, buffer, 8192, &h_errnop);
(void)hp;
#elif defined(HAVE_GETHOSTBYADDR_R_8) || \
defined(HAVE_GETHOSTBYADDR_R_8_REENTRANT)
rc = gethostbyaddr_r(address, length, type, &h, buffer, 8192, &hp, &h_errnop);
#endif
#if defined(HAVE_GETHOSTBYNAME_R_3) || \
defined(HAVE_GETHOSTBYNAME_R_3_REENTRANT)
rc = gethostbyname_r(address, &h, &hdata);
#elif defined(HAVE_GETHOSTBYNAME_R_5) || \
defined(HAVE_GETHOSTBYNAME_R_5_REENTRANT)
rc = gethostbyname_r(address, &h, buffer, 8192, &h_errnop);
(void)hp; /* not used for test */
#elif defined(HAVE_GETHOSTBYNAME_R_6) || \
defined(HAVE_GETHOSTBYNAME_R_6_REENTRANT)
rc = gethostbyname_r(address, &h, buffer, 8192, &hp, &h_errnop);
#endif
(void)length;
(void)type;
(void)rc;
return 0;
}
#endif
#ifdef HAVE_SOCKLEN_T
#ifdef _WIN32
#include <ws2tcpip.h>
#else
#include <sys/types.h>
#include <sys/socket.h>
#endif
int
main ()
{
if ((socklen_t *) 0)
return 0;
if (sizeof (socklen_t))
return 0;
;
return 0;
}
#endif
#ifdef HAVE_IN_ADDR_T
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
int
main ()
{
if ((in_addr_t *) 0)
return 0;
if (sizeof (in_addr_t))
return 0;
;
return 0;
}
#endif
#ifdef HAVE_BOOL_T
#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#ifdef HAVE_STDBOOL_H
#include <stdbool.h>
#endif
int
main ()
{
if (sizeof (bool *) )
return 0;
;
return 0;
}
#endif
#ifdef STDC_HEADERS
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>
#include <float.h>
int main() { return 0; }
#endif
#ifdef RETSIGTYPE_TEST
#include <sys/types.h>
#include <signal.h>
#ifdef signal
# undef signal
#endif
#ifdef __cplusplus
extern "C" void (*signal (int, void (*)(int)))(int);
#else
void (*signal ()) ();
#endif
int
main ()
{
return 0;
}
#endif
#ifdef HAVE_INET_NTOA_R_DECL
#include <arpa/inet.h>
typedef void (*func_type)();
int main()
{
#ifndef inet_ntoa_r
func_type func;
func = (func_type)inet_ntoa_r;
#endif
return 0;
}
#endif
#ifdef HAVE_INET_NTOA_R_DECL_REENTRANT
#define _REENTRANT
#include <arpa/inet.h>
typedef void (*func_type)();
int main()
{
#ifndef inet_ntoa_r
func_type func;
func = (func_type)&inet_ntoa_r;
#endif
return 0;
}
#endif
#ifdef HAVE_GETADDRINFO
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>
int main(void) {
struct addrinfo hints, *ai;
int error;
memset(&hints, 0, sizeof(hints));
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
#ifndef getaddrinfo
(void)getaddrinfo;
#endif
error = getaddrinfo("127.0.0.1", "8080", &hints, &ai);
if (error) {
return 1;
}
return 0;
}
#endif
#ifdef HAVE_FILE_OFFSET_BITS
#ifdef _FILE_OFFSET_BITS
#undef _FILE_OFFSET_BITS
#endif
#define _FILE_OFFSET_BITS 64
#include <sys/types.h>
/* Check that off_t can represent 2**63 - 1 correctly.
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
int main () { ; return 0; }
#endif
#ifdef HAVE_IOCTLSOCKET
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif
int
main ()
{
/* ioctlsocket source code */
int socket;
unsigned long flags = ioctlsocket(socket, FIONBIO, &flags);
;
return 0;
}
#endif
#ifdef HAVE_IOCTLSOCKET_CAMEL
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif
int
main ()
{
/* IoctlSocket source code */
if(0 != IoctlSocket(0, 0, 0))
return 1;
;
return 0;
}
#endif
#ifdef HAVE_IOCTLSOCKET_CAMEL_FIONBIO
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif
int
main ()
{
/* IoctlSocket source code */
long flags = 0;
if(0 != ioctlsocket(0, FIONBIO, &flags))
return 1;
;
return 0;
}
#endif
#ifdef HAVE_IOCTLSOCKET_FIONBIO
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif
int
main ()
{
int flags = 0;
if(0 != ioctlsocket(0, FIONBIO, &flags))
return 1;
;
return 0;
}
#endif
#ifdef HAVE_IOCTL_FIONBIO
/* headers for FIONBIO test */
/* includes start */
#ifdef HAVE_SYS_TYPES_H
# include <sys/types.h>
#endif
#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif
#ifdef HAVE_SYS_SOCKET_H
# include <sys/socket.h>
#endif
#ifdef HAVE_SYS_IOCTL_H
# include <sys/ioctl.h>
#endif
#ifdef HAVE_STROPTS_H
# include <stropts.h>
#endif
int
main ()
{
int flags = 0;
if(0 != ioctl(0, FIONBIO, &flags))
return 1;
;
return 0;
}
#endif
#ifdef HAVE_IOCTL_SIOCGIFADDR
/* headers for FIONBIO test */
/* includes start */
#ifdef HAVE_SYS_TYPES_H
# include <sys/types.h>
#endif
#ifdef HAVE_UNISTD_H
# include <unistd.h>
#endif
#ifdef HAVE_SYS_SOCKET_H
# include <sys/socket.h>
#endif
#ifdef HAVE_SYS_IOCTL_H
# include <sys/ioctl.h>
#endif
#ifdef HAVE_STROPTS_H
# include <stropts.h>
#endif
#include <net/if.h>
int
main ()
{
struct ifreq ifr;
if(0 != ioctl(0, SIOCGIFADDR, &ifr))
return 1;
;
return 0;
}
#endif
#ifdef HAVE_SETSOCKOPT_SO_NONBLOCK
/* includes start */
#ifdef HAVE_WINDOWS_H
# ifndef WIN32_LEAN_AND_MEAN
# define WIN32_LEAN_AND_MEAN
# endif
# include <windows.h>
# ifdef HAVE_WINSOCK2_H
# include <winsock2.h>
# else
# ifdef HAVE_WINSOCK_H
# include <winsock.h>
# endif
# endif
#endif
/* includes start */
#ifdef HAVE_SYS_TYPES_H
# include <sys/types.h>
#endif
#ifdef HAVE_SYS_SOCKET_H
# include <sys/socket.h>
#endif
/* includes end */
int
main ()
{
if(0 != setsockopt(0, SOL_SOCKET, SO_NONBLOCK, 0, 0))
return 1;
;
return 0;
}
#endif
#ifdef HAVE_GLIBC_STRERROR_R
#include <string.h>
#include <errno.h>
int
main () {
char buffer[1024]; /* big enough to play with */
char *string =
strerror_r(EACCES, buffer, sizeof(buffer));
/* this should've returned a string */
if(!string || !string[0])
return 99;
return 0;
}
#endif
#ifdef HAVE_POSIX_STRERROR_R
#include <string.h>
#include <errno.h>
int
main () {
char buffer[1024]; /* big enough to play with */
int error =
strerror_r(EACCES, buffer, sizeof(buffer));
/* This should've returned zero, and written an error string in the
buffer.*/
if(!buffer[0] || error)
return 99;
return 0;
}
#endif
#ifdef HAVE_FSETXATTR_6
#include <sys/xattr.h> /* header from libc, not from libattr */
int
main() {
fsetxattr(0, 0, 0, 0, 0, 0);
return 0;
}
#endif
#ifdef HAVE_FSETXATTR_5
#include <sys/xattr.h> /* header from libc, not from libattr */
int
main() {
fsetxattr(0, 0, 0, 0, 0);
return 0;
}
#endif

View File

@@ -0,0 +1,42 @@
# - Find c-ares
# Find the c-ares includes and library
# This module defines
# CARES_INCLUDE_DIR, where to find ares.h, etc.
# CARES_LIBRARIES, the libraries needed to use c-ares.
# CARES_FOUND, If false, do not try to use c-ares.
# also defined, but not for general use are
# CARES_LIBRARY, where to find the c-ares library.
FIND_PATH(CARES_INCLUDE_DIR ares.h
/usr/local/include
/usr/include
)
SET(CARES_NAMES ${CARES_NAMES} cares)
FIND_LIBRARY(CARES_LIBRARY
NAMES ${CARES_NAMES}
PATHS /usr/lib /usr/local/lib
)
IF (CARES_LIBRARY AND CARES_INCLUDE_DIR)
SET(CARES_LIBRARIES ${CARES_LIBRARY})
SET(CARES_FOUND "YES")
ELSE (CARES_LIBRARY AND CARES_INCLUDE_DIR)
SET(CARES_FOUND "NO")
ENDIF (CARES_LIBRARY AND CARES_INCLUDE_DIR)
IF (CARES_FOUND)
IF (NOT CARES_FIND_QUIETLY)
MESSAGE(STATUS "Found c-ares: ${CARES_LIBRARIES}")
ENDIF (NOT CARES_FIND_QUIETLY)
ELSE (CARES_FOUND)
IF (CARES_FIND_REQUIRED)
MESSAGE(FATAL_ERROR "Could not find c-ares library")
ENDIF (CARES_FIND_REQUIRED)
ENDIF (CARES_FOUND)
MARK_AS_ADVANCED(
CARES_LIBRARY
CARES_INCLUDE_DIR
)

View File

@@ -0,0 +1,289 @@
# - Try to find the GSS Kerberos library
# Once done this will define
#
# GSS_ROOT_DIR - Set this variable to the root installation of GSS
#
# Read-Only variables:
# GSS_FOUND - system has the Heimdal library
# GSS_FLAVOUR - "MIT" or "Heimdal" if anything found.
# GSS_INCLUDE_DIR - the Heimdal include directory
# GSS_LIBRARIES - The libraries needed to use GSS
# GSS_LINK_DIRECTORIES - Directories to add to linker search path
# GSS_LINKER_FLAGS - Additional linker flags
# GSS_COMPILER_FLAGS - Additional compiler flags
# GSS_VERSION - This is set to version advertised by pkg-config or read from manifest.
# In case the library is found but no version info available it'll be set to "unknown"
set(_MIT_MODNAME mit-krb5-gssapi)
set(_HEIMDAL_MODNAME heimdal-gssapi)
include(CheckIncludeFile)
include(CheckIncludeFiles)
include(CheckTypeSize)
set(_GSS_ROOT_HINTS
"${GSS_ROOT_DIR}"
"$ENV{GSS_ROOT_DIR}"
)
# try to find library using system pkg-config if user didn't specify root dir
if(NOT GSS_ROOT_DIR AND NOT "$ENV{GSS_ROOT_DIR}")
if(UNIX)
find_package(PkgConfig QUIET)
pkg_search_module(_GSS_PKG ${_MIT_MODNAME} ${_HEIMDAL_MODNAME})
list(APPEND _GSS_ROOT_HINTS "${_GSS_PKG_PREFIX}")
elseif(WIN32)
list(APPEND _GSS_ROOT_HINTS "[HKEY_LOCAL_MACHINE\\SOFTWARE\\MIT\\Kerberos;InstallDir]")
endif()
endif()
if(NOT _GSS_FOUND) #not found by pkg-config. Let's take more traditional approach.
find_file(_GSS_CONFIGURE_SCRIPT
NAMES
"krb5-config"
HINTS
${_GSS_ROOT_HINTS}
PATH_SUFFIXES
bin
NO_CMAKE_PATH
NO_CMAKE_ENVIRONMENT_PATH
)
# if not found in user-supplied directories, maybe system knows better
find_file(_GSS_CONFIGURE_SCRIPT
NAMES
"krb5-config"
PATH_SUFFIXES
bin
)
if(_GSS_CONFIGURE_SCRIPT)
execute_process(
COMMAND ${_GSS_CONFIGURE_SCRIPT} "--cflags" "gssapi"
OUTPUT_VARIABLE _GSS_CFLAGS
RESULT_VARIABLE _GSS_CONFIGURE_FAILED
)
message(STATUS "CFLAGS: ${_GSS_CFLAGS}")
if(NOT _GSS_CONFIGURE_FAILED) # 0 means success
# should also work in an odd case when multiple directories are given
string(STRIP "${_GSS_CFLAGS}" _GSS_CFLAGS)
string(REGEX REPLACE " +-I" ";" _GSS_CFLAGS "${_GSS_CFLAGS}")
string(REGEX REPLACE " +-([^I][^ \\t;]*)" ";-\\1"_GSS_CFLAGS "${_GSS_CFLAGS}")
foreach(_flag ${_GSS_CFLAGS})
if(_flag MATCHES "^-I.*")
string(REGEX REPLACE "^-I" "" _val "${_flag}")
list(APPEND _GSS_INCLUDE_DIR "${_val}")
else()
list(APPEND _GSS_COMPILER_FLAGS "${_flag}")
endif()
endforeach()
endif()
execute_process(
COMMAND ${_GSS_CONFIGURE_SCRIPT} "--libs" "gssapi"
OUTPUT_VARIABLE _GSS_LIB_FLAGS
RESULT_VARIABLE _GSS_CONFIGURE_FAILED
)
message(STATUS "LDFLAGS: ${_GSS_LIB_FLAGS}")
if(NOT _GSS_CONFIGURE_FAILED) # 0 means success
# this script gives us libraries and link directories. Blah. We have to deal with it.
string(STRIP "${_GSS_LIB_FLAGS}" _GSS_LIB_FLAGS)
string(REGEX REPLACE " +-(L|l)" ";-\\1" _GSS_LIB_FLAGS "${_GSS_LIB_FLAGS}")
string(REGEX REPLACE " +-([^Ll][^ \\t;]*)" ";-\\1"_GSS_LIB_FLAGS "${_GSS_LIB_FLAGS}")
foreach(_flag ${_GSS_LIB_FLAGS})
if(_flag MATCHES "^-l.*")
string(REGEX REPLACE "^-l" "" _val "${_flag}")
list(APPEND _GSS_LIBRARIES "${_val}")
elseif(_flag MATCHES "^-L.*")
string(REGEX REPLACE "^-L" "" _val "${_flag}")
list(APPEND _GSS_LINK_DIRECTORIES "${_val}")
else()
list(APPEND _GSS_LINKER_FLAGS "${_flag}")
endif()
endforeach()
endif()
execute_process(
COMMAND ${_GSS_CONFIGURE_SCRIPT} "--version"
OUTPUT_VARIABLE _GSS_VERSION
RESULT_VARIABLE _GSS_CONFIGURE_FAILED
)
# older versions may not have the "--version" parameter. In this case we just don't care.
if(_GSS_CONFIGURE_FAILED)
set(_GSS_VERSION 0)
endif()
execute_process(
COMMAND ${_GSS_CONFIGURE_SCRIPT} "--vendor"
OUTPUT_VARIABLE _GSS_VENDOR
RESULT_VARIABLE _GSS_CONFIGURE_FAILED
)
# older versions may not have the "--vendor" parameter. In this case we just don't care.
if(_GSS_CONFIGURE_FAILED)
set(GSS_FLAVOUR "Heimdal") # most probably, shouldn't really matter
else()
if(_GSS_VENDOR MATCHES ".*H|heimdal.*")
set(GSS_FLAVOUR "Heimdal")
else()
set(GSS_FLAVOUR "MIT")
endif()
endif()
else() # either there is no config script or we are on platform that doesn't provide one (Windows?)
find_path(_GSS_INCLUDE_DIR
NAMES
"gssapi/gssapi.h"
HINTS
${_GSS_ROOT_HINTS}
PATH_SUFFIXES
include
inc
)
if(_GSS_INCLUDE_DIR) #jay, we've found something
set(CMAKE_REQUIRED_INCLUDES "${_GSS_INCLUDE_DIR}")
check_include_files( "gssapi/gssapi_generic.h;gssapi/gssapi_krb5.h" _GSS_HAVE_MIT_HEADERS)
if(_GSS_HAVE_MIT_HEADERS)
set(GSS_FLAVOUR "MIT")
else()
# prevent compiling the header - just check if we can include it
set(CMAKE_REQUIRED_DEFINITIONS "${CMAKE_REQUIRED_DEFINITIONS} -D__ROKEN_H__")
check_include_file( "roken.h" _GSS_HAVE_ROKEN_H)
check_include_file( "heimdal/roken.h" _GSS_HAVE_HEIMDAL_ROKEN_H)
if(_GSS_HAVE_ROKEN_H OR _GSS_HAVE_HEIMDAL_ROKEN_H)
set(GSS_FLAVOUR "Heimdal")
endif()
set(CMAKE_REQUIRED_DEFINITIONS "")
endif()
else()
# I'm not convinced this is the right way but this is what autotools do at the moment
find_path(_GSS_INCLUDE_DIR
NAMES
"gssapi.h"
HINTS
${_GSS_ROOT_HINTS}
PATH_SUFFIXES
include
inc
)
if(_GSS_INCLUDE_DIR)
set(GSS_FLAVOUR "Heimdal")
endif()
endif()
# if we have headers, check if we can link libraries
if(GSS_FLAVOUR)
set(_GSS_LIBDIR_SUFFIXES "")
set(_GSS_LIBDIR_HINTS ${_GSS_ROOT_HINTS})
get_filename_component(_GSS_CALCULATED_POTENTIAL_ROOT "${_GSS_INCLUDE_DIR}" PATH)
list(APPEND _GSS_LIBDIR_HINTS ${_GSS_CALCULATED_POTENTIAL_ROOT})
if(WIN32)
if(CMAKE_SIZEOF_VOID_P EQUAL 8)
list(APPEND _GSS_LIBDIR_SUFFIXES "lib/AMD64")
if(GSS_FLAVOUR STREQUAL "MIT")
set(_GSS_LIBNAME "gssapi64")
else()
set(_GSS_LIBNAME "libgssapi")
endif()
else()
list(APPEND _GSS_LIBDIR_SUFFIXES "lib/i386")
if(GSS_FLAVOUR STREQUAL "MIT")
set(_GSS_LIBNAME "gssapi32")
else()
set(_GSS_LIBNAME "libgssapi")
endif()
endif()
else()
list(APPEND _GSS_LIBDIR_SUFFIXES "lib;lib64") # those suffixes are not checked for HINTS
if(GSS_FLAVOUR STREQUAL "MIT")
set(_GSS_LIBNAME "gssapi_krb5")
else()
set(_GSS_LIBNAME "gssapi")
endif()
endif()
find_library(_GSS_LIBRARIES
NAMES
${_GSS_LIBNAME}
HINTS
${_GSS_LIBDIR_HINTS}
PATH_SUFFIXES
${_GSS_LIBDIR_SUFFIXES}
)
endif()
endif()
else()
if(_GSS_PKG_${_MIT_MODNAME}_VERSION)
set(GSS_FLAVOUR "MIT")
set(_GSS_VERSION _GSS_PKG_${_MIT_MODNAME}_VERSION)
else()
set(GSS_FLAVOUR "Heimdal")
set(_GSS_VERSION _GSS_PKG_${_MIT_HEIMDAL}_VERSION)
endif()
endif()
set(GSS_INCLUDE_DIR ${_GSS_INCLUDE_DIR})
set(GSS_LIBRARIES ${_GSS_LIBRARIES})
set(GSS_LINK_DIRECTORIES ${_GSS_LINK_DIRECTORIES})
set(GSS_LINKER_FLAGS ${_GSS_LINKER_FLAGS})
set(GSS_COMPILER_FLAGS ${_GSS_COMPILER_FLAGS})
set(GSS_VERSION ${_GSS_VERSION})
if(GSS_FLAVOUR)
if(NOT GSS_VERSION AND GSS_FLAVOUR STREQUAL "Heimdal")
if(CMAKE_SIZEOF_VOID_P EQUAL 8)
set(HEIMDAL_MANIFEST_FILE "Heimdal.Application.amd64.manifest")
else()
set(HEIMDAL_MANIFEST_FILE "Heimdal.Application.x86.manifest")
endif()
if(EXISTS "${GSS_INCLUDE_DIR}/${HEIMDAL_MANIFEST_FILE}")
file(STRINGS "${GSS_INCLUDE_DIR}/${HEIMDAL_MANIFEST_FILE}" heimdal_version_str
REGEX "^.*version=\"[0-9]\\.[^\"]+\".*$")
string(REGEX MATCH "[0-9]\\.[^\"]+"
GSS_VERSION "${heimdal_version_str}")
endif()
if(NOT GSS_VERSION)
set(GSS_VERSION "Heimdal Unknown")
endif()
elseif(NOT GSS_VERSION AND GSS_FLAVOUR STREQUAL "MIT")
get_filename_component(_MIT_VERSION "[HKEY_LOCAL_MACHINE\\SOFTWARE\\MIT\\Kerberos\\SDK\\CurrentVersion;VersionString]" NAME CACHE)
if(WIN32 AND _MIT_VERSION)
set(GSS_VERSION "${_MIT_VERSION}")
else()
set(GSS_VERSION "MIT Unknown")
endif()
endif()
endif()
include(FindPackageHandleStandardArgs)
set(_GSS_REQUIRED_VARS GSS_LIBRARIES GSS_FLAVOUR)
find_package_handle_standard_args(GSS
REQUIRED_VARS
${_GSS_REQUIRED_VARS}
VERSION_VAR
GSS_VERSION
FAIL_MESSAGE
"Could NOT find GSS, try to set the path to GSS root folder in the system variable GSS_ROOT_DIR"
)
mark_as_advanced(GSS_INCLUDE_DIR GSS_LIBRARIES)

View File

@@ -0,0 +1,35 @@
# - Try to find the libssh2 library
# Once done this will define
#
# LIBSSH2_FOUND - system has the libssh2 library
# LIBSSH2_INCLUDE_DIR - the libssh2 include directory
# LIBSSH2_LIBRARY - the libssh2 library name
if (LIBSSH2_INCLUDE_DIR AND LIBSSH2_LIBRARY)
set(LibSSH2_FIND_QUIETLY TRUE)
endif (LIBSSH2_INCLUDE_DIR AND LIBSSH2_LIBRARY)
FIND_PATH(LIBSSH2_INCLUDE_DIR libssh2.h
)
FIND_LIBRARY(LIBSSH2_LIBRARY NAMES ssh2
)
if(LIBSSH2_INCLUDE_DIR)
file(STRINGS "${LIBSSH2_INCLUDE_DIR}/libssh2.h" libssh2_version_str REGEX "^#define[\t ]+LIBSSH2_VERSION_NUM[\t ]+0x[0-9][0-9][0-9][0-9][0-9][0-9].*")
string(REGEX REPLACE "^.*LIBSSH2_VERSION_NUM[\t ]+0x([0-9][0-9]).*$" "\\1" LIBSSH2_VERSION_MAJOR "${libssh2_version_str}")
string(REGEX REPLACE "^.*LIBSSH2_VERSION_NUM[\t ]+0x[0-9][0-9]([0-9][0-9]).*$" "\\1" LIBSSH2_VERSION_MINOR "${libssh2_version_str}")
string(REGEX REPLACE "^.*LIBSSH2_VERSION_NUM[\t ]+0x[0-9][0-9][0-9][0-9]([0-9][0-9]).*$" "\\1" LIBSSH2_VERSION_PATCH "${libssh2_version_str}")
string(REGEX REPLACE "^0(.+)" "\\1" LIBSSH2_VERSION_MAJOR "${LIBSSH2_VERSION_MAJOR}")
string(REGEX REPLACE "^0(.+)" "\\1" LIBSSH2_VERSION_MINOR "${LIBSSH2_VERSION_MINOR}")
string(REGEX REPLACE "^0(.+)" "\\1" LIBSSH2_VERSION_PATCH "${LIBSSH2_VERSION_PATCH}")
set(LIBSSH2_VERSION "${LIBSSH2_VERSION_MAJOR}.${LIBSSH2_VERSION_MINOR}.${LIBSSH2_VERSION_PATCH}")
endif(LIBSSH2_INCLUDE_DIR)
include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(LibSSH2 DEFAULT_MSG LIBSSH2_INCLUDE_DIR LIBSSH2_LIBRARY )
MARK_AS_ADVANCED(LIBSSH2_INCLUDE_DIR LIBSSH2_LIBRARY LIBSSH2_VERSION_MAJOR LIBSSH2_VERSION_MINOR LIBSSH2_VERSION_PATCH LIBSSH2_VERSION)

View File

@@ -0,0 +1,13 @@
find_path(MBEDTLS_INCLUDE_DIRS mbedtls/ssl.h)
find_library(MBEDTLS_LIBRARY mbedtls)
find_library(MBEDX509_LIBRARY mbedx509)
find_library(MBEDCRYPTO_LIBRARY mbedcrypto)
set(MBEDTLS_LIBRARIES "${MBEDTLS_LIBRARY}" "${MBEDX509_LIBRARY}" "${MBEDCRYPTO_LIBRARY}")
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(MBEDTLS DEFAULT_MSG
MBEDTLS_INCLUDE_DIRS MBEDTLS_LIBRARY MBEDX509_LIBRARY MBEDCRYPTO_LIBRARY)
mark_as_advanced(MBEDTLS_INCLUDE_DIRS MBEDTLS_LIBRARY MBEDX509_LIBRARY MBEDCRYPTO_LIBRARY)

View File

@@ -0,0 +1,18 @@
include(FindPackageHandleStandardArgs)
find_path(NGHTTP2_INCLUDE_DIR "nghttp2/nghttp2.h")
find_library(NGHTTP2_LIBRARY NAMES nghttp2)
find_package_handle_standard_args(NGHTTP2
FOUND_VAR
NGHTTP2_FOUND
REQUIRED_VARS
NGHTTP2_LIBRARY
NGHTTP2_INCLUDE_DIR
FAIL_MESSAGE
"Could NOT find NGHTTP2"
)
set(NGHTTP2_INCLUDE_DIRS ${NGHTTP2_INCLUDE_DIR} )
set(NGHTTP2_LIBRARIES ${NGHTTP2_LIBRARY})

View File

@@ -0,0 +1,95 @@
#File defines convenience macros for available feature testing
# This macro checks if the symbol exists in the library and if it
# does, it prepends library to the list. It is intended to be called
# multiple times with a sequence of possibly dependent libraries in
# order of least-to-most-dependent. Some libraries depend on others
# to link correctly.
macro(CHECK_LIBRARY_EXISTS_CONCAT LIBRARY SYMBOL VARIABLE)
check_library_exists("${LIBRARY};${CURL_LIBS}" ${SYMBOL} "${CMAKE_LIBRARY_PATH}"
${VARIABLE})
if(${VARIABLE})
set(CURL_LIBS ${LIBRARY} ${CURL_LIBS})
endif(${VARIABLE})
endmacro(CHECK_LIBRARY_EXISTS_CONCAT)
# Check if header file exists and add it to the list.
# This macro is intended to be called multiple times with a sequence of
# possibly dependent header files. Some headers depend on others to be
# compiled correctly.
macro(CHECK_INCLUDE_FILE_CONCAT FILE VARIABLE)
check_include_files("${CURL_INCLUDES};${FILE}" ${VARIABLE})
if(${VARIABLE})
set(CURL_INCLUDES ${CURL_INCLUDES} ${FILE})
set(CURL_TEST_DEFINES "${CURL_TEST_DEFINES} -D${VARIABLE}")
endif(${VARIABLE})
endmacro(CHECK_INCLUDE_FILE_CONCAT)
# For other curl specific tests, use this macro.
macro(CURL_INTERNAL_TEST CURL_TEST)
if(NOT DEFINED "${CURL_TEST}")
set(MACRO_CHECK_FUNCTION_DEFINITIONS
"-D${CURL_TEST} ${CURL_TEST_DEFINES} ${CMAKE_REQUIRED_FLAGS}")
if(CMAKE_REQUIRED_LIBRARIES)
set(CURL_TEST_ADD_LIBRARIES
"-DLINK_LIBRARIES:STRING=${CMAKE_REQUIRED_LIBRARIES}")
endif(CMAKE_REQUIRED_LIBRARIES)
message(STATUS "Performing Curl Test ${CURL_TEST}")
try_compile(${CURL_TEST}
${CMAKE_BINARY_DIR}
${CMAKE_CURRENT_SOURCE_DIR}/CMake/CurlTests.c
CMAKE_FLAGS -DCOMPILE_DEFINITIONS:STRING=${MACRO_CHECK_FUNCTION_DEFINITIONS}
"${CURL_TEST_ADD_LIBRARIES}"
OUTPUT_VARIABLE OUTPUT)
if(${CURL_TEST})
set(${CURL_TEST} 1 CACHE INTERNAL "Curl test ${FUNCTION}")
message(STATUS "Performing Curl Test ${CURL_TEST} - Success")
file(APPEND ${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeOutput.log
"Performing Curl Test ${CURL_TEST} passed with the following output:\n"
"${OUTPUT}\n")
else(${CURL_TEST})
message(STATUS "Performing Curl Test ${CURL_TEST} - Failed")
set(${CURL_TEST} "" CACHE INTERNAL "Curl test ${FUNCTION}")
file(APPEND ${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeError.log
"Performing Curl Test ${CURL_TEST} failed with the following output:\n"
"${OUTPUT}\n")
endif(${CURL_TEST})
endif()
endmacro(CURL_INTERNAL_TEST)
macro(CURL_INTERNAL_TEST_RUN CURL_TEST)
if(NOT DEFINED "${CURL_TEST}_COMPILE")
set(MACRO_CHECK_FUNCTION_DEFINITIONS
"-D${CURL_TEST} ${CMAKE_REQUIRED_FLAGS}")
if(CMAKE_REQUIRED_LIBRARIES)
set(CURL_TEST_ADD_LIBRARIES
"-DLINK_LIBRARIES:STRING=${CMAKE_REQUIRED_LIBRARIES}")
endif(CMAKE_REQUIRED_LIBRARIES)
message(STATUS "Performing Curl Test ${CURL_TEST}")
try_run(${CURL_TEST} ${CURL_TEST}_COMPILE
${CMAKE_BINARY_DIR}
${CMAKE_CURRENT_SOURCE_DIR}/CMake/CurlTests.c
CMAKE_FLAGS -DCOMPILE_DEFINITIONS:STRING=${MACRO_CHECK_FUNCTION_DEFINITIONS}
"${CURL_TEST_ADD_LIBRARIES}"
OUTPUT_VARIABLE OUTPUT)
if(${CURL_TEST}_COMPILE AND NOT ${CURL_TEST})
set(${CURL_TEST} 1 CACHE INTERNAL "Curl test ${FUNCTION}")
message(STATUS "Performing Curl Test ${CURL_TEST} - Success")
else(${CURL_TEST}_COMPILE AND NOT ${CURL_TEST})
message(STATUS "Performing Curl Test ${CURL_TEST} - Failed")
set(${CURL_TEST} "" CACHE INTERNAL "Curl test ${FUNCTION}")
file(APPEND "${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeError.log"
"Performing Curl Test ${CURL_TEST} failed with the following output:\n"
"${OUTPUT}")
if(${CURL_TEST}_COMPILE)
file(APPEND
"${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeError.log"
"There was a problem running this test\n")
endif(${CURL_TEST}_COMPILE)
file(APPEND "${CMAKE_BINARY_DIR}${CMAKE_FILES_DIRECTORY}/CMakeError.log"
"\n\n")
endif(${CURL_TEST}_COMPILE AND NOT ${CURL_TEST})
endif()
endmacro(CURL_INTERNAL_TEST_RUN)

View File

@@ -0,0 +1,232 @@
include(CheckCSourceCompiles)
# The begin of the sources (macros and includes)
set(_source_epilogue "#undef inline")
macro(add_header_include check header)
if(${check})
set(_source_epilogue "${_source_epilogue}\n#include <${header}>")
endif(${check})
endmacro(add_header_include)
set(signature_call_conv)
if(HAVE_WINDOWS_H)
add_header_include(HAVE_WINSOCK2_H "winsock2.h")
add_header_include(HAVE_WINDOWS_H "windows.h")
add_header_include(HAVE_WINSOCK_H "winsock.h")
set(_source_epilogue
"${_source_epilogue}\n#ifndef WIN32_LEAN_AND_MEAN\n#define WIN32_LEAN_AND_MEAN\n#endif")
set(signature_call_conv "PASCAL")
if(HAVE_LIBWS2_32)
set(CMAKE_REQUIRED_LIBRARIES ws2_32)
endif()
else(HAVE_WINDOWS_H)
add_header_include(HAVE_SYS_TYPES_H "sys/types.h")
add_header_include(HAVE_SYS_SOCKET_H "sys/socket.h")
endif(HAVE_WINDOWS_H)
check_c_source_compiles("${_source_epilogue}
int main(void) {
recv(0, 0, 0, 0);
return 0;
}" curl_cv_recv)
if(curl_cv_recv)
if(NOT DEFINED curl_cv_func_recv_args OR "${curl_cv_func_recv_args}" STREQUAL "unknown")
foreach(recv_retv "int" "ssize_t" )
foreach(recv_arg1 "int" "ssize_t" "SOCKET")
foreach(recv_arg2 "void *" "char *")
foreach(recv_arg3 "size_t" "int" "socklen_t" "unsigned int")
foreach(recv_arg4 "int" "unsigned int")
if(NOT curl_cv_func_recv_done)
unset(curl_cv_func_recv_test CACHE)
check_c_source_compiles("
${_source_epilogue}
extern ${recv_retv} ${signature_call_conv}
recv(${recv_arg1}, ${recv_arg2}, ${recv_arg3}, ${recv_arg4});
int main(void) {
${recv_arg1} s=0;
${recv_arg2} buf=0;
${recv_arg3} len=0;
${recv_arg4} flags=0;
${recv_retv} res = recv(s, buf, len, flags);
(void) res;
return 0;
}"
curl_cv_func_recv_test)
message(STATUS
"Tested: ${recv_retv} recv(${recv_arg1}, ${recv_arg2}, ${recv_arg3}, ${recv_arg4})")
if(curl_cv_func_recv_test)
set(curl_cv_func_recv_args
"${recv_arg1},${recv_arg2},${recv_arg3},${recv_arg4},${recv_retv}")
set(RECV_TYPE_ARG1 "${recv_arg1}")
set(RECV_TYPE_ARG2 "${recv_arg2}")
set(RECV_TYPE_ARG3 "${recv_arg3}")
set(RECV_TYPE_ARG4 "${recv_arg4}")
set(RECV_TYPE_RETV "${recv_retv}")
set(HAVE_RECV 1)
set(curl_cv_func_recv_done 1)
endif(curl_cv_func_recv_test)
endif(NOT curl_cv_func_recv_done)
endforeach(recv_arg4)
endforeach(recv_arg3)
endforeach(recv_arg2)
endforeach(recv_arg1)
endforeach(recv_retv)
else()
string(REGEX REPLACE "^([^,]*),[^,]*,[^,]*,[^,]*,[^,]*$" "\\1" RECV_TYPE_ARG1 "${curl_cv_func_recv_args}")
string(REGEX REPLACE "^[^,]*,([^,]*),[^,]*,[^,]*,[^,]*$" "\\1" RECV_TYPE_ARG2 "${curl_cv_func_recv_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,([^,]*),[^,]*,[^,]*$" "\\1" RECV_TYPE_ARG3 "${curl_cv_func_recv_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,([^,]*),[^,]*$" "\\1" RECV_TYPE_ARG4 "${curl_cv_func_recv_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,[^,]*,([^,]*)$" "\\1" RECV_TYPE_RETV "${curl_cv_func_recv_args}")
endif()
if("${curl_cv_func_recv_args}" STREQUAL "unknown")
message(FATAL_ERROR "Cannot find proper types to use for recv args")
endif("${curl_cv_func_recv_args}" STREQUAL "unknown")
else(curl_cv_recv)
message(FATAL_ERROR "Unable to link function recv")
endif(curl_cv_recv)
set(curl_cv_func_recv_args "${curl_cv_func_recv_args}" CACHE INTERNAL "Arguments for recv")
set(HAVE_RECV 1)
check_c_source_compiles("${_source_epilogue}
int main(void) {
send(0, 0, 0, 0);
return 0;
}" curl_cv_send)
if(curl_cv_send)
if(NOT DEFINED curl_cv_func_send_args OR "${curl_cv_func_send_args}" STREQUAL "unknown")
foreach(send_retv "int" "ssize_t" )
foreach(send_arg1 "int" "ssize_t" "SOCKET")
foreach(send_arg2 "const void *" "void *" "char *" "const char *")
foreach(send_arg3 "size_t" "int" "socklen_t" "unsigned int")
foreach(send_arg4 "int" "unsigned int")
if(NOT curl_cv_func_send_done)
unset(curl_cv_func_send_test CACHE)
check_c_source_compiles("
${_source_epilogue}
extern ${send_retv} ${signature_call_conv}
send(${send_arg1}, ${send_arg2}, ${send_arg3}, ${send_arg4});
int main(void) {
${send_arg1} s=0;
${send_arg2} buf=0;
${send_arg3} len=0;
${send_arg4} flags=0;
${send_retv} res = send(s, buf, len, flags);
(void) res;
return 0;
}"
curl_cv_func_send_test)
message(STATUS
"Tested: ${send_retv} send(${send_arg1}, ${send_arg2}, ${send_arg3}, ${send_arg4})")
if(curl_cv_func_send_test)
string(REGEX REPLACE "(const) .*" "\\1" send_qual_arg2 "${send_arg2}")
string(REGEX REPLACE "const (.*)" "\\1" send_arg2 "${send_arg2}")
set(curl_cv_func_send_args
"${send_arg1},${send_arg2},${send_arg3},${send_arg4},${send_retv},${send_qual_arg2}")
set(SEND_TYPE_ARG1 "${send_arg1}")
set(SEND_TYPE_ARG2 "${send_arg2}")
set(SEND_TYPE_ARG3 "${send_arg3}")
set(SEND_TYPE_ARG4 "${send_arg4}")
set(SEND_TYPE_RETV "${send_retv}")
set(HAVE_SEND 1)
set(curl_cv_func_send_done 1)
endif(curl_cv_func_send_test)
endif(NOT curl_cv_func_send_done)
endforeach(send_arg4)
endforeach(send_arg3)
endforeach(send_arg2)
endforeach(send_arg1)
endforeach(send_retv)
else()
string(REGEX REPLACE "^([^,]*),[^,]*,[^,]*,[^,]*,[^,]*,[^,]*$" "\\1" SEND_TYPE_ARG1 "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,([^,]*),[^,]*,[^,]*,[^,]*,[^,]*$" "\\1" SEND_TYPE_ARG2 "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,([^,]*),[^,]*,[^,]*,[^,]*$" "\\1" SEND_TYPE_ARG3 "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,([^,]*),[^,]*,[^,]*$" "\\1" SEND_TYPE_ARG4 "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,[^,]*,([^,]*),[^,]*$" "\\1" SEND_TYPE_RETV "${curl_cv_func_send_args}")
string(REGEX REPLACE "^[^,]*,[^,]*,[^,]*,[^,]*,[^,]*,([^,]*)$" "\\1" SEND_QUAL_ARG2 "${curl_cv_func_send_args}")
endif()
if("${curl_cv_func_send_args}" STREQUAL "unknown")
message(FATAL_ERROR "Cannot find proper types to use for send args")
endif("${curl_cv_func_send_args}" STREQUAL "unknown")
set(SEND_QUAL_ARG2 "const")
else(curl_cv_send)
message(FATAL_ERROR "Unable to link function send")
endif(curl_cv_send)
set(curl_cv_func_send_args "${curl_cv_func_send_args}" CACHE INTERNAL "Arguments for send")
set(HAVE_SEND 1)
check_c_source_compiles("${_source_epilogue}
int main(void) {
int flag = MSG_NOSIGNAL;
(void)flag;
return 0;
}" HAVE_MSG_NOSIGNAL)
if(NOT HAVE_WINDOWS_H)
add_header_include(HAVE_SYS_TIME_H "sys/time.h")
add_header_include(TIME_WITH_SYS_TIME "time.h")
add_header_include(HAVE_TIME_H "time.h")
endif()
check_c_source_compiles("${_source_epilogue}
int main(void) {
struct timeval ts;
ts.tv_sec = 0;
ts.tv_usec = 0;
(void)ts;
return 0;
}" HAVE_STRUCT_TIMEVAL)
include(CheckCSourceRuns)
# See HAVE_POLL in CMakeLists.txt for why poll is disabled on macOS
if(NOT APPLE)
set(CMAKE_REQUIRED_FLAGS)
if(HAVE_SYS_POLL_H)
set(CMAKE_REQUIRED_FLAGS "-DHAVE_SYS_POLL_H")
endif(HAVE_SYS_POLL_H)
check_c_source_runs("
#ifdef HAVE_SYS_POLL_H
# include <sys/poll.h>
#endif
int main(void) {
return poll((void *)0, 0, 10 /*ms*/);
}" HAVE_POLL_FINE)
endif()
set(HAVE_SIG_ATOMIC_T 1)
set(CMAKE_REQUIRED_FLAGS)
if(HAVE_SIGNAL_H)
set(CMAKE_REQUIRED_FLAGS "-DHAVE_SIGNAL_H")
set(CMAKE_EXTRA_INCLUDE_FILES "signal.h")
endif(HAVE_SIGNAL_H)
check_type_size("sig_atomic_t" SIZEOF_SIG_ATOMIC_T)
if(HAVE_SIZEOF_SIG_ATOMIC_T)
check_c_source_compiles("
#ifdef HAVE_SIGNAL_H
# include <signal.h>
#endif
int main(void) {
static volatile sig_atomic_t dummy = 0;
(void)dummy;
return 0;
}" HAVE_SIG_ATOMIC_T_NOT_VOLATILE)
if(NOT HAVE_SIG_ATOMIC_T_NOT_VOLATILE)
set(HAVE_SIG_ATOMIC_T_VOLATILE 1)
endif(NOT HAVE_SIG_ATOMIC_T_NOT_VOLATILE)
endif(HAVE_SIZEOF_SIG_ATOMIC_T)
if(HAVE_WINDOWS_H)
set(CMAKE_EXTRA_INCLUDE_FILES winsock2.h)
else()
set(CMAKE_EXTRA_INCLUDE_FILES)
if(HAVE_SYS_SOCKET_H)
set(CMAKE_EXTRA_INCLUDE_FILES sys/socket.h)
endif(HAVE_SYS_SOCKET_H)
endif()
check_type_size("struct sockaddr_storage" SIZEOF_STRUCT_SOCKADDR_STORAGE)
if(HAVE_SIZEOF_STRUCT_SOCKADDR_STORAGE)
set(HAVE_STRUCT_SOCKADDR_STORAGE 1)
endif(HAVE_SIZEOF_STRUCT_SOCKADDR_STORAGE)
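The RECV_TYPE_*/SEND_TYPE_* results probed above only become useful once they are expanded into a generated configuration header. A minimal sketch of that pattern, assuming a hypothetical template file name (the real template is not shown in this diff):

# Hypothetical template excerpt (config.h.cmake):
#   #cmakedefine HAVE_RECV 1
#   #cmakedefine HAVE_SEND 1
#   #define RECV_TYPE_ARG1 ${RECV_TYPE_ARG1}
#   #define SEND_QUAL_ARG2 ${SEND_QUAL_ARG2}
# Expand the template after the checks above have populated the variables:
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/config.h.cmake
               ${CMAKE_CURRENT_BINARY_DIR}/config.h)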

View File

@@ -0,0 +1,125 @@
if(NOT UNIX)
if(WIN32)
set(HAVE_LIBDL 0)
set(HAVE_LIBUCB 0)
set(HAVE_LIBSOCKET 0)
set(NOT_NEED_LIBNSL 0)
set(HAVE_LIBNSL 0)
set(HAVE_GETHOSTNAME 1)
set(HAVE_LIBZ 0)
set(HAVE_LIBCRYPTO 0)
set(HAVE_DLOPEN 0)
set(HAVE_ALLOCA_H 0)
set(HAVE_ARPA_INET_H 0)
set(HAVE_DLFCN_H 0)
set(HAVE_FCNTL_H 1)
set(HAVE_INTTYPES_H 0)
set(HAVE_IO_H 1)
set(HAVE_MALLOC_H 1)
set(HAVE_MEMORY_H 1)
set(HAVE_NETDB_H 0)
set(HAVE_NETINET_IF_ETHER_H 0)
set(HAVE_NETINET_IN_H 0)
set(HAVE_NET_IF_H 0)
set(HAVE_PROCESS_H 1)
set(HAVE_PWD_H 0)
set(HAVE_SETJMP_H 1)
set(HAVE_SGTTY_H 0)
set(HAVE_SIGNAL_H 1)
set(HAVE_SOCKIO_H 0)
set(HAVE_STDINT_H 0)
set(HAVE_STDLIB_H 1)
set(HAVE_STRINGS_H 0)
set(HAVE_STRING_H 1)
set(HAVE_SYS_PARAM_H 0)
set(HAVE_SYS_POLL_H 0)
set(HAVE_SYS_SELECT_H 0)
set(HAVE_SYS_SOCKET_H 0)
set(HAVE_SYS_SOCKIO_H 0)
set(HAVE_SYS_STAT_H 1)
set(HAVE_SYS_TIME_H 0)
set(HAVE_SYS_TYPES_H 1)
set(HAVE_SYS_UTIME_H 1)
set(HAVE_TERMIOS_H 0)
set(HAVE_TERMIO_H 0)
set(HAVE_TIME_H 1)
set(HAVE_UNISTD_H 0)
set(HAVE_UTIME_H 0)
set(HAVE_X509_H 0)
set(HAVE_ZLIB_H 0)
set(HAVE_SIZEOF_LONG_DOUBLE 1)
set(SIZEOF_LONG_DOUBLE 8)
set(HAVE_SOCKET 1)
set(HAVE_POLL 0)
set(HAVE_SELECT 1)
set(HAVE_STRDUP 1)
set(HAVE_STRSTR 1)
set(HAVE_STRTOK_R 0)
set(HAVE_STRFTIME 1)
set(HAVE_UNAME 0)
set(HAVE_STRCASECMP 0)
set(HAVE_STRICMP 1)
set(HAVE_STRCMPI 1)
set(HAVE_GETHOSTBYADDR 1)
set(HAVE_GETTIMEOFDAY 0)
set(HAVE_INET_ADDR 1)
set(HAVE_INET_NTOA 1)
set(HAVE_INET_NTOA_R 0)
set(HAVE_TCGETATTR 0)
set(HAVE_TCSETATTR 0)
set(HAVE_PERROR 1)
set(HAVE_CLOSESOCKET 1)
set(HAVE_SETVBUF 0)
set(HAVE_SIGSETJMP 0)
set(HAVE_GETPASS_R 0)
set(HAVE_STRLCAT 0)
set(HAVE_GETPWUID 0)
set(HAVE_GETEUID 0)
set(HAVE_UTIME 1)
set(HAVE_RAND_EGD 0)
set(HAVE_RAND_SCREEN 0)
set(HAVE_RAND_STATUS 0)
set(HAVE_GMTIME_R 0)
set(HAVE_LOCALTIME_R 0)
set(HAVE_GETHOSTBYADDR_R 0)
set(HAVE_GETHOSTBYNAME_R 0)
set(HAVE_SIGNAL_FUNC 1)
set(HAVE_SIGNAL_MACRO 0)
set(HAVE_GETHOSTBYADDR_R_5 0)
set(HAVE_GETHOSTBYADDR_R_5_REENTRANT 0)
set(HAVE_GETHOSTBYADDR_R_7 0)
set(HAVE_GETHOSTBYADDR_R_7_REENTRANT 0)
set(HAVE_GETHOSTBYADDR_R_8 0)
set(HAVE_GETHOSTBYADDR_R_8_REENTRANT 0)
set(HAVE_GETHOSTBYNAME_R_3 0)
set(HAVE_GETHOSTBYNAME_R_3_REENTRANT 0)
set(HAVE_GETHOSTBYNAME_R_5 0)
set(HAVE_GETHOSTBYNAME_R_5_REENTRANT 0)
set(HAVE_GETHOSTBYNAME_R_6 0)
set(HAVE_GETHOSTBYNAME_R_6_REENTRANT 0)
set(TIME_WITH_SYS_TIME 0)
set(HAVE_O_NONBLOCK 0)
set(HAVE_IN_ADDR_T 0)
set(HAVE_INET_NTOA_R_DECL 0)
set(HAVE_INET_NTOA_R_DECL_REENTRANT 0)
if(ENABLE_IPV6)
set(HAVE_GETADDRINFO 1)
else()
set(HAVE_GETADDRINFO 0)
endif()
set(STDC_HEADERS 1)
set(RETSIGTYPE_TEST 1)
set(HAVE_SIGACTION 0)
set(HAVE_MACRO_SIGSETJMP 0)
else(WIN32)
message("This file should be included on Windows platform only")
endif(WIN32)
endif(NOT UNIX)
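These hard-coded results only pay off if the file is included before the usual feature probes run, so the check_* modules find the result variables already defined and skip their compile tests. A minimal sketch of that usage, assuming this is the WindowsCache.cmake file listed in Makefile.am further down:

# Pre-seed check results on Windows so the probes below become no-ops there.
if(WIN32)
  include(${CMAKE_CURRENT_SOURCE_DIR}/CMake/Platforms/WindowsCache.cmake)
endif()
include(CheckIncludeFile)
check_include_file("fcntl.h" HAVE_FCNTL_H)   # skipped where HAVE_FCNTL_H is already set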

View File

@@ -0,0 +1,44 @@
# File containing various utilities
# Converts a CMake list to a string containing elements separated by spaces
function(TO_LIST_SPACES _LIST_NAME OUTPUT_VAR)
set(NEW_LIST_SPACE)
foreach(ITEM ${${_LIST_NAME}})
set(NEW_LIST_SPACE "${NEW_LIST_SPACE} ${ITEM}")
endforeach()
string(STRIP ${NEW_LIST_SPACE} NEW_LIST_SPACE)
set(${OUTPUT_VAR} "${NEW_LIST_SPACE}" PARENT_SCOPE)
endfunction()
# Appends a list of items to a string which is a space-separated list, if they don't already exist.
function(LIST_SPACES_APPEND_ONCE LIST_NAME)
string(REPLACE " " ";" _LIST ${${LIST_NAME}})
list(APPEND _LIST ${ARGN})
list(REMOVE_DUPLICATES _LIST)
to_list_spaces(_LIST NEW_LIST_SPACE)
set(${LIST_NAME} "${NEW_LIST_SPACE}" PARENT_SCOPE)
endfunction()
# Convenience function that does the same as LIST(FIND ...) but with a TRUE/FALSE return value.
# Ex: IN_STR_LIST(MY_LIST "Searched item" WAS_FOUND)
function(IN_STR_LIST LIST_NAME ITEM_SEARCHED RETVAL)
list(FIND ${LIST_NAME} ${ITEM_SEARCHED} FIND_POS)
if(${FIND_POS} EQUAL -1)
set(${RETVAL} FALSE PARENT_SCOPE)
else()
set(${RETVAL} TRUE PARENT_SCOPE)
endif()
endfunction()
# Returns a list of arguments that evaluate to true
function(collect_true output_var output_count_var)
set(${output_var})
foreach(option_var IN LISTS ARGN)
if(${option_var})
list(APPEND ${output_var} ${option_var})
endif()
endforeach()
set(${output_var} ${${output_var}} PARENT_SCOPE)
list(LENGTH ${output_var} ${output_count_var})
set(${output_count_var} ${${output_count_var}} PARENT_SCOPE)
endfunction()
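A short usage sketch of the helpers above; the variable names are hypothetical, and note that the first argument of each helper is the *name* of a variable, not its value:

set(MY_DEFS "FOO;BAR;BAZ")                      # a normal CMake list
to_list_spaces(MY_DEFS MY_DEFS_STR)             # MY_DEFS_STR -> "FOO BAR BAZ"

set(MY_FLAGS "-O2 -g")                          # a space-separated string
list_spaces_append_once(MY_FLAGS "-g" "-Wall")  # MY_FLAGS -> "-O2 -g -Wall"

in_str_list(MY_DEFS "BAR" BAR_FOUND)            # BAR_FOUND -> TRUE

set(USE_A ON)
set(USE_B OFF)
collect_true(enabled_opts enabled_count USE_A USE_B)  # enabled_opts -> "USE_A", enabled_count -> 1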

File diff suppressed because it is too large

View File

@@ -0,0 +1,22 @@
COPYRIGHT AND PERMISSION NOTICE
Copyright (c) 1996 - 2017, Daniel Stenberg, <daniel@haxx.se>, and many
contributors, see the THANKS file.
All rights reserved.
Permission to use, copy, modify, and distribute this software for any purpose
with or without fee is hereby granted, provided that the above copyright
notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN
NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
OR OTHER DEALINGS IN THE SOFTWARE.
Except as contained in this notice, the name of a copyright holder shall not
be used in advertising or otherwise to promote the sale, use or other dealings
in this Software without prior written authorization of the copyright holder.

View File

@@ -0,0 +1,146 @@
#!/bin/bash
# This script performs all of the steps needed to build a
# universal binary libcurl.framework for Mac OS X 10.4 or greater.
#
# Hendrik Visage:
# Generalizations added, since Snow Leopard (10.6) does not include
# the 10.4u SDK.
#
# Also note:
# 10.5 is the *ONLY* SDK that supports PPC64 :( -- 10.6 does not have ppc64 support
# If you need PPC64 support then change the setting below to 1
PPC64_NEEDED=0
# Apple does not support building for PPC anymore in Xcode 4 and later.
# If you're using Xcode 3 or earlier and need PPC support, then change
# the setting below to 1
PPC_NEEDED=0
# For me the default is to develop for the platform I am on, and if you
# desire compatibility with older versions then change USE_OLD to 1 :)
USE_OLD=0
VERSION=`/usr/bin/sed -ne 's/^#define LIBCURL_VERSION "\(.*\)"/\1/p' include/curl/curlver.h`
FRAMEWORK_VERSION=Versions/Release-$VERSION
# I also wanted to "copy over" the system, which is the reason I added the
# version to Versions/Release-7.20.1 etc.
# now a simple rsync -vaP libcurl.framework /Library/Frameworks will install it
# and setup the right paths to this version, leaving the system version
# "intact", so you can "fix" it later with the links to Versions/A/...
DEVELOPER_PATH=`xcode-select --print-path`
# Around Xcode 4.3, SDKs were moved from the Developer folder into the
# MacOSX.platform folder
if test -d "$DEVELOPER_PATH/Platforms/MacOSX.platform/Developer/SDKs"; then
SDK_PATH="$DEVELOPER_PATH/Platforms/MacOSX.platform/Developer/SDKs"
else
SDK_PATH="$DEVELOPER_PATH/SDKs";
fi
OLD_SDK=`ls $SDK_PATH|head -1`
NEW_SDK=`ls -r $SDK_PATH|head -1`
if test "0"$USE_OLD -gt 0
then
SDK32=$OLD_SDK
else
SDK32=$NEW_SDK
fi
MACVER=`echo $SDK32|sed -e s/[a-zA-Z]//g -e s/.\$//`
SDK32_DIR=$SDK_PATH/$SDK32
MINVER32='-mmacosx-version-min='$MACVER
if test $PPC_NEEDED -gt 0; then
ARCHES32='-arch i386 -arch ppc'
else
ARCHES32='-arch i386'
fi
if test $PPC64_NEEDED -gt 0
then
SDK64=10.5
ARCHES64='-arch x86_64 -arch ppc64'
SDK64=`ls $SDK_PATH|grep 10.5|head -1`
else
ARCHES64='-arch x86_64'
#We "know" that 10.4 and earlier do not support 64bit
OLD_SDK64=`ls $SDK_PATH|egrep -v "10.[0-4]"|head -1`
NEW_SDK64=`ls -r $SDK_PATH|egrep -v "10.[0-4][^0-9]" | head -1`
if test $USE_OLD -gt 0
then
SDK64=$OLD_SDK64
else
SDK64=$NEW_SDK64
fi
fi
SDK64_DIR=$SDK_PATH/$SDK64
MACVER64=`echo $SDK64|sed -e s/[a-zA-Z]//g -e s/.\$//`
MINVER64='-mmacosx-version-min='$MACVER64
if test ! -z $SDK32; then
echo "----Configuring libcurl for 32 bit universal framework..."
make clean
./configure --disable-dependency-tracking --disable-static --with-gssapi --with-darwinssl \
CFLAGS="-Os -isysroot $SDK32_DIR $ARCHES32" \
LDFLAGS="-Wl,-syslibroot,$SDK32_DIR $ARCHES32 -Wl,-headerpad_max_install_names" \
CC=$CC
echo "----Building 32 bit libcurl..."
make -j `sysctl -n hw.logicalcpu_max`
echo "----Creating 32 bit framework..."
rm -r libcurl.framework
mkdir -p libcurl.framework/${FRAMEWORK_VERSION}/Resources
cp lib/.libs/libcurl.dylib libcurl.framework/${FRAMEWORK_VERSION}/libcurl
install_name_tool -id @rpath/libcurl.framework/${FRAMEWORK_VERSION}/libcurl libcurl.framework/${FRAMEWORK_VERSION}/libcurl
/usr/bin/sed -e "s/7\.12\.3/$VERSION/" lib/libcurl.plist >libcurl.framework/${FRAMEWORK_VERSION}/Resources/Info.plist
mkdir -p libcurl.framework/${FRAMEWORK_VERSION}/Headers/curl
cp include/curl/*.h libcurl.framework/${FRAMEWORK_VERSION}/Headers/curl
pushd libcurl.framework
ln -fs ${FRAMEWORK_VERSION}/libcurl libcurl
ln -fs ${FRAMEWORK_VERSION}/Resources Resources
ln -fs ${FRAMEWORK_VERSION}/Headers Headers
cd Versions
ln -fs $(basename "${FRAMEWORK_VERSION}") Current
echo Testing for SDK64
if test -d $SDK64_DIR; then
echo entering...
popd
make clean
echo "----Configuring libcurl for 64 bit universal framework..."
./configure --disable-dependency-tracking --disable-static --with-gssapi --with-darwinssl \
CFLAGS="-Os -isysroot $SDK64_DIR $ARCHES64" \
LDFLAGS="-Wl,-syslibroot,$SDK64_DIR $ARCHES64 -Wl,-headerpad_max_install_names" \
CC=$CC
echo "----Building 64 bit libcurl..."
make -j `sysctl -n hw.logicalcpu_max`
echo "----Appending 64 bit framework to 32 bit framework..."
cp lib/.libs/libcurl.dylib libcurl.framework/${FRAMEWORK_VERSION}/libcurl64
install_name_tool -id @rpath/libcurl.framework/${FRAMEWORK_VERSION}/libcurl libcurl.framework/${FRAMEWORK_VERSION}/libcurl64
cp libcurl.framework/${FRAMEWORK_VERSION}/libcurl libcurl.framework/${FRAMEWORK_VERSION}/libcurl32
pwd
lipo libcurl.framework/${FRAMEWORK_VERSION}/libcurl32 libcurl.framework/${FRAMEWORK_VERSION}/libcurl64 -create -output libcurl.framework/${FRAMEWORK_VERSION}/libcurl
rm libcurl.framework/${FRAMEWORK_VERSION}/libcurl32 libcurl.framework/${FRAMEWORK_VERSION}/libcurl64
cp libcurl.framework/${FRAMEWORK_VERSION}/Headers/curl/curlbuild.h libcurl.framework/${FRAMEWORK_VERSION}/Headers/curl/curlbuild32.h
cp include/curl/curlbuild.h libcurl.framework/${FRAMEWORK_VERSION}/Headers/curl/curlbuild64.h
cat >libcurl.framework/${FRAMEWORK_VERSION}/Headers/curl/curlbuild.h <<EOF
#ifdef __LP64__
#include "curl/curlbuild64.h"
#else
#include "curl/curlbuild32.h"
#endif
EOF
fi
pwd
lipo -info libcurl.framework/${FRAMEWORK_VERSION}/libcurl
echo "libcurl.framework is built and can now be included in other projects."
echo "Copy libcurl.framework to your bundle's Contents/Frameworks folder, ~/Library/Frameworks or /Library/Frameworks."
else
echo "Building libcurl.framework requires Mac OS X 10.4 or later with the MacOSX10.4/5/6 SDK installed."
fi

View File

@@ -0,0 +1,617 @@
#***************************************************************************
# _ _ ____ _
# Project ___| | | | _ \| |
# / __| | | | |_) | |
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
# Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.haxx.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################
AUTOMAKE_OPTIONS = foreign
ACLOCAL_AMFLAGS = -I m4
CMAKE_DIST = CMakeLists.txt CMake/CMakeConfigurableFile.in \
CMake/CurlTests.c CMake/FindGSS.cmake CMake/OtherTests.cmake \
CMake/Platforms/WindowsCache.cmake CMake/Utilities.cmake \
include/curl/curlbuild.h.cmake CMake/Macros.cmake \
CMake/CurlSymbolHiding.cmake CMake/FindCARES.cmake \
CMake/FindLibSSH2.cmake CMake/FindNGHTTP2.cmake \
CMake/FindMbedTLS.cmake
VC6_LIBTMPL = projects/Windows/VC6/lib/libcurl.tmpl
VC6_LIBDSP = projects/Windows/VC6/lib/libcurl.dsp.dist
VC6_LIBDSP_DEPS = $(VC6_LIBTMPL) Makefile.am lib/Makefile.inc
VC6_SRCTMPL = projects/Windows/VC6/src/curl.tmpl
VC6_SRCDSP = projects/Windows/VC6/src/curl.dsp.dist
VC6_SRCDSP_DEPS = $(VC6_SRCTMPL) Makefile.am src/Makefile.inc
VC7_LIBTMPL = projects/Windows/VC7/lib/libcurl.tmpl
VC7_LIBVCPROJ = projects/Windows/VC7/lib/libcurl.vcproj.dist
VC7_LIBVCPROJ_DEPS = $(VC7_LIBTMPL) Makefile.am lib/Makefile.inc
VC7_SRCTMPL = projects/Windows/VC7/src/curl.tmpl
VC7_SRCVCPROJ = projects/Windows/VC7/src/curl.vcproj.dist
VC7_SRCVCPROJ_DEPS = $(VC7_SRCTMPL) Makefile.am src/Makefile.inc
VC71_LIBTMPL = projects/Windows/VC7.1/lib/libcurl.tmpl
VC71_LIBVCPROJ = projects/Windows/VC7.1/lib/libcurl.vcproj.dist
VC71_LIBVCPROJ_DEPS = $(VC71_LIBTMPL) Makefile.am lib/Makefile.inc
VC71_SRCTMPL = projects/Windows/VC7.1/src/curl.tmpl
VC71_SRCVCPROJ = projects/Windows/VC7.1/src/curl.vcproj.dist
VC71_SRCVCPROJ_DEPS = $(VC71_SRCTMPL) Makefile.am src/Makefile.inc
VC8_LIBTMPL = projects/Windows/VC8/lib/libcurl.tmpl
VC8_LIBVCPROJ = projects/Windows/VC8/lib/libcurl.vcproj.dist
VC8_LIBVCPROJ_DEPS = $(VC8_LIBTMPL) Makefile.am lib/Makefile.inc
VC8_SRCTMPL = projects/Windows/VC8/src/curl.tmpl
VC8_SRCVCPROJ = projects/Windows/VC8/src/curl.vcproj.dist
VC8_SRCVCPROJ_DEPS = $(VC8_SRCTMPL) Makefile.am src/Makefile.inc
VC9_LIBTMPL = projects/Windows/VC9/lib/libcurl.tmpl
VC9_LIBVCPROJ = projects/Windows/VC9/lib/libcurl.vcproj.dist
VC9_LIBVCPROJ_DEPS = $(VC9_LIBTMPL) Makefile.am lib/Makefile.inc
VC9_SRCTMPL = projects/Windows/VC9/src/curl.tmpl
VC9_SRCVCPROJ = projects/Windows/VC9/src/curl.vcproj.dist
VC9_SRCVCPROJ_DEPS = $(VC9_SRCTMPL) Makefile.am src/Makefile.inc
VC10_LIBTMPL = projects/Windows/VC10/lib/libcurl.tmpl
VC10_LIBVCXPROJ = projects/Windows/VC10/lib/libcurl.vcxproj.dist
VC10_LIBVCXPROJ_DEPS = $(VC10_LIBTMPL) Makefile.am lib/Makefile.inc
VC10_SRCTMPL = projects/Windows/VC10/src/curl.tmpl
VC10_SRCVCXPROJ = projects/Windows/VC10/src/curl.vcxproj.dist
VC10_SRCVCXPROJ_DEPS = $(VC10_SRCTMPL) Makefile.am src/Makefile.inc
VC11_LIBTMPL = projects/Windows/VC11/lib/libcurl.tmpl
VC11_LIBVCXPROJ = projects/Windows/VC11/lib/libcurl.vcxproj.dist
VC11_LIBVCXPROJ_DEPS = $(VC11_LIBTMPL) Makefile.am lib/Makefile.inc
VC11_SRCTMPL = projects/Windows/VC11/src/curl.tmpl
VC11_SRCVCXPROJ = projects/Windows/VC11/src/curl.vcxproj.dist
VC11_SRCVCXPROJ_DEPS = $(VC11_SRCTMPL) Makefile.am src/Makefile.inc
VC12_LIBTMPL = projects/Windows/VC12/lib/libcurl.tmpl
VC12_LIBVCXPROJ = projects/Windows/VC12/lib/libcurl.vcxproj.dist
VC12_LIBVCXPROJ_DEPS = $(VC12_LIBTMPL) Makefile.am lib/Makefile.inc
VC12_SRCTMPL = projects/Windows/VC12/src/curl.tmpl
VC12_SRCVCXPROJ = projects/Windows/VC12/src/curl.vcxproj.dist
VC12_SRCVCXPROJ_DEPS = $(VC12_SRCTMPL) Makefile.am src/Makefile.inc
VC14_LIBTMPL = projects/Windows/VC14/lib/libcurl.tmpl
VC14_LIBVCXPROJ = projects/Windows/VC14/lib/libcurl.vcxproj.dist
VC14_LIBVCXPROJ_DEPS = $(VC14_LIBTMPL) Makefile.am lib/Makefile.inc
VC14_SRCTMPL = projects/Windows/VC14/src/curl.tmpl
VC14_SRCVCXPROJ = projects/Windows/VC14/src/curl.vcxproj.dist
VC14_SRCVCXPROJ_DEPS = $(VC14_SRCTMPL) Makefile.am src/Makefile.inc
VC_DIST = projects/README \
projects/build-openssl.bat \
projects/build-wolfssl.bat \
projects/checksrc.bat \
projects/Windows/VC6/curl-all.dsw \
projects/Windows/VC6/lib/libcurl.dsw \
projects/Windows/VC6/src/curl.dsw \
projects/Windows/VC7/curl-all.sln \
projects/Windows/VC7/lib/libcurl.sln \
projects/Windows/VC7/src/curl.sln \
projects/Windows/VC7.1/curl-all.sln \
projects/Windows/VC7.1/lib/libcurl.sln \
projects/Windows/VC7.1/src/curl.sln \
projects/Windows/VC8/curl-all.sln \
projects/Windows/VC8/lib/libcurl.sln \
projects/Windows/VC8/src/curl.sln \
projects/Windows/VC9/curl-all.sln \
projects/Windows/VC9/lib/libcurl.sln \
projects/Windows/VC9/src/curl.sln \
projects/Windows/VC10/curl-all.sln \
projects/Windows/VC10/lib/libcurl.sln \
projects/Windows/VC10/lib/libcurl.vcxproj.filters \
projects/Windows/VC10/src/curl.sln \
projects/Windows/VC10/src/curl.vcxproj.filters \
projects/Windows/VC11/curl-all.sln \
projects/Windows/VC11/lib/libcurl.sln \
projects/Windows/VC11/lib/libcurl.vcxproj.filters \
projects/Windows/VC11/src/curl.sln \
projects/Windows/VC11/src/curl.vcxproj.filters \
projects/Windows/VC12/curl-all.sln \
projects/Windows/VC12/lib/libcurl.sln \
projects/Windows/VC12/lib/libcurl.vcxproj.filters \
projects/Windows/VC12/src/curl.sln \
projects/Windows/VC12/src/curl.vcxproj.filters \
projects/Windows/VC14/curl-all.sln \
projects/Windows/VC14/lib/libcurl.sln \
projects/Windows/VC14/lib/libcurl.vcxproj.filters \
projects/Windows/VC14/src/curl.sln \
projects/Windows/VC14/src/curl.vcxproj.filters \
projects/generate.bat \
projects/wolfssl_options.h \
projects/wolfssl_override.props
WINBUILD_DIST = winbuild/BUILD.WINDOWS.txt winbuild/gen_resp_file.bat \
winbuild/MakefileBuild.vc winbuild/Makefile.vc
EXTRA_DIST = CHANGES COPYING maketgz Makefile.dist curl-config.in \
RELEASE-NOTES buildconf libcurl.pc.in MacOSX-Framework scripts/zsh.pl \
scripts/updatemanpages.pl $(CMAKE_DIST) $(VC_DIST) $(WINBUILD_DIST) \
lib/libcurl.vers.in buildconf.bat scripts/coverage.sh
CLEANFILES = $(VC6_LIBDSP) $(VC6_SRCDSP) $(VC7_LIBVCPROJ) $(VC7_SRCVCPROJ) \
$(VC71_LIBVCPROJ) $(VC71_SRCVCPROJ) $(VC8_LIBVCPROJ) $(VC8_SRCVCPROJ) \
$(VC9_LIBVCPROJ) $(VC9_SRCVCPROJ) $(VC10_LIBVCXPROJ) $(VC10_SRCVCXPROJ) \
$(VC11_LIBVCXPROJ) $(VC11_SRCVCXPROJ) $(VC12_LIBVCXPROJ) $(VC12_SRCVCXPROJ) \
$(VC14_LIBVCXPROJ) $(VC14_SRCVCXPROJ)
bin_SCRIPTS = curl-config
SUBDIRS = lib docs src include
DIST_SUBDIRS = $(SUBDIRS) tests packages scripts
pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = libcurl.pc
# List of files required to generate VC IDE .dsp, .vcproj and .vcxproj files
include lib/Makefile.inc
include src/Makefile.inc
dist-hook:
rm -rf $(top_builddir)/tests/log
find $(distdir) -name "*.dist" -exec rm {} \;
(distit=`find $(srcdir) -name "*.dist" | grep -v ./ares/`; \
for file in $$distit; do \
strip=`echo $$file | sed -e s/^$(srcdir)// -e s/\.dist//`; \
cp $$file $(distdir)$$strip; \
done)
html:
cd docs && $(MAKE) html
pdf:
cd docs && $(MAKE) pdf
check: test examples check-docs
if CROSSCOMPILING
test-full: test
test-torture: test
test:
@echo "NOTICE: we can't run the tests when cross-compiling!"
else
test:
@(cd tests; $(MAKE) all quiet-test)
test-full:
@(cd tests; $(MAKE) all full-test)
test-nonflaky:
@(cd tests; $(MAKE) all nonflaky-test)
test-torture:
@(cd tests; $(MAKE) all torture-test)
test-event:
@(cd tests; $(MAKE) all event-test)
test-am:
@(cd tests; $(MAKE) all am-test)
endif
examples:
@(cd docs/examples; $(MAKE) check)
check-docs:
@(cd docs/libcurl; $(MAKE) check)
# This is a hook to have 'make clean' also clean up the docs and the tests
# dir. The extra check for the Makefiles being present is necessary because
# 'make distcheck' will make clean first in these directories _before_ it runs
# this hook.
clean-local:
@(if test -f tests/Makefile; then cd tests; $(MAKE) clean; fi)
@(if test -f docs/Makefile; then cd docs; $(MAKE) clean; fi)
#
# Build source and binary rpms. For rpm-3.0 and above, the ~/.rpmmacros
# must contain the following line:
# %_topdir /home/loic/local/rpm
# and that /home/loic/local/rpm contains the directory SOURCES, BUILD etc.
#
# cd /home/loic/local/rpm ; mkdir -p SOURCES BUILD RPMS/i386 SPECS SRPMS
#
# If additional configure flags are needed to build the package, add the
# following in ~/.rpmmacros
# %configure CFLAGS="%{optflags}" ./configure %{_target_platform} --prefix=%{_prefix} ${AM_CONFIGFLAGS}
# and run make rpm in the following way:
# AM_CONFIGFLAGS='--with-uri=/home/users/loic/local/RedHat-6.2' make rpm
#
rpms:
$(MAKE) RPMDIST=curl rpm
$(MAKE) RPMDIST=curl-ssl rpm
rpm:
RPM_TOPDIR=`rpm --showrc | $(PERL) -n -e 'print if(s/.*_topdir\s+(.*)/$$1/)'` ; \
cp $(srcdir)/packages/Linux/RPM/$(RPMDIST).spec $$RPM_TOPDIR/SPECS ; \
cp $(PACKAGE)-$(VERSION).tar.gz $$RPM_TOPDIR/SOURCES ; \
rpm -ba --clean --rmsource $$RPM_TOPDIR/SPECS/$(RPMDIST).spec ; \
mv $$RPM_TOPDIR/RPMS/i386/$(RPMDIST)-*.rpm . ; \
mv $$RPM_TOPDIR/SRPMS/$(RPMDIST)-*.src.rpm .
#
# Build a Solaris pkgadd format file
# run 'make pkgadd' once you've done './configure' and 'make' to make a Solaris pkgadd format
# file (which ends up back in this directory).
# The pkgadd file is in 'pkgtrans' format, so to install on Solaris, do
# pkgadd -d ./HAXXcurl-*
#
# gak - libtool requires an absolute directory, hence the pwd below...
pkgadd:
umask 022 ; \
$(MAKE) install DESTDIR=`/bin/pwd`/packages/Solaris/root ; \
cat COPYING > $(srcdir)/packages/Solaris/copyright ; \
cd $(srcdir)/packages/Solaris && $(MAKE) package
#
# Build a cygwin binary tarball installation file
# resulting .tar.bz2 file will end up at packages/Win32/cygwin
cygwinbin:
$(MAKE) -C packages/Win32/cygwin cygwinbin
# We extend the standard install with a custom hook:
install-data-hook:
cd include && $(MAKE) install
cd docs && $(MAKE) install
# We extend the standard uninstall with a custom hook:
uninstall-hook:
cd include && $(MAKE) uninstall
cd docs && $(MAKE) uninstall
ca-bundle: lib/mk-ca-bundle.pl
@echo "generating a fresh ca-bundle.crt"
@perl $< -b -l -u lib/ca-bundle.crt
ca-firefox: lib/firefox-db2pem.sh
@echo "generating a fresh ca-bundle.crt"
./lib/firefox-db2pem.sh lib/ca-bundle.crt
checksrc:
cd lib && $(MAKE) checksrc
cd src && $(MAKE) checksrc
cd tests && $(MAKE) checksrc
cd include/curl && $(MAKE) checksrc
cd docs/examples && $(MAKE) checksrc
.PHONY: vc-ide
vc-ide: $(VC6_LIBDSP_DEPS) $(VC6_SRCDSP_DEPS) $(VC7_LIBVCPROJ_DEPS) \
$(VC7_SRCVCPROJ_DEPS) $(VC71_LIBVCPROJ_DEPS) $(VC71_SRCVCPROJ_DEPS) \
$(VC8_LIBVCPROJ_DEPS) $(VC8_SRCVCPROJ_DEPS) $(VC9_LIBVCPROJ_DEPS) \
$(VC9_SRCVCPROJ_DEPS) $(VC10_LIBVCXPROJ_DEPS) $(VC10_SRCVCXPROJ_DEPS) \
$(VC11_LIBVCXPROJ_DEPS) $(VC11_SRCVCXPROJ_DEPS) $(VC12_LIBVCXPROJ_DEPS) \
$(VC12_SRCVCXPROJ_DEPS) $(VC14_LIBVCXPROJ_DEPS) $(VC14_SRCVCXPROJ_DEPS)
@(win32_lib_srcs='$(LIB_CFILES)'; \
win32_lib_hdrs='$(LIB_HFILES) config-win32.h'; \
win32_lib_rc='$(LIB_RCFILES)'; \
win32_lib_vauth_srcs='$(LIB_VAUTH_CFILES)'; \
win32_lib_vauth_hdrs='$(LIB_VAUTH_HFILES)'; \
win32_lib_vtls_srcs='$(LIB_VTLS_CFILES)'; \
win32_lib_vtls_hdrs='$(LIB_VTLS_HFILES)'; \
win32_src_srcs='$(CURL_CFILES)'; \
win32_src_hdrs='$(CURL_HFILES)'; \
win32_src_rc='$(CURL_RCFILES)'; \
win32_src_x_srcs='$(CURLX_CFILES)'; \
win32_src_x_hdrs='$(CURLX_HFILES) ../lib/config-win32.h'; \
\
sorted_lib_srcs=`for file in $$win32_lib_srcs; do echo $$file; done | sort`; \
sorted_lib_hdrs=`for file in $$win32_lib_hdrs; do echo $$file; done | sort`; \
sorted_lib_vauth_srcs=`for file in $$win32_lib_vauth_srcs; do echo $$file; done | sort`; \
sorted_lib_vauth_hdrs=`for file in $$win32_lib_vauth_hdrs; do echo $$file; done | sort`; \
sorted_lib_vtls_srcs=`for file in $$win32_lib_vtls_srcs; do echo $$file; done | sort`; \
sorted_lib_vtls_hdrs=`for file in $$win32_lib_vtls_hdrs; do echo $$file; done | sort`; \
sorted_src_srcs=`for file in $$win32_src_srcs; do echo $$file; done | sort`; \
sorted_src_hdrs=`for file in $$win32_src_hdrs; do echo $$file; done | sort`; \
sorted_src_x_srcs=`for file in $$win32_src_x_srcs; do echo $$file; done | sort`; \
sorted_src_x_hdrs=`for file in $$win32_src_x_hdrs; do echo $$file; done | sort`; \
\
awk_code='\
function gen_element(type, dir, file)\
{\
sub(/vauth\//, "", file);\
sub(/vtls\//, "", file);\
\
spaces=" ";\
if(dir == "lib\\vauth" || dir == "lib\\vtls")\
tabs=" ";\
else\
tabs=" ";\
\
if(type == "dsp") {\
printf("# Begin Source File\r\n");\
printf("\r\n");\
printf("SOURCE=..\\..\\..\\..\\%s\\%s\r\n", dir, file);\
printf("# End Source File\r\n");\
}\
else if(type == "vcproj1") {\
printf("%s<File\r\n", tabs);\
printf("%s RelativePath=\"..\\..\\..\\..\\%s\\%s\">\r\n",\
tabs, dir, file);\
printf("%s</File>\r\n", tabs);\
}\
else if(type == "vcproj2") {\
printf("%s<File\r\n", tabs);\
printf("%s RelativePath=\"..\\..\\..\\..\\%s\\%s\"\r\n",\
tabs, dir, file);\
printf("%s>\r\n", tabs);\
printf("%s</File>\r\n", tabs);\
}\
else if(type == "vcxproj") {\
i = index(file, ".");\
ext = substr(file, i == 0 ? 0 : i + 1);\
\
if(ext == "c")\
printf("%s<ClCompile Include=\"..\\..\\..\\..\\%s\\%s\" />\r\n",\
spaces, dir, file);\
else if(ext == "h")\
printf("%s<ClInclude Include=\"..\\..\\..\\..\\%s\\%s\" />\r\n",\
spaces, dir, file);\
else if(ext == "rc")\
printf("%s<ResourceCompile Include=\"..\\..\\..\\..\\%s\\%s\" />\r\n",\
spaces, dir, file);\
}\
}\
\
{\
\
if($$0 == "CURL_LIB_C_FILES") {\
split(lib_srcs, arr);\
for(val in arr) gen_element(proj_type, "lib", arr[val]);\
}\
else if($$0 == "CURL_LIB_H_FILES") {\
split(lib_hdrs, arr);\
for(val in arr) gen_element(proj_type, "lib", arr[val]);\
}\
else if($$0 == "CURL_LIB_RC_FILES") {\
split(lib_rc, arr);\
for(val in arr) gen_element(proj_type, "lib", arr[val]);\
}\
else if($$0 == "CURL_LIB_VAUTH_C_FILES") {\
split(lib_vauth_srcs, arr);\
for(val in arr) gen_element(proj_type, "lib\\vauth", arr[val]);\
}\
else if($$0 == "CURL_LIB_VAUTH_H_FILES") {\
split(lib_vauth_hdrs, arr);\
for(val in arr) gen_element(proj_type, "lib\\vauth", arr[val]);\
}\
else if($$0 == "CURL_LIB_VTLS_C_FILES") {\
split(lib_vtls_srcs, arr);\
for(val in arr) gen_element(proj_type, "lib\\vtls", arr[val]);\
}\
else if($$0 == "CURL_LIB_VTLS_H_FILES") {\
split(lib_vtls_hdrs, arr);\
for(val in arr) gen_element(proj_type, "lib\\vtls", arr[val]);\
}\
else if($$0 == "CURL_SRC_C_FILES") {\
split(src_srcs, arr);\
for(val in arr) gen_element(proj_type, "src", arr[val]);\
}\
else if($$0 == "CURL_SRC_H_FILES") {\
split(src_hdrs, arr);\
for(val in arr) gen_element(proj_type, "src", arr[val]);\
}\
else if($$0 == "CURL_SRC_RC_FILES") {\
split(src_rc, arr);\
for(val in arr) gen_element(proj_type, "src", arr[val]);\
}\
else if($$0 == "CURL_SRC_X_C_FILES") {\
split(src_x_srcs, arr);\
for(val in arr) {\
sub(/..\/lib\//, "", arr[val]);\
gen_element(proj_type, "lib", arr[val]);\
}\
}\
else if($$0 == "CURL_SRC_X_H_FILES") {\
split(src_x_hdrs, arr);\
for(val in arr) {\
sub(/..\/lib\//, "", arr[val]);\
gen_element(proj_type, "lib", arr[val]);\
}\
}\
else\
printf("%s\r\n", $$0);\
}';\
\
echo "generating '$(VC6_LIBDSP)'"; \
awk -v proj_type=dsp \
-v lib_srcs="$$sorted_lib_srcs" \
-v lib_hdrs="$$sorted_lib_hdrs" \
-v lib_rc="$$win32_lib_rc" \
-v lib_vauth_srcs="$$sorted_lib_vauth_srcs" \
-v lib_vauth_hdrs="$$sorted_lib_vauth_hdrs" \
-v lib_vtls_srcs="$$sorted_lib_vtls_srcs" \
-v lib_vtls_hdrs="$$sorted_lib_vtls_hdrs" \
"$$awk_code" $(srcdir)/$(VC6_LIBTMPL) > $(VC6_LIBDSP) || { exit 1; }; \
\
echo "generating '$(VC6_SRCDSP)'"; \
awk -v proj_type=dsp \
-v src_srcs="$$sorted_src_srcs" \
-v src_hdrs="$$sorted_src_hdrs" \
-v src_rc="$$win32_src_rc" \
-v src_x_srcs="$$sorted_src_x_srcs" \
-v src_x_hdrs="$$sorted_src_x_hdrs" \
"$$awk_code" $(srcdir)/$(VC6_SRCTMPL) > $(VC6_SRCDSP) || { exit 1; }; \
\
echo "generating '$(VC7_LIBVCPROJ)'"; \
awk -v proj_type=vcproj1 \
-v lib_srcs="$$sorted_lib_srcs" \
-v lib_hdrs="$$sorted_lib_hdrs" \
-v lib_rc="$$win32_lib_rc" \
-v lib_vauth_srcs="$$sorted_lib_vauth_srcs" \
-v lib_vauth_hdrs="$$sorted_lib_vauth_hdrs" \
-v lib_vtls_srcs="$$sorted_lib_vtls_srcs" \
-v lib_vtls_hdrs="$$sorted_lib_vtls_hdrs" \
"$$awk_code" $(srcdir)/$(VC7_LIBTMPL) > $(VC7_LIBVCPROJ) || { exit 1; }; \
\
echo "generating '$(VC7_SRCVCPROJ)'"; \
awk -v proj_type=vcproj1 \
-v src_srcs="$$sorted_src_srcs" \
-v src_hdrs="$$sorted_src_hdrs" \
-v src_rc="$$win32_src_rc" \
-v src_x_srcs="$$sorted_src_x_srcs" \
-v src_x_hdrs="$$sorted_src_x_hdrs" \
"$$awk_code" $(srcdir)/$(VC7_SRCTMPL) > $(VC7_SRCVCPROJ) || { exit 1; }; \
\
echo "generating '$(VC71_LIBVCPROJ)'"; \
awk -v proj_type=vcproj1 \
-v lib_srcs="$$sorted_lib_srcs" \
-v lib_hdrs="$$sorted_lib_hdrs" \
-v lib_rc="$$win32_lib_rc" \
-v lib_vauth_srcs="$$sorted_lib_vauth_srcs" \
-v lib_vauth_hdrs="$$sorted_lib_vauth_hdrs" \
-v lib_vtls_srcs="$$sorted_lib_vtls_srcs" \
-v lib_vtls_hdrs="$$sorted_lib_vtls_hdrs" \
"$$awk_code" $(srcdir)/$(VC71_LIBTMPL) > $(VC71_LIBVCPROJ) || { exit 1; }; \
\
echo "generating '$(VC71_SRCVCPROJ)'"; \
awk -v proj_type=vcproj1 \
-v src_srcs="$$sorted_src_srcs" \
-v src_hdrs="$$sorted_src_hdrs" \
-v src_rc="$$win32_src_rc" \
-v src_x_srcs="$$sorted_src_x_srcs" \
-v src_x_hdrs="$$sorted_src_x_hdrs" \
"$$awk_code" $(srcdir)/$(VC71_SRCTMPL) > $(VC71_SRCVCPROJ) || { exit 1; }; \
\
echo "generating '$(VC8_LIBVCPROJ)'"; \
awk -v proj_type=vcproj2 \
-v lib_srcs="$$sorted_lib_srcs" \
-v lib_hdrs="$$sorted_lib_hdrs" \
-v lib_rc="$$win32_lib_rc" \
-v lib_vauth_srcs="$$sorted_lib_vauth_srcs" \
-v lib_vauth_hdrs="$$sorted_lib_vauth_hdrs" \
-v lib_vtls_srcs="$$sorted_lib_vtls_srcs" \
-v lib_vtls_hdrs="$$sorted_lib_vtls_hdrs" \
"$$awk_code" $(srcdir)/$(VC8_LIBTMPL) > $(VC8_LIBVCPROJ) || { exit 1; }; \
\
echo "generating '$(VC8_SRCVCPROJ)'"; \
awk -v proj_type=vcproj2 \
-v src_srcs="$$sorted_src_srcs" \
-v src_hdrs="$$sorted_src_hdrs" \
-v src_rc="$$win32_src_rc" \
-v src_x_srcs="$$sorted_src_x_srcs" \
-v src_x_hdrs="$$sorted_src_x_hdrs" \
"$$awk_code" $(srcdir)/$(VC8_SRCTMPL) > $(VC8_SRCVCPROJ) || { exit 1; }; \
\
echo "generating '$(VC9_LIBVCPROJ)'"; \
awk -v proj_type=vcproj2 \
-v lib_srcs="$$sorted_lib_srcs" \
-v lib_hdrs="$$sorted_lib_hdrs" \
-v lib_rc="$$win32_lib_rc" \
-v lib_vauth_srcs="$$sorted_lib_vauth_srcs" \
-v lib_vauth_hdrs="$$sorted_lib_vauth_hdrs" \
-v lib_vtls_srcs="$$sorted_lib_vtls_srcs" \
-v lib_vtls_hdrs="$$sorted_lib_vtls_hdrs" \
"$$awk_code" $(srcdir)/$(VC9_LIBTMPL) > $(VC9_LIBVCPROJ) || { exit 1; }; \
\
echo "generating '$(VC9_SRCVCPROJ)'"; \
awk -v proj_type=vcproj2 \
-v src_srcs="$$sorted_src_srcs" \
-v src_hdrs="$$sorted_src_hdrs" \
-v src_rc="$$win32_src_rc" \
-v src_x_srcs="$$sorted_src_x_srcs" \
-v src_x_hdrs="$$sorted_src_x_hdrs" \
"$$awk_code" $(srcdir)/$(VC9_SRCTMPL) > $(VC9_SRCVCPROJ) || { exit 1; }; \
\
echo "generating '$(VC10_LIBVCXPROJ)'"; \
awk -v proj_type=vcxproj \
-v lib_srcs="$$sorted_lib_srcs" \
-v lib_hdrs="$$sorted_lib_hdrs" \
-v lib_rc="$$win32_lib_rc" \
-v lib_vauth_srcs="$$sorted_lib_vauth_srcs" \
-v lib_vauth_hdrs="$$sorted_lib_vauth_hdrs" \
-v lib_vtls_srcs="$$sorted_lib_vtls_srcs" \
-v lib_vtls_hdrs="$$sorted_lib_vtls_hdrs" \
"$$awk_code" $(srcdir)/$(VC10_LIBTMPL) > $(VC10_LIBVCXPROJ) || { exit 1; }; \
\
echo "generating '$(VC10_SRCVCXPROJ)'"; \
awk -v proj_type=vcxproj \
-v src_srcs="$$sorted_src_srcs" \
-v src_hdrs="$$sorted_src_hdrs" \
-v src_rc="$$win32_src_rc" \
-v src_x_srcs="$$sorted_src_x_srcs" \
-v src_x_hdrs="$$sorted_src_x_hdrs" \
"$$awk_code" $(srcdir)/$(VC10_SRCTMPL) > $(VC10_SRCVCXPROJ) || { exit 1; }; \
\
echo "generating '$(VC11_LIBVCXPROJ)'"; \
awk -v proj_type=vcxproj \
-v lib_srcs="$$sorted_lib_srcs" \
-v lib_hdrs="$$sorted_lib_hdrs" \
-v lib_rc="$$win32_lib_rc" \
-v lib_vauth_srcs="$$sorted_lib_vauth_srcs" \
-v lib_vauth_hdrs="$$sorted_lib_vauth_hdrs" \
-v lib_vtls_srcs="$$sorted_lib_vtls_srcs" \
-v lib_vtls_hdrs="$$sorted_lib_vtls_hdrs" \
"$$awk_code" $(srcdir)/$(VC11_LIBTMPL) > $(VC11_LIBVCXPROJ) || { exit 1; }; \
\
echo "generating '$(VC11_SRCVCXPROJ)'"; \
awk -v proj_type=vcxproj \
-v src_srcs="$$sorted_src_srcs" \
-v src_hdrs="$$sorted_src_hdrs" \
-v src_rc="$$win32_src_rc" \
-v src_x_srcs="$$sorted_src_x_srcs" \
-v src_x_hdrs="$$sorted_src_x_hdrs" \
"$$awk_code" $(srcdir)/$(VC11_SRCTMPL) > $(VC11_SRCVCXPROJ) || { exit 1; }; \
\
echo "generating '$(VC12_LIBVCXPROJ)'"; \
awk -v proj_type=vcxproj \
-v lib_srcs="$$sorted_lib_srcs" \
-v lib_hdrs="$$sorted_lib_hdrs" \
-v lib_rc="$$win32_lib_rc" \
-v lib_vauth_srcs="$$sorted_lib_vauth_srcs" \
-v lib_vauth_hdrs="$$sorted_lib_vauth_hdrs" \
-v lib_vtls_srcs="$$sorted_lib_vtls_srcs" \
-v lib_vtls_hdrs="$$sorted_lib_vtls_hdrs" \
"$$awk_code" $(srcdir)/$(VC12_LIBTMPL) > $(VC12_LIBVCXPROJ) || { exit 1; }; \
\
echo "generating '$(VC12_SRCVCXPROJ)'"; \
awk -v proj_type=vcxproj \
-v src_srcs="$$sorted_src_srcs" \
-v src_hdrs="$$sorted_src_hdrs" \
-v src_rc="$$win32_src_rc" \
-v src_x_srcs="$$sorted_src_x_srcs" \
-v src_x_hdrs="$$sorted_src_x_hdrs" \
"$$awk_code" $(srcdir)/$(VC12_SRCTMPL) > $(VC12_SRCVCXPROJ) || { exit 1; }; \
\
echo "generating '$(VC14_LIBVCXPROJ)'"; \
awk -v proj_type=vcxproj \
-v lib_srcs="$$sorted_lib_srcs" \
-v lib_hdrs="$$sorted_lib_hdrs" \
-v lib_rc="$$win32_lib_rc" \
-v lib_vauth_srcs="$$sorted_lib_vauth_srcs" \
-v lib_vauth_hdrs="$$sorted_lib_vauth_hdrs" \
-v lib_vtls_srcs="$$sorted_lib_vtls_srcs" \
-v lib_vtls_hdrs="$$sorted_lib_vtls_hdrs" \
"$$awk_code" $(srcdir)/$(VC14_LIBTMPL) > $(VC14_LIBVCXPROJ) || { exit 1; }; \
\
echo "generating '$(VC14_SRCVCXPROJ)'"; \
awk -v proj_type=vcxproj \
-v src_srcs="$$sorted_src_srcs" \
-v src_hdrs="$$sorted_src_hdrs" \
-v src_rc="$$win32_src_rc" \
-v src_x_srcs="$$sorted_src_x_srcs" \
-v src_x_hdrs="$$sorted_src_x_hdrs" \
"$$awk_code" $(srcdir)/$(VC14_SRCTMPL) > $(VC14_SRCVCXPROJ) || { exit 1; };)

View File

@@ -0,0 +1,49 @@
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
README
Curl is a command line tool for transferring data specified with URL
syntax. Find out how to use curl by reading the curl.1 man page or the
MANUAL document. Find out how to install Curl by reading the INSTALL
document.
libcurl is the library curl is using to do its job. It is readily
available to be used by your software. Read the libcurl.3 man page to
learn how!
You find answers to the most frequent questions we get in the FAQ document.
Study the COPYING file for distribution terms and similar. If you distribute
curl binaries or other binaries that involve libcurl, you might enjoy the
LICENSE-MIXING document.
CONTACT
If you have problems, questions, ideas or suggestions, please contact us
by posting to a suitable mailing list. See https://curl.haxx.se/mail/
All contributors to the project are listed in the THANKS document.
WEB SITE
Visit the curl web site for the latest news and downloads:
https://curl.haxx.se/
GIT
To download the very latest source off the GIT server do this:
git clone https://github.com/curl/curl.git
(you'll get a directory named curl created, filled with the source code)
NOTICE
Curl contains pieces of source code that is Copyright (c) 1998, 1999
Kungliga Tekniska Högskolan. This notice is included here to comply with the
distribution terms.

View File

@@ -0,0 +1,219 @@
Curl and libcurl 7.54.1
Public curl releases: 166
Command line options: 207
curl_easy_setopt() options: 245
Public functions in libcurl: 61
Contributors: 1571
This release includes the following changes:
o curl: show the libcurl release date in --version output [32]
This release includes the following bugfixes:
o CVE-2017-9502: file: URL buffer overflow [65]
o openssl: fix memory leak in servercert
o tests: remove the html and PDF versions from the tarball
o mbedtls: enable NTLM (& SMB) even if MD4 support is unavailable
o typecheck-gcc: handle function pointers properly [1]
o llist: no longer uses malloc [2]
o gnutls: removed some code when --disable-verbose is configured
o lib: fix maybe-uninitialized warnings
o multi: clarify condition in curl_multi_wait [3]
o schannel: Don't treat encrypted partial record as pending data [4]
o configure: fix the -ldl check for openssl, add -lpthread check [5]
o configure: accept -Og and -Ofast GCC flags [6]
o Makefile: avoid use of GNU-specific form of $< [7]
o if2ip: fix -Wcast-align warning
o configure: stop prepending to LDFLAGS, CPPFLAGS [8]
o curl: set a 100K buffer size by default [9]
o typecheck-gcc: fix _curl_is_slist_info [10]
o nss: do not leak PKCS #11 slot while loading a key [11]
o nss: load libnssckbi.so if no other trust is specified [12]
o examples: ftpuploadfrommem.c [13]
o url: declare get_protocol_family() static [14]
o examples/cookie_interface.c: changed to example.com
o test1443: test --remote-time
o curl: use utimes instead of obsolescent utime when available
o url: fixed a memory leak on OOM while setting CURLOPT_BUFFERSIZE
o curl_rtmp: fix missing-variable-declarations warnings
o tests: fixed OOM handling of unit tests to abort test
o curl_setup: Ensure no more than one IDN lib is enabled [15]
o tool: Fix missing prototype warnings for CURL_DOES_CONVERSIONS [16]
o CURLOPT_BUFFERSIZE: 1024 bytes is now the minimum size [17]
o curl: non-boolean command line args reject --no- prefixes [18]
o telnet: Write full buffer instead of byte-by-byte [19]
o typecheck-gcc: add missing string options [20]
o typecheck-gcc: add support for CURLINFO_SOCKET [21]
o opt man pages: they all have examples now
o curl_setup_once: use SEND_QUAL_ARG2 for swrite [22]
o test557: set a known good numeric locale
o schannel: return a more specific error code for SEC_E_UNTRUSTED_ROOT
o tests/server: make string literals const
o runtests: use -R for random order [23]
o unit1305: fix compiler warning
o curl_slist_append.3: clarify a NULL input creates a new list
o tests/server: run checksrc by default in debug-builds
o tests: fix -Wcast-qual warnings
o runtests.pl: simplify the datacheck read section
o curl: remove --environment and tool_writeenv.c [24]
o buildconf: fix hang on IRIX [25]
o tftp: silence bad-function-cast warning
o asyn-thread: fix unused macro warnings
o tool_parsecfg: fix -Wcast-qual warning
o sendrecv: fix MinGW-w64 warning
o test537: use correct variable type [26]
o rand: treat fake entropy the same regardless of endianness [27]
o curl: generate the --help output [28]
o tests: removed redundant --trace-ascii arguments
o multi: assign IDs to all timers and make each timer singleton
o multi: use a fixed array of timers instead of malloc [29]
o mbedtls: Support server renegotiation request [30]
o pipeline: fix mistakenly trying to pipeline POSTs [31]
o lib510: don't write past the end of the buffer if it's too small
o CURLOPT_HTTPPROXYTUNNEL.3: clarify, add example
o SecureTransport/DarwinSSL: Implement public key pinning [33]
o curl.1: clarify --config
o curl_sasl: fix build error with CURL_DISABLE_CRYPTO_AUTH + USE_NTLM [34]
o darwinssl: Fix exception when processing a client-side certificate [35]
o curl.1: mention --oauth2-bearer's <token> argument
o mkhelp.pl: do not add current time into curl binary [36]
o asiohiper.cpp / evhiperfifo.c: deal with negative timerfunction input [37]
o ssh: fix memory leak in disconnect due to timeout [38]
o tests: stabilize test 1034 [39]
o cmake: auto detection of CURL_CA_BUNDLE/CURL_CA_PATH [40]
o assert: avoid, use DEBUGASSERT instead [41]
o LDAP: using ldap_bind_s on Windows with methods [42]
o redirect: store the "would redirect to" URL when max redirs is reached [43]
o winbuild: fix the nghttp2 build [44]
o examples: fix -Wimplicit-fallthrough warnings
o time: fix type conversions and compiler warnings [45]
o mbedtls: fix variable shadow warning
o test557: fix ubsan runtime error due to int left shift [46]
o transfer: init the infilesize from the postfields [47]
o docs: clarify NO_PROXY further [48]
o build-wolfssl: Sync config with wolfSSL 3.11
o curl-compilers.m4: enable -Wshift-sign-overflow for clang [49]
o example/externalsocket.c: make it use CLOSESOCKETFUNCTION too
o lib574.c: use correct callback proto
o lib583: fix compiler warning
o curl-compilers.m4: fix compiler_num for clang [50]
o typecheck-gcc.h: separate getinfo slist checks from other pointers [51]
o typecheck-gcc.h: check CURLINFO_TLS_SSL_PTR and CURLINFO_TLS_SESSION
o typecheck-gcc.h: check CURLINFO_CERTINFO [52]
o build: provide easy code coverage measuring [53]
o test1537: dedicated tests of the URL (un)escape API calls [54]
o curl_endian: remove unused functions [55]
o test1538: verify the libcurl strerror API calls
o MD(4|5): silence cast-align clang warning
o dedotdot: fixed output for ".." and "." only input [56]
o cyassl: define build macros before including ssl.h [57]
o updatemanpages.pl: error out on too old git version
o curl_sasl: fix unused-variable warning
o x509asn1: fix implicit-fallthrough warning with GCC 7
o libtest: fix implicit-fallthrough warnings with GCC 7
o BINDINGS: add Ring binding [58]
o curl_ntlm_core: pass unsigned char to toupper
o test1262: verify ftp download with -z for "if older than this"
o test1521: test all curl_easy_setopt options [59]
o typecheck-gcc: allow CURLOPT_STDERR to be NULL too
o metalink: remove unused printf() argument
o file: make speedcheck use current time for checks [60]
o configure: fix link with librtmp when specifying path [61]
o examples/multi-uv.c: fix deprecated symbol [62]
o cmake: Fix inconsistency regarding mbed TLS include directory [63]
o setopt: check CURLOPT_ADDRESS_SCOPE option range
o gitignore: ignore all vim swap files [64]
o urlglob: fix division by zero
o libressl: OCSP and intermediate certs workaround no longer needed [66]
This release includes the following known bugs:
o see docs/KNOWN_BUGS (https://curl.haxx.se/docs/knownbugs.html)
This release would not have looked like this without help, code, reports and
advice from friends like these:
Akhil Kedia, Alan Jenkins, Anatol Belski, Bernhard M. Wiedemann,
Brian Childs, canavan at github, Chris Carlmar, Dan Fandrich,
Daniel Stenberg, Edward Thomson, Gisle Vanem, GwanYeong Kim,
Helmut K. C. Tessarek, Joel Depooter, jonrumsey at github, Kai Engert,
Kamil Dudka, Kevin Ji, Lloyd Fournier, Mahmoud Samir Fayed, Marcel Raad,
Martin Kepplinger, Max Dymond, Michael Kaufmann, Nick Zitzmann, Paul Harris,
Phil Crump, Piotr Dobrogost, Ray Satiro, Richard Hsu, Ron Eldor,
Ryuichi KAWAMATA, Sergei Nikulov, Simon Warta, stootill at github,
Stuart Henderson, TheAssassin at github, Thomas Klausner, Travis Burtrum,
Vincas Razma, wyattoday at github,
(41 contributors)
Thanks! (and sorry if I forgot to mention someone)
References to bug reports and discussions on issues:
[1] = https://curl.haxx.se/bug/?i=1403
[2] = https://curl.haxx.se/bug/?i=1435
[3] = https://curl.haxx.se/bug/?i=1439
[4] = https://curl.haxx.se/bug/?i=1392
[5] = https://curl.haxx.se/bug/?i=1427
[6] = https://curl.haxx.se/bug/?i=1440
[7] = https://curl.haxx.se/bug/?i=1432
[8] = https://curl.haxx.se/bug/?i=1420
[9] = https://curl.haxx.se/bug/?i=1446
[10] = https://curl.haxx.se/bug/?i=1447
[11] = https://bugzilla.redhat.com/1444860
[12] = https://curl.haxx.se/bug/?i=1414
[13] = https://curl.haxx.se/bug/?i=1451
[14] = https://curl.haxx.se/mail/lib-2017-04/0127.html
[15] = https://github.com/curl/curl/issues/1441#issuecomment-297689856
[16] = https://curl.haxx.se/bug/?i=1460
[17] = https://curl.haxx.se/bug/?i=1449
[18] = https://curl.haxx.se/bug/?i=1453
[19] = https://curl.haxx.se/bug/?i=1389
[20] = https://curl.haxx.se/bug/?i=1452
[21] = https://curl.haxx.se/bug/?i=1452
[22] = https://curl.haxx.se/bug/?i=1464
[23] = https://curl.haxx.se/bug/?i=1466
[24] = https://curl.haxx.se/bug/?i=1463
[25] = https://curl.haxx.se/bug/?i=1471
[26] = https://curl.haxx.se/bug/?i=1469
[27] = https://curl.haxx.se/bug/?i=1315
[28] = https://curl.haxx.se/bug/?i=1465
[29] = https://curl.haxx.se/bug/?i=1472
[30] = https://curl.haxx.se/bug/?i=1475
[31] = https://curl.haxx.se/bug/?i=1481
[32] = https://curl.haxx.se/bug/?i=1474
[33] = https://curl.haxx.se/bug/?i=1400
[34] = https://curl.haxx.se/bug/?i=1487
[35] = https://curl.haxx.se/bug/?i=1450
[36] = https://curl.haxx.se/bug/?i=1490
[37] = https://curl.haxx.se/bug/?i=1253
[38] = https://curl.haxx.se/bug/?i=1479
[39] = https://curl.haxx.se/bug/?i=1488
[40] = https://curl.haxx.se/bug/?i=1461
[41] = https://curl.haxx.se/bug/?i=1504
[42] = https://curl.haxx.se/bug/?i=878
[43] = https://curl.haxx.se/bug/?i=1489
[44] = https://curl.haxx.se/bug/?i=1321
[45] = https://curl.haxx.se/bug/?i=1499
[46] = https://curl.haxx.se/bug/?i=1516
[47] = https://curl.haxx.se/bug/?i=1294
[48] = https://curl.haxx.se/bug/?i=1208
[49] = https://curl.haxx.se/bug/?i=1516
[50] = https://curl.haxx.se/bug/?i=1522
[51] = https://curl.haxx.se/bug/?i=1524
[52] = https://curl.haxx.se/bug/?i=846
[53] = https://curl.haxx.se/bug/?i=1528
[54] = https://curl.haxx.se/bug/?i=1530
[55] = https://curl.haxx.se/bug/?i=1529
[56] = https://curl.haxx.se/bug/?i=1532
[57] = https://curl.haxx.se/bug/?i=1536
[58] = https://curl.haxx.se/bug/?i=1539
[59] = https://curl.haxx.se/bug/?i=1543
[60] = https://curl.haxx.se/bug/?i=1550
[61] = https://curl.haxx.se/mail/lib-2017-06/0017.html
[62] = https://curl.haxx.se/bug/?i=1557
[63] = https://curl.haxx.se/bug/?i=1541
[64] = https://curl.haxx.se/bug/?i=1561
[65] = https://curl.haxx.se/docs/adv_20170614.html
[66] = https://curl.haxx.se/mail/lib-2017-06/0038.html

File diff suppressed because it is too large

View File

@@ -0,0 +1,449 @@
#!/bin/sh
#***************************************************************************
# _ _ ____ _
# Project ___| | | | _ \| |
# / __| | | | |_) | |
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
# Copyright (C) 1998 - 2017, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.haxx.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################
#--------------------------------------------------------------------------
# die prints argument string to stdout and exits this shell script.
#
die(){
echo "buildconf: $@"
exit 1
}
#--------------------------------------------------------------------------
# findtool works as 'which' but we use a different name to make it more
# obvious we aren't using 'which'! ;-)
# Unlike 'which', the current directory is ignored.
#
findtool(){
file="$1"
if { echo "$file" | grep "/" >/dev/null 2>&1; } then
# when file is given with a path check it first
if test -f "$file"; then
echo "$file"
return
fi
fi
old_IFS=$IFS; IFS=':'
for path in $PATH
do
IFS=$old_IFS
# echo "checks for $file in $path" >&2
if test "$path" -a "$path" != '.' -a -f "$path/$file"; then
echo "$path/$file"
return
fi
done
IFS=$old_IFS
}
#--------------------------------------------------------------------------
# removethis() removes all files and subdirectories with the given name,
# inside and below the current subdirectory at invocation time.
#
removethis(){
if test "$#" = "1"; then
find . -depth -name $1 -print > buildconf.tmp.$$
while read fdname
do
if test -f "$fdname"; then
rm -f "$fdname"
elif test -d "$fdname"; then
rm -f -r "$fdname"
fi
done < buildconf.tmp.$$
rm -f buildconf.tmp.$$
fi
}
#--------------------------------------------------------------------------
# Ensure that buildconf runs from the subdirectory where configure.ac lives
#
if test ! -f configure.ac ||
test ! -f src/tool_main.c ||
test ! -f lib/urldata.h ||
test ! -f include/curl/curl.h ||
test ! -f m4/curl-functions.m4; then
echo "Can not run buildconf from outside of curl's source subdirectory!"
echo "Change to the subdirectory where buildconf is found, and try again."
exit 1
fi
#--------------------------------------------------------------------------
# autoconf 2.57 or newer. Unpatched version 2.67 does not generate a proper
# configure script. Unpatched version 2.68 is simply unusable; we should
# disallow 2.68 usage.
#
need_autoconf="2.57"
ac_version=`${AUTOCONF:-autoconf} --version 2>/dev/null|head -n 1| sed -e 's/^[^0-9]*//' -e 's/[a-z]* *$//'`
if test -z "$ac_version"; then
echo "buildconf: autoconf not found."
echo " You need autoconf version $need_autoconf or newer installed."
exit 1
fi
old_IFS=$IFS; IFS='.'; set $ac_version; IFS=$old_IFS
if test "$1" = "2" -a "$2" -lt "57" || test "$1" -lt "2"; then
echo "buildconf: autoconf version $ac_version found."
echo " You need autoconf version $need_autoconf or newer installed."
echo " If you have a sufficient autoconf installed, but it"
echo " is not named 'autoconf', then try setting the"
echo " AUTOCONF environment variable."
exit 1
fi
if test "$1" = "2" -a "$2" -eq "67"; then
echo "buildconf: autoconf version $ac_version (BAD)"
echo " Unpatched version generates broken configure script."
elif test "$1" = "2" -a "$2" -eq "68"; then
echo "buildconf: autoconf version $ac_version (BAD)"
echo " Unpatched version generates unusable configure script."
else
echo "buildconf: autoconf version $ac_version (ok)"
fi
am4te_version=`${AUTOM4TE:-autom4te} --version 2>/dev/null|head -n 1| sed -e 's/autom4te\(.*\)/\1/' -e 's/^[^0-9]*//' -e 's/[a-z]* *$//'`
if test -z "$am4te_version"; then
echo "buildconf: autom4te not found. Weird autoconf installation!"
exit 1
fi
if test "$am4te_version" = "$ac_version"; then
echo "buildconf: autom4te version $am4te_version (ok)"
else
echo "buildconf: autom4te version $am4te_version (ERROR: does not match autoconf version)"
exit 1
fi
#--------------------------------------------------------------------------
# autoheader 2.50 or newer
#
ah_version=`${AUTOHEADER:-autoheader} --version 2>/dev/null|head -n 1| sed -e 's/^[^0-9]*//' -e 's/[a-z]* *$//'`
if test -z "$ah_version"; then
echo "buildconf: autoheader not found."
echo " You need autoheader version 2.50 or newer installed."
exit 1
fi
old_IFS=$IFS; IFS='.'; set $ah_version; IFS=$old_IFS
if test "$1" = "2" -a "$2" -lt "50" || test "$1" -lt "2"; then
echo "buildconf: autoheader version $ah_version found."
echo " You need autoheader version 2.50 or newer installed."
echo " If you have a sufficient autoheader installed, but it"
echo " is not named 'autoheader', then try setting the"
echo " AUTOHEADER environment variable."
exit 1
fi
echo "buildconf: autoheader version $ah_version (ok)"
#--------------------------------------------------------------------------
# automake 1.7 or newer
#
need_automake="1.7"
am_version=`${AUTOMAKE:-automake} --version 2>/dev/null|head -n 1| sed -e 's/^.* \([0-9]\)/\1/' -e 's/[a-z]* *$//' -e 's/\(.*\)\(-p.*\)/\1/'`
if test -z "$am_version"; then
echo "buildconf: automake not found."
echo " You need automake version $need_automake or newer installed."
exit 1
fi
old_IFS=$IFS; IFS='.'; set $am_version; IFS=$old_IFS
if test "$1" = "1" -a "$2" -lt "7" || test "$1" -lt "1"; then
echo "buildconf: automake version $am_version found."
echo " You need automake version $need_automake or newer installed."
echo " If you have a sufficient automake installed, but it"
echo " is not named 'automake', then try setting the"
echo " AUTOMAKE environment variable."
exit 1
fi
echo "buildconf: automake version $am_version (ok)"
acloc_version=`${ACLOCAL:-aclocal} --version 2>/dev/null|head -n 1| sed -e 's/^.* \([0-9]\)/\1/' -e 's/[a-z]* *$//' -e 's/\(.*\)\(-p.*\)/\1/'`
if test -z "$acloc_version"; then
echo "buildconf: aclocal not found. Weird automake installation!"
exit 1
fi
if test "$acloc_version" = "$am_version"; then
echo "buildconf: aclocal version $acloc_version (ok)"
else
echo "buildconf: aclocal version $acloc_version (ERROR: does not match automake version)"
exit 1
fi
#--------------------------------------------------------------------------
# GNU libtoolize preliminary check
#
want_lt_major=1
want_lt_minor=4
want_lt_patch=2
want_lt_version=1.4.2
# Trying 'glibtoolize' first is intended for systems where GNU libtool is
# installed as 'glibtoolize' and 'libtoolize' is not the GNU version.
libtoolize=`findtool glibtoolize 2>/dev/null`
if test ! -x "$libtoolize"; then
libtoolize=`findtool ${LIBTOOLIZE:-libtoolize}`
fi
if test -z "$libtoolize"; then
echo "buildconf: libtoolize not found."
echo " You need GNU libtoolize $want_lt_version or newer installed."
exit 1
fi
lt_pver=`$libtoolize --version 2>/dev/null|head -n 1`
lt_qver=`echo $lt_pver|sed -e "s/([^)]*)//g" -e "s/^[^0-9]*//g"`
lt_version=`echo $lt_qver|sed -e "s/[- ].*//" -e "s/\([a-z]*\)$//"`
if test -z "$lt_version"; then
echo "buildconf: libtoolize not found."
echo " You need GNU libtoolize $want_lt_version or newer installed."
exit 1
fi
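# split the dotted version string so that $1/$2/$3 hold major/minor/patch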
old_IFS=$IFS; IFS='.'; set $lt_version; IFS=$old_IFS
lt_major=$1
lt_minor=$2
lt_patch=$3
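# classify the found libtool version against the wanted one, comparing major,
# then minor, then patch; a missing component counts as too old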
if test -z "$lt_major"; then
lt_status="bad"
elif test "$lt_major" -gt "$want_lt_major"; then
lt_status="good"
elif test "$lt_major" -lt "$want_lt_major"; then
lt_status="bad"
elif test -z "$lt_minor"; then
lt_status="bad"
elif test "$lt_minor" -gt "$want_lt_minor"; then
lt_status="good"
elif test "$lt_minor" -lt "$want_lt_minor"; then
lt_status="bad"
elif test -z "$lt_patch"; then
lt_status="bad"
elif test "$lt_patch" -gt "$want_lt_patch"; then
lt_status="good"
elif test "$lt_patch" -lt "$want_lt_patch"; then
lt_status="bad"
else
lt_status="good"
fi
if test "$lt_status" != "good"; then
echo "buildconf: libtoolize version $lt_version found."
echo " You need GNU libtoolize $want_lt_version or newer installed."
exit 1
fi
echo "buildconf: libtoolize version $lt_version (ok)"
#--------------------------------------------------------------------------
# m4 check
#
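# probe 'm4' first and fall back to 'gm4'; stdin is closed (0<&-) so the
# version probe cannot hang waiting for terminal input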
m4=`(${M4:-m4} --version 0<&- || ${M4:-gm4} --version) 2>/dev/null 0<&- | head -n 1`;
m4_version=`echo $m4 | sed -e 's/^.* \([0-9]\)/\1/' -e 's/[a-z]* *$//'`
if { echo $m4 | grep "GNU" >/dev/null 2>&1; } then
echo "buildconf: GNU m4 version $m4_version (ok)"
else
if test -z "$m4"; then
echo "buildconf: m4 version not recognized. You need a GNU m4 installed!"
else
echo "buildconf: m4 version $m4 found. You need a GNU m4 installed!"
fi
exit 1
fi
#--------------------------------------------------------------------------
# perl check
#
PERL=`findtool ${PERL:-perl}`
if test -z "$PERL"; then
echo "buildconf: perl not found"
exit 1
fi
#--------------------------------------------------------------------------
# Remove files generated on previous buildconf/configure run.
#
for fname in .deps \
.libs \
*.la \
*.lo \
*.a \
*.o \
Makefile \
Makefile.in \
aclocal.m4 \
aclocal.m4.bak \
ares_build.h \
ares_config.h \
ares_config.h.in \
autom4te.cache \
compile \
config.guess \
curl_config.h \
curl_config.h.in \
config.log \
config.lt \
config.status \
config.sub \
configure \
configurehelp.pm \
curl-config \
curlbuild.h \
depcomp \
libcares.pc \
libcurl.pc \
libtool \
libtool.m4 \
libtool.m4.tmp \
ltmain.sh \
ltoptions.m4 \
ltsugar.m4 \
ltversion.m4 \
lt~obsolete.m4 \
missing \
install-sh \
stamp-h1 \
stamp-h2 \
stamp-h3 ; do
removethis "$fname"
done
#--------------------------------------------------------------------------
# run the correct scripts now
#
echo "buildconf: running libtoolize"
${libtoolize} --copy --force || die "libtoolize command failed"
# When using libtool 1.5.X (X < 26) we copy libtool.m4 to our local m4
# subdirectory and this local copy is patched to fix some warnings that
# are triggered when running aclocal and using autoconf 2.62 or later.
if test "$lt_major" = "1" && test "$lt_minor" = "5"; then
if test -z "$lt_patch" || test "$lt_patch" -lt "26"; then
echo "buildconf: copying libtool.m4 to local m4 subdir"
ac_dir=`${ACLOCAL:-aclocal} --print-ac-dir`
if test -f $ac_dir/libtool.m4; then
cp -f $ac_dir/libtool.m4 m4/libtool.m4
else
echo "buildconf: $ac_dir/libtool.m4 not found"
fi
if test -f m4/libtool.m4; then
echo "buildconf: renaming some variables in local m4/libtool.m4"
$PERL -i.tmp -pe \
's/lt_prog_compiler_pic_works/lt_cv_prog_compiler_pic_works/g; \
s/lt_prog_compiler_static_works/lt_cv_prog_compiler_static_works/g;' \
m4/libtool.m4
rm -f m4/libtool.m4.tmp
fi
fi
fi
if test -f m4/libtool.m4; then
echo "buildconf: converting all mv to mv -f in local m4/libtool.m4"
$PERL -i.tmp -pe 's/\bmv +([^-\s])/mv -f $1/g' m4/libtool.m4
rm -f m4/libtool.m4.tmp
fi
echo "buildconf: running aclocal"
${ACLOCAL:-aclocal} -I m4 $ACLOCAL_FLAGS || die "aclocal command failed"
echo "buildconf: converting all mv to mv -f in local aclocal.m4"
$PERL -i.bak -pe 's/\bmv +([^-\s])/mv -f $1/g' aclocal.m4
echo "buildconf: running autoheader"
${AUTOHEADER:-autoheader} || die "autoheader command failed"
echo "buildconf: running autoconf"
${AUTOCONF:-autoconf} || die "autoconf command failed"
if test -d ares; then
cd ares
echo "buildconf: running in ares"
./buildconf
cd ..
fi
echo "buildconf: running automake"
${AUTOMAKE:-automake} --add-missing --copy || die "automake command failed"
#--------------------------------------------------------------------------
# GNU libtool complementary check
#
# Depending on the libtool and automake versions being used, config.guess
# might not be installed in the subdirectory until automake has finished.
# So we can not attempt to use it until this very last buildconf stage.
#
if test ! -f ./config.guess; then
echo "buildconf: config.guess not found"
else
buildhost=`./config.guess 2>/dev/null|head -n 1`
case $buildhost in
*-*-darwin*)
need_lt_major=1
need_lt_minor=5
need_lt_patch=26
need_lt_check="yes"
;;
*-*-hpux*)
need_lt_major=1
need_lt_minor=5
need_lt_patch=24
need_lt_check="yes"
;;
esac
if test ! -z "$need_lt_check"; then
if test -z "$lt_major"; then
lt_status="bad"
elif test "$lt_major" -gt "$need_lt_major"; then
lt_status="good"
elif test "$lt_major" -lt "$need_lt_major"; then
lt_status="bad"
elif test -z "$lt_minor"; then
lt_status="bad"
elif test "$lt_minor" -gt "$need_lt_minor"; then
lt_status="good"
elif test "$lt_minor" -lt "$need_lt_minor"; then
lt_status="bad"
elif test -z "$lt_patch"; then
lt_status="bad"
elif test "$lt_patch" -gt "$need_lt_patch"; then
lt_status="good"
elif test "$lt_patch" -lt "$need_lt_patch"; then
lt_status="bad"
else
lt_status="good"
fi
if test "$lt_status" != "good"; then
need_lt_version="$need_lt_major.$need_lt_minor.$need_lt_patch"
echo "buildconf: libtool version $lt_version found."
echo " $buildhost requires GNU libtool $need_lt_version or newer installed."
rm -f configure
exit 1
fi
fi
fi
#--------------------------------------------------------------------------
# Finished successfully.
#
echo "buildconf: OK"
exit 0

View File

@@ -0,0 +1,350 @@
@echo off
rem ***************************************************************************
rem * _ _ ____ _
rem * Project ___| | | | _ \| |
rem * / __| | | | |_) | |
rem * | (__| |_| | _ <| |___
rem * \___|\___/|_| \_\_____|
rem *
rem * Copyright (C) 1998 - 2016, Daniel Stenberg, <daniel@haxx.se>, et al.
rem *
rem * This software is licensed as described in the file COPYING, which
rem * you should have received as part of this distribution. The terms
rem * are also available at https://curl.haxx.se/docs/copyright.html.
rem *
rem * You may opt to use, copy, modify, merge, publish, distribute and/or sell
rem * copies of the Software, and permit persons to whom the Software is
rem * furnished to do so, under the terms of the COPYING file.
rem *
rem * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
rem * KIND, either express or implied.
rem *
rem ***************************************************************************
rem NOTES
rem
rem This batch file must be used to set up a git tree to build on systems where
rem there is no autotools support (i.e. DOS and Windows).
rem
:begin
rem Set our variables
if "%OS%" == "Windows_NT" setlocal
set MODE=GENERATE
rem Switch to this batch file's directory
cd /d "%~0\.." 1>NUL 2>&1
rem Check we are running from a curl git repository
if not exist GIT-INFO goto norepo
rem Detect programs. HAVE_<PROGNAME>
rem When not found the variable is set undefined. The undefined pattern
rem allows for statements like "if not defined HAVE_PERL (command)"
groff --version <NUL 1>NUL 2>&1
if errorlevel 1 (set HAVE_GROFF=) else (set HAVE_GROFF=Y)
nroff --version <NUL 1>NUL 2>&1
if errorlevel 1 (set HAVE_NROFF=) else (set HAVE_NROFF=Y)
perl --version <NUL 1>NUL 2>&1
if errorlevel 1 (set HAVE_PERL=) else (set HAVE_PERL=Y)
gzip --version <NUL 1>NUL 2>&1
if errorlevel 1 (set HAVE_GZIP=) else (set HAVE_GZIP=Y)
:parseArgs
if "%~1" == "" goto start
if /i "%~1" == "-clean" (
set MODE=CLEAN
) else if /i "%~1" == "-?" (
goto syntax
) else if /i "%~1" == "-h" (
goto syntax
) else if /i "%~1" == "-help" (
goto syntax
) else (
goto unknown
)
shift & goto parseArgs
:start
if "%MODE%" == "GENERATE" (
echo.
echo Generating prerequisite files
call :generate
if errorlevel 4 goto nogencurlbuild
if errorlevel 3 goto nogenhugehelp
if errorlevel 2 goto nogenmakefile
if errorlevel 1 goto warning
) else (
echo.
echo Removing prerequisite files
call :clean
if errorlevel 3 goto nocleancurlbuild
if errorlevel 2 goto nocleanhugehelp
if errorlevel 1 goto nocleanmakefile
)
goto success
rem Main generate function.
rem
rem Returns:
rem
rem 0 - success
rem 1 - success with simplified tool_hugehelp.c
rem 2 - failed to generate Makefile
rem 3 - failed to generate tool_hugehelp.c
rem 4 - failed to generate curlbuild.h
rem
:generate
if "%OS%" == "Windows_NT" setlocal
set BASIC_HUGEHELP=0
rem Create Makefile
echo * %CD%\Makefile
if exist Makefile.dist (
copy /Y Makefile.dist Makefile 1>NUL 2>&1
if errorlevel 1 (
if "%OS%" == "Windows_NT" endlocal
exit /B 2
)
)
rem Create tool_hugehelp.c
echo * %CD%\src\tool_hugehelp.c
call :genHugeHelp
if errorlevel 2 (
if "%OS%" == "Windows_NT" endlocal
exit /B 3
)
if errorlevel 1 (
set BASIC_HUGEHELP=1
)
cmd /c exit 0
rem Create curlbuild.h
echo * %CD%\include\curl\curlbuild.h
if exist include\curl\curlbuild.h.dist (
copy /Y include\curl\curlbuild.h.dist include\curl\curlbuild.h 1>NUL 2>&1
if errorlevel 1 (
if "%OS%" == "Windows_NT" endlocal
exit /B 4
)
)
rem Setup c-ares git tree
if exist ares\buildconf.bat (
echo.
echo Configuring c-ares build environment
cd ares
call buildconf.bat
cd ..
)
if "%BASIC_HUGEHELP%" == "1" (
if "%OS%" == "Windows_NT" endlocal
exit /B 1
)
if "%OS%" == "Windows_NT" endlocal
exit /B 0
rem Main clean function.
rem
rem Returns:
rem
rem 0 - success
rem 1 - failed to clean Makefile
rem 2 - failed to clean tool_hugehelp.c
rem 3 - failed to clean curlbuild.h
rem
:clean
rem Remove Makefile
echo * %CD%\Makefile
if exist Makefile (
del Makefile 2>NUL
if exist Makefile (
exit /B 1
)
)
rem Remove tool_hugehelp.c
echo * %CD%\src\tool_hugehelp.c
if exist src\tool_hugehelp.c (
del src\tool_hugehelp.c 2>NUL
if exist src\tool_hugehelp.c (
exit /B 2
)
)
rem Remove curlbuild.h
echo * %CD%\include\curl\curlbuild.h
if exist include\curl\curlbuild.h (
del include\curl\curlbuild.h 2>NUL
if exist include\curl\curlbuild.h (
exit /B 3
)
)
exit /B
rem Function to generate src\tool_hugehelp.c
rem
rem Returns:
rem
rem 0 - full tool_hugehelp.c generated
rem 1 - simplified tool_hugehelp.c
rem 2 - failure
rem
:genHugeHelp
if "%OS%" == "Windows_NT" setlocal
set LC_ALL=C
set ROFFCMD=
set BASIC=1
if defined HAVE_PERL (
if defined HAVE_GROFF (
set ROFFCMD=groff -mtty-char -Tascii -P-c -man
) else if defined HAVE_NROFF (
set ROFFCMD=nroff -c -Tascii -man
)
)
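rem With a roff formatter available, render docs\curl.1 to plain text and feed
rem it through mkhelp.pl; when gzip is also present a compressed copy is
rem embedded as well, selected at compile time by HAVE_LIBZ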
if defined ROFFCMD (
echo #include "tool_setup.h"> src\tool_hugehelp.c
echo #include "tool_hugehelp.h">> src\tool_hugehelp.c
if defined HAVE_GZIP (
echo #ifndef HAVE_LIBZ>> src\tool_hugehelp.c
)
%ROFFCMD% docs\curl.1 2>NUL | perl src\mkhelp.pl docs\MANUAL >> src\tool_hugehelp.c
if defined HAVE_GZIP (
echo #else>> src\tool_hugehelp.c
%ROFFCMD% docs\curl.1 2>NUL | perl src\mkhelp.pl -c docs\MANUAL >> src\tool_hugehelp.c
echo #endif /^* HAVE_LIBZ ^*/>> src\tool_hugehelp.c
)
set BASIC=0
) else (
if exist src\tool_hugehelp.c.cvs (
copy /Y src\tool_hugehelp.c.cvs src\tool_hugehelp.c 1>NUL 2>&1
) else (
echo #include "tool_setup.h"> src\tool_hugehelp.c
echo #include "tool_hugehelp.hd">> src\tool_hugehelp.c
echo.>> src\tool_hugehelp.c
echo void hugehelp(void^)>> src\tool_hugehelp.c
echo {>> src\tool_hugehelp.c
echo #ifdef USE_MANUAL>> src\tool_hugehelp.c
echo fputs("Built-in manual not included\n", stdout^);>> src\tool_hugehelp.c
echo #endif>> src\tool_hugehelp.c
echo }>> src\tool_hugehelp.c
)
)
findstr "/C:void hugehelp(void)" src\tool_hugehelp.c 1>NUL 2>&1
if errorlevel 1 (
if "%OS%" == "Windows_NT" endlocal
exit /B 2
)
if "%BASIC%" == "1" (
if "%OS%" == "Windows_NT" endlocal
exit /B 1
)
if "%OS%" == "Windows_NT" endlocal
exit /B 0
rem Function to clean-up local variables under DOS, Windows 3.x and
rem Windows 9x as setlocal isn't available until Windows NT
rem
:dosCleanup
set MODE=
set HAVE_GROFF=
set HAVE_NROFF=
set HAVE_PERL=
set HAVE_GZIP=
set BASIC_HUGEHELP=
set LC_ALL=
set ROFFCMD=
set BASIC=
exit /B
:syntax
rem Display the help
echo.
echo Usage: buildconf [-clean]
echo.
echo -clean - Removes the files
goto error
:unknown
echo.
echo Error: Unknown argument '%1'
goto error
:norepo
echo.
echo Error: This batch file should only be used with a curl git repository
goto error
:nogenmakefile
echo.
echo Error: Unable to generate Makefile
goto error
:nogenhugehelp
echo.
echo Error: Unable to generate src\tool_hugehelp.c
goto error
:nogencurlbuild
echo.
echo Error: Unable to generate include\curl\curlbuild.h
goto error
:nocleanmakefile
echo.
echo Error: Unable to clean Makefile
goto error
:nocleanhugehelp
echo.
echo Error: Unable to clean src\tool_hugehelp.c
goto error
:nocleancurlbuild
echo.
echo Error: Unable to clean include\curl\curlbuild.h
goto error
:warning
echo.
echo Warning: The curl manual could not be integrated in the source. This means when
echo you build curl the manual will not be available (curl --manual^). Integration of
echo the manual is not required and a summary of the options will still be available
echo (curl --help^). To integrate the manual your PATH is required to have
echo groff/nroff, perl and optionally gzip for compression.
goto success
:error
if "%OS%" == "Windows_NT" (
endlocal
) else (
call :dosCleanup
)
exit /B 1
:success
if "%OS%" == "Windows_NT" (
endlocal
) else (
call :dosCleanup
)
exit /B 0

File diff suppressed because it is too large

View File

@@ -0,0 +1,178 @@
#! /bin/sh
#***************************************************************************
# _ _ ____ _
# Project ___| | | | _ \| |
# / __| | | | |_) | |
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
# Copyright (C) 2001 - 2012, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.haxx.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################
prefix=/home/ak/git/SUGAR/CPUMINER/sugarmaker/deps-win32/i686-w64-mingw32
exec_prefix=${prefix}
includedir=${prefix}/include
cppflag_curl_staticlib=-DCURL_STATICLIB
usage()
{
cat <<EOF
Usage: curl-config [OPTION]
Available values for OPTION include:
--built-shared says 'yes' if libcurl was built shared
--ca ca bundle install path
--cc compiler
--cflags pre-processor and compiler flags
--checkfor [version] check for (lib)curl of the specified version
--configure the arguments given to configure when building curl
--features newline separated list of enabled features
--help display this help and exit
--libs library linking information
--prefix curl install prefix
--protocols newline separated list of enabled protocols
--static-libs static libcurl library linking information
--version output version information
--vernum output the version information as a number (hexadecimal)
EOF
exit $1
}
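# Typical use when compiling a program against this libcurl build (the
# compiler and source file names below are placeholders, not part of this
# script):
#   cc -o app app.c `curl-config --cflags` `curl-config --libs`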
if test $# -eq 0; then
usage 1
fi
while test $# -gt 0; do
case "$1" in
# this deals with options in the style
# --option=value and extracts the value part
# [not currently used]
-*=*) value=`echo "$1" | sed 's/[-_a-zA-Z0-9]*=//'` ;;
*) value= ;;
esac
case "$1" in
--built-shared)
echo no
;;
--ca)
echo "/etc/ssl/certs/ca-certificates.crt"
;;
--cc)
echo "i686-w64-mingw32-gcc"
;;
--prefix)
echo "$prefix"
;;
--feature|--features)
for feature in SSL IPv6 SSPI SPNEGO Kerberos NTLM ""; do
test -n "$feature" && echo "$feature"
done
;;
--protocols)
for protocol in DICT FILE FTP FTPS GOPHER HTTP HTTPS IMAP IMAPS LDAP LDAPS POP3 POP3S RTSP SMB SMBS SMTP SMTPS TELNET TFTP; do
echo "$protocol"
done
;;
--version)
echo libcurl 7.54.1
exit 0
;;
--checkfor)
checkfor=$2
cmajor=`echo $checkfor | cut -d. -f1`
cminor=`echo $checkfor | cut -d. -f2`
# when extracting the patch part we strip off everything after a
# dash as that's used for things like version 1.2.3-CVS
cpatch=`echo $checkfor | cut -d. -f3 | cut -d- -f1`
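# pack major.minor.patch into one number (major*256*256 + minor*256 + patch)
# so it can be compared numerically against this curl's packed version below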
checknum=`echo "$cmajor*256*256 + $cminor*256 + ${cpatch:-0}" | bc`
numuppercase=`echo 073601 | tr 'a-f' 'A-F'`
nownum=`echo "obase=10; ibase=16; $numuppercase" | bc`
if test "$nownum" -ge "$checknum"; then
# silent success
exit 0
else
echo "requested version $checkfor is newer than existing 7.54.1"
exit 1
fi
;;
--vernum)
echo 073601
exit 0
;;
--help)
usage 0
;;
--cflags)
if test "X$cppflag_curl_staticlib" = "X-DCURL_STATICLIB"; then
CPPFLAG_CURL_STATICLIB="-DCURL_STATICLIB "
else
CPPFLAG_CURL_STATICLIB=""
fi
if test "X${prefix}/include" = "X/usr/include"; then
echo "$CPPFLAG_CURL_STATICLIB"
else
echo "${CPPFLAG_CURL_STATICLIB}-I${prefix}/include"
fi
;;
--libs)
if test "X${exec_prefix}/lib" != "X/usr/lib" -a "X${exec_prefix}/lib" != "X/usr/lib64"; then
CURLLIBDIR="-L${exec_prefix}/lib "
else
CURLLIBDIR=""
fi
if test "Xyes" = "Xyes"; then
echo ${CURLLIBDIR}-lcurl -lcrypt32 -lwldap32 -lws2_32
else
echo ${CURLLIBDIR}-lcurl
fi
;;
--static-libs)
if test "Xyes" != "Xno" ; then
echo ${exec_prefix}/lib/libcurl.a -lcrypt32 -lwldap32 -lws2_32
else
echo "curl was built with static libraries disabled" >&2
exit 1
fi
;;
--configure)
echo " '--host=i686-w64-mingw32' '--disable-shared' '--enable-static' '--with-winssl' '--prefix=/home/ak/git/SUGAR/CPUMINER/sugarmaker/deps-win32/i686-w64-mingw32' 'host_alias=i686-w64-mingw32'"
;;
*)
echo "unknown option: $1"
usage 1
;;
esac
shift
done
exit 0

View File

@@ -0,0 +1,178 @@
#! /bin/sh
#***************************************************************************
# _ _ ____ _
# Project ___| | | | _ \| |
# / __| | | | |_) | |
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
# Copyright (C) 2001 - 2012, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.haxx.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################
prefix=@prefix@
exec_prefix=@exec_prefix@
includedir=@includedir@
cppflag_curl_staticlib=@CPPFLAG_CURL_STATICLIB@
usage()
{
cat <<EOF
Usage: curl-config [OPTION]
Available values for OPTION include:
--built-shared says 'yes' if libcurl was built shared
--ca ca bundle install path
--cc compiler
--cflags pre-processor and compiler flags
--checkfor [version] check for (lib)curl of the specified version
--configure the arguments given to configure when building curl
--features newline separated list of enabled features
--help display this help and exit
--libs library linking information
--prefix curl install prefix
--protocols newline separated list of enabled protocols
--static-libs static libcurl library linking information
--version output version information
--vernum output the version information as a number (hexadecimal)
EOF
exit $1
}
if test $# -eq 0; then
usage 1
fi
while test $# -gt 0; do
case "$1" in
# this deals with options in the style
# --option=value and extracts the value part
# [not currently used]
-*=*) value=`echo "$1" | sed 's/[-_a-zA-Z0-9]*=//'` ;;
*) value= ;;
esac
case "$1" in
--built-shared)
echo @ENABLE_SHARED@
;;
--ca)
echo @CURL_CA_BUNDLE@
;;
--cc)
echo "@CC@"
;;
--prefix)
echo "$prefix"
;;
--feature|--features)
for feature in @SUPPORT_FEATURES@ ""; do
test -n "$feature" && echo "$feature"
done
;;
--protocols)
for protocol in @SUPPORT_PROTOCOLS@; do
echo "$protocol"
done
;;
--version)
echo libcurl @CURLVERSION@
exit 0
;;
--checkfor)
checkfor=$2
cmajor=`echo $checkfor | cut -d. -f1`
cminor=`echo $checkfor | cut -d. -f2`
# when extracting the patch part we strip off everything after a
# dash as that's used for things like version 1.2.3-CVS
cpatch=`echo $checkfor | cut -d. -f3 | cut -d- -f1`
checknum=`echo "$cmajor*256*256 + $cminor*256 + ${cpatch:-0}" | bc`
numuppercase=`echo @VERSIONNUM@ | tr 'a-f' 'A-F'`
nownum=`echo "obase=10; ibase=16; $numuppercase" | bc`
if test "$nownum" -ge "$checknum"; then
# silent success
exit 0
else
echo "requested version $checkfor is newer than existing @CURLVERSION@"
exit 1
fi
;;
--vernum)
echo @VERSIONNUM@
exit 0
;;
--help)
usage 0
;;
--cflags)
if test "X$cppflag_curl_staticlib" = "X-DCURL_STATICLIB"; then
CPPFLAG_CURL_STATICLIB="-DCURL_STATICLIB "
else
CPPFLAG_CURL_STATICLIB=""
fi
if test "X@includedir@" = "X/usr/include"; then
echo "$CPPFLAG_CURL_STATICLIB"
else
echo "${CPPFLAG_CURL_STATICLIB}-I@includedir@"
fi
;;
--libs)
if test "X@libdir@" != "X/usr/lib" -a "X@libdir@" != "X/usr/lib64"; then
CURLLIBDIR="-L@libdir@ "
else
CURLLIBDIR=""
fi
if test "X@REQUIRE_LIB_DEPS@" = "Xyes"; then
echo ${CURLLIBDIR}-lcurl @LIBCURL_LIBS@
else
echo ${CURLLIBDIR}-lcurl
fi
;;
--static-libs)
if test "X@ENABLE_STATIC@" != "Xno" ; then
echo @libdir@/libcurl.@libext@ @LDFLAGS@ @LIBCURL_LIBS@
else
echo "curl was built with static libraries disabled" >&2
exit 1
fi
;;
--configure)
echo @CONFIGURE_OPTIONS@
;;
*)
echo "unknown option: $1"
usage 1
;;
esac
shift
done
exit 0

View File

@@ -0,0 +1,118 @@
libcurl bindings
================
Creative people have written bindings or interfaces for various environments
and programming languages. Using one of these allows you to take advantage of
curl's powers from within your favourite language or system.
This is a list of all known interfaces as of this writing.
The bindings listed below are not part of the curl/libcurl distribution
archives, but must be downloaded and installed separately.
[Ada95](http://www.almroth.com/adacurl/index.html) Written by Andreas Almroth
[Basic](http://scriptbasic.com/) ScriptBasic bindings written by Peter Verhas
C++: [curlpp](http://curlpp.org/) Written by Jean-Philippe Barrette-LaPierre,
[curlcpp](https://github.com/JosephP91/curlcpp) by Giuseppe Persico and [C++
Requests](https://github.com/whoshuu/cpr) by Huu Nguyen
[Ch](https://chcurl.sourceforge.io/) Written by Stephen Nestinger and Jonathan Rogado
Cocoa: [BBHTTP](https://github.com/brunodecarvalho/BBHTTP) written by Bruno de Carvalho
[curlhandle](https://github.com/karelia/curlhandle) Written by Dan Wood
[D](https://dlang.org/library/std/net/curl.html) Written by Kenneth Bogert
[Delphi](https://github.com/Mercury13/curl4delphi) Written by Mikhail Merkuryev
[Dylan](https://dylanlibs.sourceforge.io/) Written by Chris Double
[Eiffel](https://room.eiffel.com/library/curl) Written by Eiffel Software
[Euphoria](http://rays-web.com/eulibcurl.htm) Written by Ray Smith
[Falcon](http://www.falconpl.org/index.ftd?page_id=prjs&prj_id=curl)
[Ferite](http://www.ferite.org/) Written by Paul Querna
[Gambas](https://gambas.sourceforge.io/)
[glib/GTK+](http://atterer.net/glibcurl/) Written by Richard Atterer
Go: [go-curl](https://github.com/andelf/go-curl) by ShuYu Wang
[Guile](http://www.lonelycactus.com/guile-curl.html) Written by Michael L. Gran
[Harbour](https://github.com/vszakats/harbour-core/tree/master/contrib/hbcurl) Written by Viktor Szakáts
[Haskell](https://hackage.haskell.org/cgi-bin/hackage-scripts/package/curl) Written by Galois, Inc
[Java](https://github.com/pjlegato/curl-java)
[Julia](https://github.com/forio/Curl.jl) Written by Paul Howe
[Lisp](https://common-lisp.net/project/cl-curl/) Written by Liam Healy
Lua: [luacurl](http://luacurl.luaforge.net/) by Alexander Marinov, [Lua-cURL](https://github.com/Lua-cURL) by Jürgen Hötzel
[Mono](https://forge.novell.com/modules/xfmod/project/?libcurl-mono) Written by Jeffrey Phillips
[.NET](https://sourceforge.net/projects/libcurl-net/) libcurl-net by Jeffrey Phillips
[node.js](https://github.com/JCMais/node-libcurl) node-libcurl by Jonathan Cardoso Machado
[Object-Pascal](http://www.tekool.com/opcurl) Free Pascal, Delphi and Kylix binding written by Christophe Espern.
[OCaml](http://opam.ocaml.org/packages/ocurl/) Written by Lars Nilsson and ygrek
[Pascal](http://houston.quik.com/jkp/curlpas/) Free Pascal, Delphi and Kylix binding written by Jeffrey Pohlmeyer.
Perl: [WWW--Curl](https://github.com/szbalint/WWW--Curl) Maintained by Cris
Bailiff and Bálint Szilakszi,
[perl6-net-curl](https://github.com/azawawi/perl6-net-curl) by Ahmad M. Zawawi
[PHP](https://php.net/curl) Originally written by Sterling Hughes
[PostgreSQL](http://gborg.postgresql.org/project/pgcurl/projdisplay.php) Written by Gian Paolo Ciceri
[Python](http://pycurl.io/) PycURL by Kjetil Jacobsen
[R](https://cran.r-project.org/package=curl)
[Rexx](https://rexxcurl.sourceforge.io/) Written by Mark Hessling
[Ring](http://ring-lang.sourceforge.net/doc1.3/libcurl.html) RingLibCurl by Mahmoud Fayed
RPG, support for ILE/RPG on OS/400 is included in source distribution
Ruby: [curb](http://curb.rubyforge.org/) written by Ross Bamford, [ruby-curl-multi](http://curl-multi.rubyforge.org/) written by Kristjan Petursson and Keith Rarick
[Rust](https://github.com/carllerche/curl-rust) curl-rust - by Carl Lerche
[Scheme](https://www.metapaper.net/lisovsky/web/curl/) Bigloo binding by Kirill Lisovsky
[Scilab](https://help.scilab.org/docs/current/fr_FR/getURL.html) binding by Sylvestre Ledru
[S-Lang](http://www.jedsoft.org/slang/modules/curl.html) by John E Davis
[Smalltalk](http://www.squeaksource.com/CurlPlugin/) Written by Danil Osipchuk
[SP-Forth](http://spf.cvs.sourceforge.net/viewvc/spf/devel/~ac/lib/lin/curl/) Written by Andrey Cherezov
[SPL](http://www.clifford.at/spl/) Written by Clifford Wolf
[Tcl](http://mirror.yellow5.com/tclcurl/) Tclcurl by Andrés García
[Visual Basic](https://sourceforge.net/projects/libcurl-vb/) libcurl-vb by Jeffrey Phillips
[Visual Foxpro](http://www.ctl32.com.ar/libcurl.asp) by Carlos Alloatti
[Q](https://q-lang.sourceforge.io/) The libcurl module is part of the default install
[wxWidgets](https://wxcode.sourceforge.io/components/wxcurl/) Written by Casey O'Donnell
[XBLite](http://perso.wanadoo.fr/xblite/libraries.html) Written by David Szafranski
[Xojo](https://github.com/charonn0/RB-libcURL) Written by Andrew Lambert

View File

@@ -0,0 +1,280 @@
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
BUGS
1. Bugs
1.1 There are still bugs
1.2 Where to report
1.3 What to report
1.4 libcurl problems
1.5 Who will fix the problems
1.6 How to get a stack trace
1.7 Bugs in libcurl bindings
1.8 Bugs in old versions
2. Bug fixing procedure
2.1 What happens on first filing
2.2 First response
2.3 Not reproducible
2.4 Unresponsive
2.5 Lack of time/interest
2.6 KNOWN_BUGS
2.7 TODO
2.8 Closing off stalled bugs
==============================================================================
1.1 There are still bugs
Curl and libcurl have grown substantially since the beginning. At the time
of writing (January 2013), there are about 83,000 lines of source code, and
by the time you read this it has probably grown even more.
Of course there are lots of bugs left. And lots of misfeatures.
To help us make curl the stable and solid product we want it to be, we need
bug reports and bug fixes.
1.2 Where to report
If you can't fix a bug yourself and submit a fix for it, try to submit as
detailed a report as possible to a curl mailing list to allow one of us to
have a go at a solution. You can optionally also post your bug/problem at
curl's bug tracking system over at
https://github.com/curl/curl/issues
Please read the rest of this document below first before doing that!
If you feel you need to ask around first, find a suitable mailing list and
post there. The lists are available on https://curl.haxx.se/mail/
1.3 What to report
When reporting a bug, you should include all information that will help us
understand what's wrong, what you expected to happen and how to repeat the
bad behavior. You therefore need to tell us:
- your operating system's name and version number
- what version of curl you're using (curl -V is fine)
- versions of the used libraries that libcurl is built to use
- what URL you were working with (if possible), at least which protocol
and anything and everything else you think matters. Tell us what you
expected to happen, tell us what did happen, tell us how you could make it
work another way. Dig around, try out, test. Then include all the tiny bits
and pieces in your report. You will benefit from this yourself, as it will
enable us to help you quicker and more accurately.
Since curl deals with networks, it often helps us if you include a protocol
debug dump with your bug report. The output you get by using the -v or
--trace options.
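For instance (the URL below is only a placeholder):
  curl -v https://example.com/ -o /dev/null
  curl --trace dump.txt https://example.com/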
If curl crashed, causing a core dump (in unix), there is hardly any use in
sending that huge file to any of us. Unless we have the exact same system
setup as you, we can't do much with it. Instead we ask you to get a stack
trace and send that (much smaller) output to us instead!
The address and how to subscribe to the mailing lists are detailed in the
MANUAL file.
1.4 libcurl problems
When you've written your own application with libcurl to perform transfers,
it is even more important to be specific and detailed when reporting bugs.
Tell us the libcurl version and your operating system. Tell us the name and
version of all relevant sub-components like for example the SSL library
you're using and what name resolving your libcurl uses. If you use SFTP or
SCP, the libssh2 version is relevant etc.
Showing us a real source code example repeating your problem is the best way
to get our attention and it will greatly increase our chances to understand
your problem and to work on a fix (if we agree it truly is a problem).
Lots of problems that appear to be libcurl problems are actually just abuses
of the libcurl API or other malfunctions in your applications. It is advised
that you run your problematic program using a memory debug tool like
valgrind or similar before you post memory-related or "crashing" problems to
us.
1.5 Who will fix the problems
If the problems or bugs you describe are considered to be bugs, we want to
have the problems fixed.
There are no developers in the curl project that are paid to work on bugs.
All developers that take on reported bugs do this on a voluntary basis. We
do it out of an ambition to keep curl and libcurl excellent products and out
of pride.
But please do not assume that you can just lump over something to us and it
will then magically be fixed after some given time. Most often we need
feedback and help to understand what you've experienced and how to repeat a
problem. Then we may only be able to assist YOU to debug the problem and to
track down the proper fix.
We get reports from many people every month and each report can take a
considerable amount of time to really get to the bottom of.
1.6 How to get a stack trace
First, you must make sure that you compile all sources with -g and that you
don't 'strip' the final executable. Try to avoid optimizing the code as
well, remove -O, -O2 etc from the compiler options.
Run the program until it cores.
Run your debugger on the core file, like '<debugger> curl core'. <debugger>
should be replaced with the name of your debugger, in most cases that will
be 'gdb', but 'dbx' and others also occur.
When the debugger has finished loading the core file and presents you a
prompt, enter 'where' (without the quotes) and press return.
The list that is presented is the stack trace. If everything worked, it is
supposed to contain the chain of functions that were called when curl
crashed. Include the stack trace with your detailed bug report. It'll help a
lot.
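A rough sketch of the whole sequence (configure flags, binary path and core
file name all vary between systems):
  ./configure CFLAGS="-g -O0" && make   # build with debug info, no optimization
  ./src/curl https://example.com/       # run until it dumps core
  gdb ./src/curl core                   # load the core file in the debugger
  (gdb) where                           # print the stack trace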
1.7 Bugs in libcurl bindings
There will of course pop up bugs in libcurl bindings. You should then
primarily approach the team that works on that particular binding and see
what you can do to help them fix the problem.
If you suspect that the problem exists in the underlying libcurl, then
please convert your program over to plain C and follow the steps outlined
above.
1.8 Bugs in old versions
The curl project typically releases new versions every other month, and we
fix several hundred bugs per year. For a huge table of releases, number of
bug fixes and more, see: https://curl.haxx.se/docs/releases.html
The developers in the curl project do not have bandwidth or energy enough to
maintain several branches or to spend much time on hunting down problems in
old versions when chances are we already fixed them or at least that they've
changed nature and appearance in later versions.
When you experience a problem and want to report it, you really SHOULD
include the version number of the curl you're using when you experience the
issue. If that version number shows us that you're using an out-of-date
curl, you should also try out a modern curl version to see if the problem
persists or how/if it has changed in appearance.
Even if you cannot immediately upgrade your application/system to run the
latest curl version, you can most often at least run a test version or
experimental build or similar, to get this confirmed or not.
At times people insist that they cannot upgrade to a modern curl version,
but instead they "just want the bug fixed". That's fine, just don't count on
us spending many cycles on trying to identify which single commit, if that's
even possible, that at some point in the past fixed the problem you're now
experiencing.
Security-wise, it is almost always a bad idea to lag behind the current curl
versions by a lot. We keep discovering and reporting security problems
over time, as you can see in this table:
https://curl.haxx.se/docs/vulnerabilities.html
2. Bug fixing procedure
2.1 What happens on first filing
When a new issue is posted in the issue tracker or on the mailing list, the
team of developers first need to see the report. Maybe they took the day
off, maybe they're off in the woods hunting. Have patience. Allow at least a
few days before expecting someone to have responded.
In the issue tracker you can expect that some labels will be set on the
issue to help categorize it.
2.2 First response
If your issue/bug report wasn't perfect at once (and few are), chances are
that someone will ask follow-up questions. Which version did you use? Which
options did you use? How often does the problem occur? How can we reproduce
this problem? Which protocols does it involve? Or perhaps much more specific
and deep diving questions. It all depends on your specific issue.
You should then respond to these follow-up questions and provide more info
about the problem, so that we can help you figure it out. Or maybe you can
help us figure it out. An active back-and-forth communication is important
and the key for finding a cure and landing a fix.
2.3 Not reproducible
Problems that we can't reproduce and can't understand even after having
gotten all the info we need and having studied the source code over again
are really hard to solve, so we may require further work from you, who
actually see or experience the problem.
2.4 Unresponsive
If the problem hasn't been understood or reproduced, and there's nobody
responding to follow-up questions or questions asking for clarifications or
for discussing possible ways to move forward with the task, we take that as
a strong suggestion that the bug is not important.
Unimportant issues will be closed as inactive sooner or later as they can't
be fixed. The inactivity period (waiting for responses) should not be
shorter than two weeks but may extend months.
2.5 Lack of time/interest
Bugs that are filed and are understood can unfortunately end up in the
"nobody cares enough about it to work on it" category. Such bugs are
perfectly valid problems that *should* get fixed but apparently aren't. We
try to mark such bugs as "KNOWN_BUGS material" after a time of inactivity
and if no activity is noticed after yet some time those bugs are added to
KNOWN_BUGS and are closed in the issue tracker.
2.6 KNOWN_BUGS
This is a list of known bugs. Bugs we know exist and that have been pointed
out but that haven't yet been fixed. The reasons for why they haven't been
fixed can involve anything really, but the primary reason is that nobody has
considered these problems to be important enough to spend the necessary time
and effort to have them fixed.
The KNOWN_BUGS are always up for grabs and we will always love the ones who
bring one of them back to life and offer solutions to them.
The KNOWN_BUGS document has a sibling document known as TODO.
2.7 TODO
Issues that are filed or reported that aren't really bugs but more missing
features or ideas for future improvements and so on are marked as
'enhancement' or 'feature-request' and will be added to the TODO document
instead and the issue is closed. We don't keep TODO items in the issue
tracker.
The TODO document is full of ideas and suggestions of what we can add or fix
one day. You're always encouraged and free to grab one of those items and
take up a discussion with the curl development team on how that could be
implemented or provided in the project so that you can work on ticking it
off that document.
If the issue is rather a bug and not a missing feature or functionality, it
is listed in KNOWN_BUGS instead.
2.8 Closing off stalled bugs
The issue and pull request trackers on https://github.com/curl/curl will
only hold "active" entries (using a non-precise definition of what active
actually is, but they're at least not completely dead). Those that are
abandoned or in other ways dormant will be closed and sometimes added to
TODO and KNOWN_BUGS instead.
This way, we only have "active" issues open on github. Irrelevant issues and
pull requests will not distract developers or casual visitors.

View File

@@ -0,0 +1,124 @@
# checksrc
This is the tool we use within the curl project to scan C source code and
check that it adheres to our [Source Code Style guide](CODE_STYLE.md).
## Usage
checksrc.pl [options] [file1] [file2] ...
## Command line options
`-W[file]` whitelists that file and excludes it from being checked. Helpful
when, for example, one of the files is generated.
`-D[dir]` directory name to prepend to file names when accessing them.
`-h` shows the help output, that also lists all recognized warnings
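For instance, a plausible invocation from inside the `lib` directory, using the
options above to skip a generated header (the file names are only
illustrative):
    perl ./checksrc.pl -Wcurl_config.h url.c dict.c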
## What does checksrc warn for?
checksrc does not check and verify the code against the entire style guide,
but the script is instead an effort to detect the most common mistakes and
syntax mistakes that contributors make before they get accustomed to our code
style. Heck, many of us regulars make these mistakes too and this script helps us
keep the code in shape.
checksrc.pl -h
Lists how to use the script and it lists all existing warnings it has and
problems it detects. At the time of this writing, the existing checksrc
warnings are:
- `BADCOMMAND`: There's a bad !checksrc! instruction in the code. See the
**Ignore certain warnings** section below for details.
- `BANNEDFUNC`: A banned function was used. The functions sprintf, vsprintf,
strcat, strncat, gets are **never** allowed in curl source code.
- `BRACEELSE`: '} else' on the same line. The else is supposed to be on the
following line.
- `BRACEPOS`: wrong position for an open brace (`{`).
- `COMMANOSPACE`: a comma without following space
- `COPYRIGHT`: the file is missing a copyright statement!
- `CPPCOMMENTS`: `//` comment detected, that's not C89 compliant
- `FOPENMODE`: `fopen()` needs a macro for the mode string, use it
- `INDENTATION`: detected a wrong start column for code. Note that this warning
only checks some specific places and will certainly miss many bad
indentations.
- `LONGLINE`: A line is longer than 79 columns.
- `PARENBRACE`: `){` was used without sufficient space in between.
- `RETURNNOSPACE`: `return` was used without space between the keyword and the
following value.
- `SPACEAFTERPAREN`: there was a space after open parenthesis, `( text`.
- `SPACEBEFORECLOSE`: there was a space before a close parenthesis, `text )`.
- `SPACEBEFORECOMMA`: there was a space before a comma, `one , two`.
- `SPACEBEFOREPAREN`: there was a space before an open parenthesis, `if (`,
where one was not expected
- `SPACESEMILCOLON`: there was a space before semicolon, ` ;`.
- `TABS`: TAB characters are not allowed!
- `TRAILINGSPACE`: Trailing white space on the line
- `UNUSEDIGNORE`: a checksrc inlined warning ignore was asked for but not used,
that's an ignore that should be removed or changed to get used.
## Ignore certain warnings
Due to the nature of the source code and the flaws of the checksrc tool, there
is sometimes a need to ignore specific warnings. checksrc allows a few
different ways to do this.
### Inline ignore
You can control what to ignore within a specific source file by providing
instructions to checksrc in the source code itself. You need a magic marker
that is `!checksrc!` followed by the instruction. The instruction can ask to
ignore a specific warning N number of times or you ignore all of them until
you mark the end of the ignored section.
Inline ignores are only done for that single specific source code file.
Example
/* !checksrc! disable LONGLINE all */
This will ignore the warning for overly long lines until it is re-enabled with:
/* !checksrc! enable LONGLINE */
If the enabling isn't performed before the end of the file, it will be enabled
automatically for the next file.
You can also opt to ignore just N violations so that if you have a single long
line you just can't shorten and that is agreed to be fine anyway:
/* !checksrc! disable LONGLINE 1 */
... and the warning for long lines will be enabled again automatically after
it has ignored that single warning. The number `1` can of course be changed to
any other integer number. It can be used to make sure only the exact intended
instances are ignored and nothing extra.
### Directory wide ignore patterns
This is a method we've transitioned away from. Use inline ignores as far as
possible.
Make a `checksrc.whitelist` file in the directory of the source code with the
false positive, and include the full offending line into this file.

View File

@@ -0,0 +1,426 @@
# Ciphers
With curl's options `CURLOPT_SSL_CIPHER_LIST` and `--ciphers` users can
control which ciphers to consider when negotiating TLS connections.
The names of the known ciphers differ depending on which TLS backend
libcurl was built to use. This is an attempt to list known cipher names.
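For example, with an OpenSSL-backed build you could restrict curl to two of the
TLS 1.2 suites listed below (the URL is only a placeholder):
    curl --ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384 https://example.com/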
## OpenSSL
(based on [OpenSSL docs](https://www.openssl.org/docs/man1.1.0/apps/ciphers.html))
### SSL3 cipher suites
`NULL-MD5`
`NULL-SHA`
`RC4-MD5`
`RC4-SHA`
`IDEA-CBC-SHA`
`DES-CBC3-SHA`
`DH-DSS-DES-CBC3-SHA`
`DH-RSA-DES-CBC3-SHA`
`DHE-DSS-DES-CBC3-SHA`
`DHE-RSA-DES-CBC3-SHA`
`ADH-RC4-MD5`
`ADH-DES-CBC3-SHA`
### TLS v1.0 cipher suites
`NULL-MD5`
`NULL-SHA`
`RC4-MD5`
`RC4-SHA`
`IDEA-CBC-SHA`
`DES-CBC3-SHA`
`DHE-DSS-DES-CBC3-SHA`
`DHE-RSA-DES-CBC3-SHA`
`ADH-RC4-MD5`
`ADH-DES-CBC3-SHA`
### AES ciphersuites from RFC3268, extending TLS v1.0
`AES128-SHA`
`AES256-SHA`
`DH-DSS-AES128-SHA`
`DH-DSS-AES256-SHA`
`DH-RSA-AES128-SHA`
`DH-RSA-AES256-SHA`
`DHE-DSS-AES128-SHA`
`DHE-DSS-AES256-SHA`
`DHE-RSA-AES128-SHA`
`DHE-RSA-AES256-SHA`
`ADH-AES128-SHA`
`ADH-AES256-SHA`
### SEED ciphersuites from RFC4162, extending TLS v1.0
`SEED-SHA`
`DH-DSS-SEED-SHA`
`DH-RSA-SEED-SHA`
`DHE-DSS-SEED-SHA`
`DHE-RSA-SEED-SHA`
`ADH-SEED-SHA`
### GOST ciphersuites, extending TLS v1.0
`GOST94-GOST89-GOST89`
`GOST2001-GOST89-GOST89`
`GOST94-NULL-GOST94`
`GOST2001-NULL-GOST94`
### Elliptic curve cipher suites
`ECDHE-RSA-NULL-SHA`
`ECDHE-RSA-RC4-SHA`
`ECDHE-RSA-DES-CBC3-SHA`
`ECDHE-RSA-AES128-SHA`
`ECDHE-RSA-AES256-SHA`
`ECDHE-ECDSA-NULL-SHA`
`ECDHE-ECDSA-RC4-SHA`
`ECDHE-ECDSA-DES-CBC3-SHA`
`ECDHE-ECDSA-AES128-SHA`
`ECDHE-ECDSA-AES256-SHA`
`AECDH-NULL-SHA`
`AECDH-RC4-SHA`
`AECDH-DES-CBC3-SHA`
`AECDH-AES128-SHA`
`AECDH-AES256-SHA`
### TLS v1.2 cipher suites
`NULL-SHA256`
`AES128-SHA256`
`AES256-SHA256`
`AES128-GCM-SHA256`
`AES256-GCM-SHA384`
`DH-RSA-AES128-SHA256`
`DH-RSA-AES256-SHA256`
`DH-RSA-AES128-GCM-SHA256`
`DH-RSA-AES256-GCM-SHA384`
`DH-DSS-AES128-SHA256`
`DH-DSS-AES256-SHA256`
`DH-DSS-AES128-GCM-SHA256`
`DH-DSS-AES256-GCM-SHA384`
`DHE-RSA-AES128-SHA256`
`DHE-RSA-AES256-SHA256`
`DHE-RSA-AES128-GCM-SHA256`
`DHE-RSA-AES256-GCM-SHA384`
`DHE-DSS-AES128-SHA256`
`DHE-DSS-AES256-SHA256`
`DHE-DSS-AES128-GCM-SHA256`
`DHE-DSS-AES256-GCM-SHA384`
`ECDHE-RSA-AES128-SHA256`
`ECDHE-RSA-AES256-SHA384`
`ECDHE-RSA-AES128-GCM-SHA256`
`ECDHE-RSA-AES256-GCM-SHA384`
`ECDHE-ECDSA-AES128-SHA256`
`ECDHE-ECDSA-AES256-SHA384`
`ECDHE-ECDSA-AES128-GCM-SHA256`
`ECDHE-ECDSA-AES256-GCM-SHA384`
`ADH-AES128-SHA256`
`ADH-AES256-SHA256`
`ADH-AES128-GCM-SHA256`
`ADH-AES256-GCM-SHA384`
`AES128-CCM`
`AES256-CCM`
`DHE-RSA-AES128-CCM`
`DHE-RSA-AES256-CCM`
`AES128-CCM8`
`AES256-CCM8`
`DHE-RSA-AES128-CCM8`
`DHE-RSA-AES256-CCM8`
`ECDHE-ECDSA-AES128-CCM`
`ECDHE-ECDSA-AES256-CCM`
`ECDHE-ECDSA-AES128-CCM8`
`ECDHE-ECDSA-AES256-CCM8`
### Camellia HMAC-Based ciphersuites from RFC6367, extending TLS v1.2
`ECDHE-ECDSA-CAMELLIA128-SHA256`
`ECDHE-ECDSA-CAMELLIA256-SHA384`
`ECDHE-RSA-CAMELLIA128-SHA256`
`ECDHE-RSA-CAMELLIA256-SHA384`
## NSS
### Totally insecure
`rc4`
`rc4-md5`
`rc4export`
`rc2`
`rc2export`
`des`
`desede3`
### SSL3/TLS cipher suites
`rsa_rc4_128_md5`
`rsa_rc4_128_sha`
`rsa_3des_sha`
`rsa_des_sha`
`rsa_rc4_40_md5`
`rsa_rc2_40_md5`
`rsa_null_md5`
`rsa_null_sha`
`fips_3des_sha`
`fips_des_sha`
`fortezza`
`fortezza_rc4_128_sha`
`fortezza_null`
### TLS 1.0 Exportable 56-bit Cipher Suites
`rsa_des_56_sha`
`rsa_rc4_56_sha`
### AES ciphers
`dhe_dss_aes_128_cbc_sha`
`dhe_dss_aes_256_cbc_sha`
`dhe_rsa_aes_128_cbc_sha`
`dhe_rsa_aes_256_cbc_sha`
`rsa_aes_128_sha`
`rsa_aes_256_sha`
### ECC ciphers
`ecdh_ecdsa_null_sha`
`ecdh_ecdsa_rc4_128_sha`
`ecdh_ecdsa_3des_sha`
`ecdh_ecdsa_aes_128_sha`
`ecdh_ecdsa_aes_256_sha`
`ecdhe_ecdsa_null_sha`
`ecdhe_ecdsa_rc4_128_sha`
`ecdhe_ecdsa_3des_sha`
`ecdhe_ecdsa_aes_128_sha`
`ecdhe_ecdsa_aes_256_sha`
`ecdh_rsa_null_sha`
`ecdh_rsa_128_sha`
`ecdh_rsa_3des_sha`
`ecdh_rsa_aes_128_sha`
`ecdh_rsa_aes_256_sha`
`ecdhe_rsa_null`
`ecdhe_rsa_rc4_128_sha`
`ecdhe_rsa_3des_sha`
`ecdhe_rsa_aes_128_sha`
`ecdhe_rsa_aes_256_sha`
`ecdh_anon_null_sha`
`ecdh_anon_rc4_128sha`
`ecdh_anon_3des_sha`
`ecdh_anon_aes_128_sha`
`ecdh_anon_aes_256_sha`
### HMAC-SHA256 cipher suites
`rsa_null_sha_256`
`rsa_aes_128_cbc_sha_256`
`rsa_aes_256_cbc_sha_256`
`dhe_rsa_aes_128_cbc_sha_256`
`dhe_rsa_aes_256_cbc_sha_256`
`ecdhe_ecdsa_aes_128_cbc_sha_256`
`ecdhe_rsa_aes_128_cbc_sha_256`
### AES GCM cipher suites in RFC 5288 and RFC 5289
`rsa_aes_128_gcm_sha_256`
`dhe_rsa_aes_128_gcm_sha_256`
`dhe_dss_aes_128_gcm_sha_256`
`ecdhe_ecdsa_aes_128_gcm_sha_256`
`ecdh_ecdsa_aes_128_gcm_sha_256`
`ecdhe_rsa_aes_128_gcm_sha_256`
`ecdh_rsa_aes_128_gcm_sha_256`
### cipher suites using SHA384
`rsa_aes_256_gcm_sha_384`
`dhe_rsa_aes_256_gcm_sha_384`
`dhe_dss_aes_256_gcm_sha_384`
`ecdhe_ecdsa_aes_256_sha_384`
`ecdhe_rsa_aes_256_sha_384`
`ecdhe_ecdsa_aes_256_gcm_sha_384`
`ecdhe_rsa_aes_256_gcm_sha_384`
### chacha20-poly1305 cipher suites
`ecdhe_rsa_chacha20_poly1305_sha_256`
`ecdhe_ecdsa_chacha20_poly1305_sha_256`
`dhe_rsa_chacha20_poly1305_sha_256`
## GSKit
Ciphers are internally defined as numeric codes (http://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/apis/gsk_attribute_set_buffer.htm),
but libcurl maps them to the following case-insensitive names.
### SSL2 cipher suites (insecure: disabled by default)
`rc2-md5`
`rc4-md5`
`exp-rc2-md5`
`exp-rc4-md5`
`des-cbc-md5`
`des-cbc3-md5`
### SSL3 cipher suites
`null-md5`
`null-sha`
`rc4-md5`
`rc4-sha`
`exp-rc2-cbc-md5`
`exp-rc4-md5`
`exp-des-cbc-sha`
`des-cbc3-sha`
### TLS v1.0 cipher suites
`null-md5`
`null-sha`
`rc4-md5`
`rc4-sha`
`exp-rc2-cbc-md5`
`exp-rc4-md5`
`exp-des-cbc-sha`
`des-cbc3-sha`
`aes128-sha`
`aes256-sha`
### TLS v1.1 cipher suites
`null-md5`
`null-sha`
`rc4-md5`
`rc4-sha`
`exp-des-cbc-sha`
`des-cbc3-sha`
`aes128-sha`
`aes256-sha`
### TLS v1.2 cipher suites
`null-md5`
`null-sha`
`null-sha256`
`rc4-md5`
`rc4-sha`
`des-cbc3-sha`
`aes128-sha`
`aes256-sha`
`aes128-sha256`
`aes256-sha256`
`aes128-gcm-sha256`
`aes256-gcm-sha384`
## WolfSSL
`RC4-SHA`,
`RC4-MD5`,
`DES-CBC3-SHA`,
`AES128-SHA`,
`AES256-SHA`,
`NULL-SHA`,
`NULL-SHA256`,
`DHE-RSA-AES128-SHA`,
`DHE-RSA-AES256-SHA`,
`DHE-PSK-AES256-GCM-SHA384`,
`DHE-PSK-AES128-GCM-SHA256`,
`PSK-AES256-GCM-SHA384`,
`PSK-AES128-GCM-SHA256`,
`DHE-PSK-AES256-CBC-SHA384`,
`DHE-PSK-AES128-CBC-SHA256`,
`PSK-AES256-CBC-SHA384`,
`PSK-AES128-CBC-SHA256`,
`PSK-AES128-CBC-SHA`,
`PSK-AES256-CBC-SHA`,
`DHE-PSK-AES128-CCM`,
`DHE-PSK-AES256-CCM`,
`PSK-AES128-CCM`,
`PSK-AES256-CCM`,
`PSK-AES128-CCM-8`,
`PSK-AES256-CCM-8`,
`DHE-PSK-NULL-SHA384`,
`DHE-PSK-NULL-SHA256`,
`PSK-NULL-SHA384`,
`PSK-NULL-SHA256`,
`PSK-NULL-SHA`,
`HC128-MD5`,
`HC128-SHA`,
`HC128-B2B256`,
`AES128-B2B256`,
`AES256-B2B256`,
`RABBIT-SHA`,
`NTRU-RC4-SHA`,
`NTRU-DES-CBC3-SHA`,
`NTRU-AES128-SHA`,
`NTRU-AES256-SHA`,
`AES128-CCM-8`,
`AES256-CCM-8`,
`ECDHE-ECDSA-AES128-CCM`,
`ECDHE-ECDSA-AES128-CCM-8`,
`ECDHE-ECDSA-AES256-CCM-8`,
`ECDHE-RSA-AES128-SHA`,
`ECDHE-RSA-AES256-SHA`,
`ECDHE-ECDSA-AES128-SHA`,
`ECDHE-ECDSA-AES256-SHA`,
`ECDHE-RSA-RC4-SHA`,
`ECDHE-RSA-DES-CBC3-SHA`,
`ECDHE-ECDSA-RC4-SHA`,
`ECDHE-ECDSA-DES-CBC3-SHA`,
`AES128-SHA256`,
`AES256-SHA256`,
`DHE-RSA-AES128-SHA256`,
`DHE-RSA-AES256-SHA256`,
`ECDH-RSA-AES128-SHA`,
`ECDH-RSA-AES256-SHA`,
`ECDH-ECDSA-AES128-SHA`,
`ECDH-ECDSA-AES256-SHA`,
`ECDH-RSA-RC4-SHA`,
`ECDH-RSA-DES-CBC3-SHA`,
`ECDH-ECDSA-RC4-SHA`,
`ECDH-ECDSA-DES-CBC3-SHA`,
`AES128-GCM-SHA256`,
`AES256-GCM-SHA384`,
`DHE-RSA-AES128-GCM-SHA256`,
`DHE-RSA-AES256-GCM-SHA384`,
`ECDHE-RSA-AES128-GCM-SHA256`,
`ECDHE-RSA-AES256-GCM-SHA384`,
`ECDHE-ECDSA-AES128-GCM-SHA256`,
`ECDHE-ECDSA-AES256-GCM-SHA384`,
`ECDH-RSA-AES128-GCM-SHA256`,
`ECDH-RSA-AES256-GCM-SHA384`,
`ECDH-ECDSA-AES128-GCM-SHA256`,
`ECDH-ECDSA-AES256-GCM-SHA384`,
`CAMELLIA128-SHA`,
`DHE-RSA-CAMELLIA128-SHA`,
`CAMELLIA256-SHA`,
`DHE-RSA-CAMELLIA256-SHA`,
`CAMELLIA128-SHA256`,
`DHE-RSA-CAMELLIA128-SHA256`,
`CAMELLIA256-SHA256`,
`DHE-RSA-CAMELLIA256-SHA256`,
`ECDHE-RSA-AES128-SHA256`,
`ECDHE-ECDSA-AES128-SHA256`,
`ECDH-RSA-AES128-SHA256`,
`ECDH-ECDSA-AES128-SHA256`,
`ECDHE-RSA-AES256-SHA384`,
`ECDHE-ECDSA-AES256-SHA384`,
`ECDH-RSA-AES256-SHA384`,
`ECDH-ECDSA-AES256-SHA384`,
`ECDHE-RSA-CHACHA20-POLY1305`,
`ECDHE-ECDSA-CHACHA20-POLY1305`,
`DHE-RSA-CHACHA20-POLY1305`,
`ECDHE-RSA-CHACHA20-POLY1305-OLD`,
`ECDHE-ECDSA-CHACHA20-POLY1305-OLD`,
`DHE-RSA-CHACHA20-POLY1305-OLD`,
`ADH-AES128-SHA`,
`QSH`,
`RENEGOTIATION-INFO`,
`IDEA-CBC-SHA`,
`ECDHE-ECDSA-NULL-SHA`,
`ECDHE-PSK-NULL-SHA256`,
`ECDHE-PSK-AES128-CBC-SHA256`,
`PSK-CHACHA20-POLY1305`,
`ECDHE-PSK-CHACHA20-POLY1305`,
`DHE-PSK-CHACHA20-POLY1305`,
`EDH-RSA-DES-CBC3-SHA`,

View File

@@ -0,0 +1,3 @@
#add_subdirectory(examples)
add_subdirectory(libcurl)
add_subdirectory(cmdline-opts)

View File

@@ -0,0 +1,32 @@
Contributor Code of Conduct
===========================
As contributors and maintainers of this project, we pledge to respect all
people who contribute through reporting issues, posting feature requests,
updating documentation, submitting pull requests or patches, and other
activities.
We are committed to making participation in this project a harassment-free
experience for everyone, regardless of level of experience, gender, gender
identity and expression, sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, or religion.
Examples of unacceptable behavior by participants include the use of sexual
language or imagery, derogatory comments or personal attacks, trolling, public
or private harassment, insults, or other unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct. Project maintainers who do not
follow the Code of Conduct may be removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by opening an issue or contacting one or more of the project
maintainers.
This Code of Conduct is adapted from the [Contributor
Covenant](http://contributor-covenant.org), version 1.1.0, available at
[http://contributor-covenant.org/version/1/1/0/](http://contributor-covenant.org/version/1/1/0/)

View File

@@ -0,0 +1,238 @@
# curl C code style
Source code that has a common style is easier to read than code that uses
different styles in different places. It helps make the code feel like one
single code base. Easy-to-read is a very important property of code and helps
make it easier to review when new things are added and it helps debugging
code when developers are trying to figure out why things go wrong. A unified
style is more important than individual contributors having their own personal
tastes satisfied.
Our C code has a few style rules. Most of them are verified and upheld by the
`lib/checksrc.pl` script. Invoked with `make checksrc` or even by default by
the build system when built after `./configure --enable-debug` has been used.
It is normally not a problem for anyone to follow the guidelines, as you just
need to copy the style already used in the source code and there are no
particularly unusual rules in our set of rules.
We also work hard on writing code that is warning-free on all the major
platforms and in general on as many platforms as possible. Code that obviously
will cause warnings will not be accepted as-is.
## Naming
Try using a non-confusing naming scheme for your new functions and variable
names. It doesn't necessarily have to mean that you should use the same as in
other places of the code, just that the names should be logical,
understandable and be named according to what they're used for. File-local
functions should be made static. We like lower case names.
See the [INTERNALS](INTERNALS.md) document on how we name non-exported
library-global symbols.
## Indenting
We use only spaces for indentation, never TABs. We use two spaces for each new
open brace.
if(something_is_true) {
while(second_statement == fine) {
moo();
}
}
## Comments
Since we write C89 code, `//` comments are not allowed. They weren't
introduced in the C standard until C99. We use only `/*` and `*/` comments:
/* this is a comment */
## Long lines
Source code in curl may never be wider than 79 columns and there are two
reasons for maintaining this even in the modern era of very large and high
resolution screens:
1. Narrower columns are easier to read than very wide ones. There's a reason
newspapers have used columns for decades or centuries.
2. Narrower columns allow developers to easier show multiple pieces of code
next to each other in different windows. I often have two or three source
code windows next to each other on the same screen - as well as multiple
terminal and debugging windows.
## Braces
In if/while/do/for expressions, we write the open brace on the same line as
the keyword and we then set the closing brace on the same indentation level as
the initial keyword. Like this:
if(age < 40) {
/* clearly a youngster */
}
You may omit the braces if they would contain only a one-line statement:
if(!x)
continue;
For functions the opening brace should be on a separate line:
int main(int argc, char **argv)
{
return 1;
}
## 'else' on the following line
When adding an `else` clause to a conditional expression using braces, we add
it on a new line after the closing brace. Like this:
if(age < 40) {
/* clearly a youngster */
}
else {
/* probably grumpy */
}
## No space before parentheses
When writing expressions using if/while/do/for, there shall be no space
between the keyword and the open parenthesis. Like this:
while(1) {
/* loop forever */
}
## Use boolean conditions
Rather than test a conditional value such as a bool against TRUE or FALSE, a
pointer against NULL or != NULL and an int against zero or not zero in
if/while conditions we prefer:
~~~c
result = do_something();
if(!result) {
  /* something went wrong */
  return result;
}
~~~
## No assignments in conditions
To increase readability and reduce complexity of conditionals, we avoid
assigning variables within if/while conditions. We frown upon this style:
~~~c
if((ptr = malloc(100)) == NULL)
  return NULL;
~~~
and instead we encourage the above version to be spelled out more clearly:
~~~c
ptr = malloc(100);
if(!ptr)
  return NULL;
~~~
## New block on a new line
We never write multiple statements on the same source line, even for very
short if() conditions.
~~~c
if(a)
  return TRUE;
else if(b)
  return FALSE;
~~~
and NEVER:
~~~c
if(a) return TRUE;
else if(b) return FALSE;
~~~
## Space around operators
Please use spaces on both sides of operators in C expressions. Postfix
operators (`()`, `[]`, `->`, `.`, `++`, `--`) and unary operators (`+`, `-`,
`!`, `~`, `&`) are excluded; they should have no space.
Examples:
~~~c
bla = func();
who = name[0];
age += 1;
true = !false;
size += -2 + 3 * (a + b);
ptr->member = a++;
struct.field = b--;
ptr = &address;
contents = *pointer;
complement = ~bits;
empty = (!*string) ? TRUE : FALSE;
~~~
## Column alignment
Some statements cannot be completed on a single line because the line would
be too long, the statement too hard to read, or due to other style guidelines
above. In such a case the statement will span multiple lines.
If a continuation line is part of an expression or sub-expression then you
should align on the appropriate column so that it's easy to tell what part of
the statement it is. Operators should not start continuation lines. In other
cases follow the 2-space indent guideline. Here are some examples from libcurl:
~~~c
if(Curl_pipeline_wanted(handle->multi, CURLPIPE_HTTP1) &&
(handle->set.httpversion != CURL_HTTP_VERSION_1_0) &&
(handle->set.httpreq == HTTPREQ_GET ||
handle->set.httpreq == HTTPREQ_HEAD))
/* didn't ask for HTTP/1.0 and a GET or HEAD */
return TRUE;
~~~
~~~c
case CURLOPT_KEEP_SENDING_ON_ERROR:
data->set.http_keep_sending_on_error = (0 != va_arg(param, long)) ?
TRUE : FALSE;
break;
~~~
~~~c
data->set.http_disable_hostname_check_before_authentication =
(0 != va_arg(param, long)) ? TRUE : FALSE;
~~~
~~~c
if(option) {
result = parse_login_details(option, strlen(option),
(userp ? &user : NULL),
(passwdp ? &passwd : NULL),
NULL);
}
~~~
~~~c
DEBUGF(infof(data, "Curl_pp_readresp_ %d bytes of trailing "
"server response left\n",
(int)clipamount));
~~~
## Platform dependent code
Use `#ifdef HAVE_FEATURE` to do conditional code. We avoid checking for
particular operating systems or hardware in the #ifdef lines. The HAVE_FEATURE
shall be generated by the configure script for unix-like systems and they are
hard-coded in the config-[system].h files for the others.
We also encourage use of macros/functions that possibly are empty or defined
to constants when libcurl is built without that feature, to make the code
seamless. Like this style where the `magic()` function works differently
depending on a build-time conditional:
~~~c
#ifdef HAVE_MAGIC
int magic(int a)
{
  return a + 2;
}
#else
#define magic(x) 1
#endif

int content = magic(3);
~~~

View File

@@ -0,0 +1,266 @@
# Contributing to the curl project
This document is intended to offer guidelines on how to best contribute to the
curl project. This concerns new features as well as corrections to existing
flaws or bugs.
## Learning curl
### Join the Community
Skip over to [https://curl.haxx.se/mail/](https://curl.haxx.se/mail/) and join
the appropriate mailing list(s). Read up on details before you post
questions. Read this file before you start sending patches! We prefer
questions sent to and discussions being held on the mailing list(s), not sent
to individuals.
Before posting to one of the curl mailing lists, please read up on the
[mailing list etiquette](https://curl.haxx.se/mail/etiquette.html).
We also hang out on IRC in #curl on irc.freenode.net
If you're at all interested in the code side of things, consider clicking
'watch' on the [curl repo on github](https://github.com/curl/curl) to get
notified on pull requests and new issues posted there.
### License and copyright
When contributing with code, you agree to put your changes and new code under
the same license curl and libcurl is already using unless stated and agreed
otherwise.
If you add a larger piece of code, you can opt to make that file or set of
files use a different license as long as they don't enforce any changes to
the rest of the package and they make sense. Such "separate parts" can not be
GPL licensed (as we don't want copyleft to affect users of libcurl) but they
must use "GPL compatible" licenses (as we want to allow users to use libcurl
properly in GPL licensed environments).
When changing existing source code, you do not alter the copyright of the
original file(s). The copyright will still be owned by the original creator(s)
or those who have been assigned copyright by the original author(s).
By submitting a patch to the curl project, you are assumed to have the right
to the code and to be allowed by your employer or whatever to hand over that
patch/code to us. We will credit you for your changes as far as possible, to
give credit but also to keep a trace back to who made what changes. Please
always provide us with your full real name when contributing!
### What To Read
Source code, the man pages, the [INTERNALS
document](https://curl.haxx.se/dev/internals.html),
[TODO](https://curl.haxx.se/docs/todo.html),
[KNOWN_BUGS](https://curl.haxx.se/docs/knownbugs.html) and the [most recent
changes](https://curl.haxx.se/dev/sourceactivity.html) in git. Just lurking on
the [curl-library mailing
list](https://curl.haxx.se/mail/list.cgi?list=curl-library) will give you a
lot of insights on what's going on right now. Asking there is a good idea too.
## Write a good patch
### Follow code style
When writing C code, follow the
[CODE_STYLE](https://curl.haxx.se/dev/code-style.html) already established in
the project. Consistent style makes code easier to read and mistakes less
likely to happen. Run `make checksrc` before you submit anything, to make sure
you follow the basic style. That script doesn't verify everything, but if it
complains you know you have work to do.
### Non-clobbering All Over
When you write new functionality or fix bugs, it is important that you don't
fiddle all over the source files and functions. Remember that it is likely
that other people have done changes in the same source files as you have and
possibly even in the same functions. If you bring completely new
functionality, try writing it in a new source file. If you fix bugs, try to
fix one bug at a time and send them as separate patches.
### Write Separate Changes
It is annoying when you get a huge patch from someone that is said to fix 511
odd problems, but discussions and opinions don't agree with 510 of them - or
509 of them were already fixed in a different way. Then the person merging
this change needs to extract the single interesting patch from somewhere
within the huge pile of source, and that creates a lot of extra work.
Preferably, each fix that corrects a problem should be in its own patch/commit
with its own description/commit message stating exactly what they correct so
that all changes can be selectively applied by the maintainer or other
interested parties.
Also, separate changes make bisecting much easier when tracking down problems
and regressions in the future.
### Patch Against Recent Sources
Please try to get the latest available sources to make your patches against.
It makes the lives of the developers so much easier. The very best is if you
get the most up-to-date sources from the git repository, but the latest
release archive is quite OK as well!
### Documentation
Writing docs is dead boring and one of the big problems with many open source
projects. But someone's gotta do it! It makes things a lot easier if you
submit a small description of your fix or your new features with every
contribution so that it can be swiftly added to the package documentation.
The documentation is always made in man pages (nroff formatted) or plain
ASCII files. All HTML files on the web site and in the release archives are
generated from the nroff/ASCII versions.
### Test Cases
Since the introduction of the test suite, we can quickly verify that the main
features are working as they're supposed to. To maintain this situation and
improve it, all new features and functions that are added need to be tested
in the test suite. Every feature that is added should get at least one valid
test case that verifies that it works as documented. If every submitter also
posts a few test cases, it won't end up as a heavy burden on a single person!
If you don't have test cases or perhaps you have done something that is very
hard to write tests for, do explain exactly how you have otherwise tested and
verified your changes.
## Sharing Your Changes
### How to get your changes into the main sources
Ideally you file a [pull request on
github](https://github.com/curl/curl/pulls), but you can also send your plain
patch to [the curl-library mailing
list](https://curl.haxx.se/mail/list.cgi?list=curl-library).
Either way, your change will be reviewed and discussed there and you will be
expected to correct flaws pointed out and update accordingly, or the change
risks stalling and eventually just getting deleted without action. As a
submitter of a change, you are the owner of that change until it has been merged.
Respond on the list or on github about the change and answer questions and/or
fix nits/flaws. This is very important. We will take lack of replies as a
sign that you're not very anxious to get your patch accepted and we tend to
simply drop such changes.
### About pull requests
With github it is easy to send a [pull
request](https://github.com/curl/curl/pulls) to the curl project to have
changes merged.
We strongly prefer pull requests to mailed patches, as they make proper git
commits that are easy to merge, easy to track and not that easy to lose in the
flood of many emails, as sometimes happens on the mailing lists.
Every pull request submitted will automatically be tested in several different
ways. Every pull request is verified that:
- ... the code still builds, warning-free, on Linux and macOS, with both
clang and gcc
- ... the code still builds fine on Windows with several MSVC versions
- ... the code still builds with cmake on Linux, with gcc and clang
- ... the code follows rudimentary code style rules
- ... the test suite still runs 100% fine
- ... the release tarball (the "dist") still works
- ... the code coverage doesn't shrink drastically
If the pull-request fails one of these tests, it will show up as a red X and
you are expected to fix the problem. If you don't understand what the issue is
or have other problems fixing the complaint, just ask and other project
members will likely be able to help out.
When you adjust your pull requests after review, consider squashing the
commits so that we can review the full updated version more easily.
### Making quality patches
Make the patch against as recent source versions as possible.
If you've followed the tips in this document and your patch still hasn't been
incorporated or responded to after some weeks, consider resubmitting it to the
list or better yet: change it to a pull request.
### Write good commit messages
A short guide to how to write commit messages in the curl project.
---- start ----
[area]: [short line describing the main effect]
-- empty line --
[full description, no wider than 72 columns that describe as much as
possible as to why this change is made, and possibly what things
it fixes and everything else that is related]
-- empty line --
[Closes/Fixes #1234 - if this closes or fixes a github issue]
[Bug: URL to source of the report or more related discussion]
[Reported-by: John Doe - credit the reporter]
[whatever-else-by: credit all helpers, finders, doers]
---- stop ----
Don't forget to use `commit --author=""` if you commit someone else's work, and
make sure that you have your own user and email set up correctly in git before
you commit.
### Write Access to git Repository
If you are a very frequent contributor, you may be given push access to the
git repository and then you'll be able to push your changes straight into the
git repo instead of sending changes as pull requests or by mail as patches.
Just ask if this is what you'd want. You will be required to have posted
several high quality patches first, before you can be granted push access.
### How To Make a Patch with git
You need to first checkout the repository:
git clone https://github.com/curl/curl.git
You then proceed and edit all the files you like and you commit them to your
local repository:
git commit [file]
As usual, group your commits so that you commit all changes at once that
constitute a logical change.
Once you have done all your commits and you're happy with what you see, you
can make patches out of your changes that are suitable for mailing:
git format-patch remotes/origin/master
This creates files in your local directory named NNNN-[name].patch for each
commit.
Now send those patches off to the curl-library list. You can of course opt to
do that with the 'git send-email' command.
### How To Make a Patch without git
Keep a copy of the unmodified curl sources. Make your changes in a separate
source tree. When you think you have something that you want to offer the
curl community, use GNU diff to generate patches.
If you have modified a single file, try something like:
diff -u unmodified-file.c my-changed-one.c > my-fixes.diff
If you have modified several files, possibly in different directories, you
can use diff recursively:
diff -ur curl-original-dir curl-modified-sources-dir > my-fixes.diff
The GNU diff and GNU patch tools exist for virtually all platforms, including
all kinds of Unixes and Windows:
For unix-like operating systems:
- [https://savannah.gnu.org/projects/patch/](https://savannah.gnu.org/projects/patch/)
- [https://www.gnu.org/software/diffutils/](https://www.gnu.org/software/diffutils/)
For Windows:
- [https://gnuwin32.sourceforge.io/packages/patch.htm](https://gnuwin32.sourceforge.io/packages/patch.htm)
- [https://gnuwin32.sourceforge.io/packages/diffutils.htm](https://gnuwin32.sourceforge.io/packages/diffutils.htm)

File diff suppressed because it is too large

View File

@@ -0,0 +1,206 @@
      _   _ ____  _
  ___| | | |  _ \| |
 / __| | | | |_) | |
| (__| |_| |  _ <| |___
 \___|\___/|_| \_\_____|
FEATURES
curl tool
- config file support
- multiple URLs in a single command line
- range "globbing" support: [0-13], {one,two,three}
- multiple file upload on a single command line
- custom maximum transfer rate
- redirectable stderr
- metalink support (*13)
libcurl
- full URL syntax with no length limit
- custom maximum download time
- custom least download speed acceptable
- custom output result after completion
- guesses protocol from host name unless specified
- uses .netrc
- progress bar with time statistics while downloading
- "standard" proxy environment variables support
- compiles on win32 (reported builds on 40+ operating systems)
- selectable network interface for outgoing traffic
- IPv6 support on unix and Windows
- persistent connections
- socks 4 + 5 support, with or without local name resolving
- supports user name and password in proxy environment variables
- operations through proxy "tunnel" (using CONNECT)
- support for large files (>2GB and >4GB) during upload and download
- replaceable memory functions (malloc, free, realloc, etc)
- asynchronous name resolving (*6)
- both a push and a pull style interface
- international domain names (*11)
HTTP
- HTTP/1.1 compliant (optionally uses 1.0)
- GET
- PUT
- HEAD
- POST
- Pipelining
- multipart formpost (RFC1867-style)
- authentication: Basic, Digest, NTLM (*9) and Negotiate (SPNEGO) (*3)
to server and proxy
- resume (both GET and PUT)
- follow redirects
- maximum amount of redirects to follow
- custom HTTP request
- cookie get/send fully parsed
- reads/writes the netscape cookie file format
- custom headers (replace/remove internally generated headers)
- custom user-agent string
- custom referrer string
- range
- proxy authentication
- time conditions
- via http-proxy
- retrieve file modification date
- Content-Encoding support for deflate and gzip
- "Transfer-Encoding: chunked" support in uploads
- data compression (*12)
- HTTP/2 (*5)
HTTPS (*1)
- (all the HTTP features)
- using client certificates
- verify server certificate
- via http-proxy
- select desired encryption
- force usage of a specific SSL version (SSLv2 (*7), SSLv3 (*10) or TLSv1)
FTP
- download
- authentication
- Kerberos 5 (*14)
- active/passive using PORT, EPRT, PASV or EPSV
- single file size information (compare to HTTP HEAD)
- 'type=' URL support
- dir listing
- dir listing names-only
- upload
- upload append
- upload via http-proxy as HTTP PUT
- download resume
- upload resume
- custom ftp commands (before and/or after the transfer)
- simple "range" support
- via http-proxy
- all operations can be tunneled through a http-proxy
- customizable to retrieve file modification date
- no dir depth limit
FTPS (*1)
- implicit ftps:// support that uses SSL on both connections
- explicit "AUTH TLS" and "AUTH SSL" usage to "upgrade" plain ftp://
connection to use SSL for both or one of the connections
SCP (*8)
- both password and public key auth
SFTP (*8)
- both password and public key auth
- with custom commands sent before/after the transfer
TFTP
- download
- upload
TELNET
- connection negotiation
- custom telnet options
- stdin/stdout I/O
LDAP (*2)
- full LDAP URL support
DICT
- extended DICT URL support
FILE
- URL support
- upload
- resume
SMB
- SMBv1 over TCP and SSL
- download
- upload
- authentication with NTLMv1
SMTP
- authentication: Plain, Login, CRAM-MD5, Digest-MD5, NTLM (*9), Kerberos 5
(*4) and External.
- send e-mails
- mail from support
- mail size support
- mail auth support for trusted server-to-server relaying
- multiple recipients
- via http-proxy
SMTPS (*1)
- implicit smtps:// support
- explicit "STARTTLS" usage to "upgrade" plain smtp:// connections to use SSL
- via http-proxy
POP3
- authentication: Clear Text, APOP and SASL
- SASL based authentication: Plain, Login, CRAM-MD5, Digest-MD5, NTLM (*9),
Kerberos 5 (*4) and External.
- list e-mails
- retrieve e-mails
- enhanced command support for: CAPA, DELE, TOP, STAT, UIDL and NOOP via
custom requests
- via http-proxy
POP3S (*1)
- implicit pop3s:// support
- explicit "STLS" usage to "upgrade" plain pop3:// connections to use SSL
- via http-proxy
IMAP
- authentication: Clear Text and SASL
- SASL based authentication: Plain, Login, CRAM-MD5, Digest-MD5, NTLM (*9),
Kerberos 5 (*4) and External.
- list the folders of a mailbox
- select a mailbox with support for verifying the UIDVALIDITY
- fetch e-mails with support for specifying the UID and SECTION
- upload e-mails via the append command
- enhanced command support for: EXAMINE, CREATE, DELETE, RENAME, STATUS,
STORE, COPY and UID via custom requests
- via http-proxy
IMAPS (*1)
- implicit imaps:// support
- explicit "STARTTLS" usage to "upgrade" plain imap:// connections to use SSL
- via http-proxy
FOOTNOTES
=========
*1 = requires OpenSSL, GnuTLS, NSS, yassl, axTLS, PolarSSL, WinSSL (native
Windows), Secure Transport (native iOS/OS X) or GSKit (native IBM i)
*2 = requires OpenLDAP or WinLDAP
*3 = requires a GSS-API implementation (such as Heimdal or MIT Kerberos) or
SSPI (native Windows)
*4 = requires a GSS-API implementation, however, only Windows SSPI is
currently supported
*5 = requires nghttp2 and possibly a recent TLS library
*6 = requires c-ares
*7 = requires OpenSSL, NSS, GSKit, WinSSL or Secure Transport; GnuTLS, for
example, only supports SSLv3 and TLSv1
*8 = requires libssh2
*9 = requires OpenSSL, GnuTLS, mbedTLS, NSS, yassl, Secure Transport or SSPI
(native Windows)
*10 = requires any of the SSL libraries in (*1) above other than axTLS, which
does not support SSLv3
*11 = requires libidn or Windows
*12 = requires libz
*13 = requires libmetalink, and either an Apple or Microsoft operating
system, or OpenSSL, or GnuTLS, or NSS
*14 = requires a GSS-API implementation (such as Heimdal or MIT Kerberos)

View File

@@ -0,0 +1,277 @@
How curl Became Like This
=========================
Towards the end of 1996, Daniel Stenberg was spending time writing an IRC bot
for an Amiga related channel on EFnet. He then came up with the idea to make
currency-exchange calculations available to Internet Relay Chat (IRC)
users. All the necessary data were published on the Web; he just needed to
automate their retrieval.
Daniel simply adopted an existing command-line open-source tool, httpget, that
Brazilian Rafael Sagula had written and recently released version 0.1 of. After
a few minor adjustments, it did just what he needed.
1997
----
HttpGet 1.0 was released on April 8th 1997 with brand new HTTP proxy support.
We soon found and fixed support for getting currencies over GOPHER. Once FTP
download support was added, the name of the project was changed and urlget 2.0
was released in August 1997. The http-only days were already passed.
1998
----
The project slowly grew bigger. When upload capabilities were added and the
name once again was misleading, a second name change was made and on March 20,
1998 curl 4 was released. (The version numbering from the previous names was
kept.)
(Unrelated to this project a company called Curl Corporation registered a US
trademark on the name "CURL" on May 18 1998. That company had then already
registered the curl.com domain back in November of the previous year. All this
was revealed to us much later.)
SSL support was added, powered by the SSLeay library.
August: first announcement of curl on freshmeat.net.
October: with the curl 4.9 release and the introduction of cookie support,
curl was no longer released under the GPL license. Now at 4000 lines of code,
we switched over to the MPL license to restrict the effects of "copyleft".
November: configure script and reported successful compiles on several
major operating systems. The never-quite-understood -F option was added and
curl could now simulate quite a lot of a browser. TELNET support was added.
Curl 5 was released in December 1998 and introduced the first ever curl man
page. People started making Linux RPM packages out of it.
1999
----
January: DICT support added.
OpenSSL took over and SSLeay was abandoned.
May: first Debian package.
August: LDAP:// and FILE:// support added. The curl web site gets 1300 visits
weekly. Moved site to curl.haxx.nu.
September: Released curl 6.0. 15000 lines of code.
December 28: added the project on Sourceforge and started using its services
for managing the project.
2000
----
Spring: major internal overhaul to provide a suitable library interface.
The first non-beta release was named 7.1 and arrived in August. This offered
the easy interface and turned out to be the beginning of actually getting
other software and programs to be based on and powered by libcurl. Almost
20000 lines of code.
June: the curl site moves to "curl.haxx.se"
August, the curl web site gets 4000 visits weekly.
The PHP guys adopted libcurl already the same month, when the first ever third
party libcurl binding showed up. CURL has been a supported module in PHP since
the release of PHP 4.0.2. This would soon get followers. More than 16
different bindings exist at the time of this writing.
September: kerberos4 support was added.
November: started the work on a test suite for curl. It was later re-written
from scratch again. The libcurl major SONAME number was set to 1.
2001
----
January: Daniel released curl 7.5.2 under a new license again: MIT (or
MPL). The MIT license is extremely liberal and can be combined with GPL
in other projects. This would finally put an end to the "complaints" from
people involved in GPLed projects that previously were prohibited from using
libcurl while it was released under MPL only. (Due to the fact that MPL is
deemed "GPL incompatible".)
March 22: curl supports HTTP 1.1 starting with the release of 7.7. This
also introduced libcurl's ability to do persistent connections. 24000 lines of
code. The libcurl major SONAME number was bumped to 2 due to this overhaul.
The first experimental ftps:// support was added.
August: curl is bundled in Mac OS X, 10.1. It was already becoming more and
more of a standard utility of Linux distributions and a regular in the BSD
ports collections. The curl web site gets 8000 visits weekly. Curl Corporation
contacted Daniel to discuss "the name issue". After Daniel's reply, they have
never since got back in touch again.
September: libcurl 7.9 introduces cookie jar and curl_formadd(). During the
forthcoming 7.9.x releases, we introduced the multi interface slowly and
without many whistles.
2002
----
June: the curl web site gets 13000 visits weekly. curl and libcurl is
35000 lines of code. Reported successful compiles on more than 40 combinations
of CPUs and operating systems.
To estimate number of users of the curl tool or libcurl library is next to
impossible. Around 5000 downloaded packages each week from the main site gives
a hint, but the packages are mirrored extensively, bundled with numerous OS
distributions and otherwise retrieved as part of other software.
September: with the release of curl 7.10 it is released under the MIT license
only.
2003
----
January: Started working on the distributed curl tests. The autobuilds.
February: the curl site averages at 20000 visits weekly. At any given moment,
there's an average of 3 people browsing the curl.haxx.se site.
Multiple new authentication schemes are supported: Digest (May), NTLM (June)
and Negotiate (June).
November: curl 7.10.8 is released. 45000 lines of code. ~55000 unique visitors
to the curl.haxx.se site. Five official web mirrors.
December: full-fledged SSL for FTP is supported.
2004
----
January: curl 7.11.0 introduced large file support.
June: curl 7.12.0 introduced IDN support. 10 official web mirrors.
This release bumped the major SONAME to 3 due to the removal of the
curl_formparse() function
August: Curl and libcurl 7.12.1
Public curl release number: 82
Releases counted from the very beginning: 109
Available command line options: 96
Available curl_easy_setopt() options: 120
Number of public functions in libcurl: 36
Amount of public web site mirrors: 12
Number of known libcurl bindings: 26
2005
----
April: GnuTLS can now optionally be used for the secure layer when curl is
built.
April: Added the multi_socket() API
September: TFTP support was added.
More than 100,000 unique visitors of the curl web site. 25 mirrors.
December: security vulnerability: libcurl URL Buffer Overflow
2006
----
January: We dropped support for Gopher. We found bugs in the implementation
that turned out to have been introduced years ago, so with the conclusion that
nobody had found out in all this time we removed it instead of fixing it.
March: security vulnerability: libcurl TFTP Packet Buffer Overflow
September: The major SONAME number for libcurl was bumped to 4 due to the
removal of ftp third party transfer support.
November: Added SCP and SFTP support
2007
----
February: Added support for the Mozilla NSS library to do the SSL/TLS stuff
July: security vulnerability: libcurl GnuTLS insufficient cert verification
2008
----
November:
Command line options: 128
curl_easy_setopt() options: 158
Public functions in libcurl: 58
Known libcurl bindings: 37
Contributors: 683
145,000 unique visitors. >100 GB downloaded.
2009
----
March: security vulnerability: libcurl Arbitrary File Access
August: security vulnerability: libcurl embedded zero in cert name
December: Added support for IMAP, POP3 and SMTP
2010
----
January: Added support for RTSP
February: security vulnerability: libcurl data callback excessive length
March: The project switched over to use git (hosted by github) instead of CVS
for source code control
May: Added support for RTMP
Added support for PolarSSL to do the SSL/TLS stuff
August:
Public curl releases: 117
Command line options: 138
curl_easy_setopt() options: 180
Public functions in libcurl: 58
Known libcurl bindings: 39
Contributors: 808
Gopher support added (re-added actually, see January 2006)
2012
----
July: Added support for Schannel (native Windows TLS backend) and Darwin SSL
(Native Mac OS X and iOS TLS backend).
Supports metalink
October: SSH-agent support.
2013
----
February: Cleaned up internals to always use the "multi" non-blocking
approach internally and only expose the blocking API with a wrapper.
September: First small steps on supporting HTTP/2 with nghttp2.
October: Removed krb4 support.
December: Happy eyeballs.
2014
----
March: first real release supporting HTTP/2
September: Web site had 245,000 unique visitors and served 236GB data

View File

@@ -0,0 +1,104 @@
# HTTP Cookies
## Cookie overview
Cookies are `name=contents` pairs that an HTTP server tells the client to
hold and then the client sends back to the server on subsequent requests to
the same domains and paths for which the cookies were set.
Cookies are either "session cookies", which typically are forgotten when the
session is over (often taken to mean when the browser quits), or they carry
expiration dates after which the client will throw them away.
Cookies are set to the client with the Set-Cookie: header and are sent to
servers with the Cookie: header.
For a very long time, the only spec explaining how to use cookies was the
original [Netscape spec from 1994](https://curl.haxx.se/rfc/cookie_spec.html).
In 2011, [RFC6265](https://www.ietf.org/rfc/rfc6265.txt) was finally
published and details how cookies work within HTTP.
## Cookies saved to disk
Netscape once created a file format for storing cookies on disk so that they
would survive browser restarts. curl adopted that file format to allow
sharing the cookies with browsers, only to see browsers move away from that
format. Modern browsers no longer use it, while curl still does.
The netscape cookie file format stores one cookie per physical line in the
file with a bunch of associated meta data, each field separated with
TAB. That file is called the cookiejar in curl terminology.
When libcurl saves a cookiejar, it creates a file header of its own in which
there is a URL mention that will link to the web version of this document.
## Cookies with curl the command line tool
curl has a full cookie "engine" built in. If you just activate it, you can
have curl receive and send cookies exactly as mandated in the specs.
Command line options:
`-b, --cookie`
tell curl a file to read cookies from and start the cookie engine, or if it
isn't a file it will pass on the given string. -b name=var works and so does
-b cookiefile.
`-j, --junk-session-cookies`
when used in combination with -b, it will skip all "session cookies" on load
so as to appear to start a new cookie session.
`-c, --cookie-jar`
tell curl to start the cookie engine and write cookies to the given file
after the request(s)
## Cookies with libcurl
libcurl offers several ways to enable and interface the cookie engine. These
options are the ones provided by the native API. libcurl bindings may offer
access to them using other means.
`CURLOPT_COOKIE`
Is used when you want to specify the exact contents of a cookie header to
send to the server.
`CURLOPT_COOKIEFILE`
Tell libcurl to activate the cookie engine, and to read the initial set of
cookies from the given file. Read-only.
`CURLOPT_COOKIEJAR`
Tell libcurl to activate the cookie engine, and when the easy handle is
closed save all known cookies to the given cookiejar file. Write-only.
`CURLOPT_COOKIELIST`
Provide detailed information about a single cookie to add to the internal
storage of cookies. Pass in the cookie as a HTTP header with all the details
set, or pass in a line from a netscape cookie file. This option can also be
used to flush the cookies etc.
`CURLINFO_COOKIELIST`
Extract cookie information from the internal cookie storage as a linked
list.
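As a rough sketch of how these options fit together (the file name and URL
below are placeholders, and error handling is left out), an application could
enable the cookie engine, add one cookie by hand and have all known cookies
saved to a cookie jar at cleanup:
~~~c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* enable the cookie engine without reading any initial cookies */
    curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
    /* write all known cookies to this file when the handle is closed */
    curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies.txt");
    /* add one cookie to the internal storage, HTTP header style */
    curl_easy_setopt(curl, CURLOPT_COOKIELIST,
                     "Set-Cookie: name=contents; domain=example.com; path=/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl); /* the cookie jar is written here */
  }
  return 0;
}
~~~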
## Cookies with javascript
These days a lot of the web is built up by javascript. The web browser loads
complete programs that render the page you see. These javascript programs
can also set and access cookies.
Since curl and libcurl are plain HTTP clients without any knowledge of or
capability to handle javascript, such cookies will not be detected or used.
Often, if you want to mimic what a browser does on such web sites, you can
record web browser HTTP traffic when using such a site and then repeat the
cookie operations using curl or libcurl.

View File

@@ -0,0 +1,126 @@
HTTP/2 with curl
================
[HTTP/2 Spec](https://www.rfc-editor.org/rfc/rfc7540.txt)
[http2 explained](https://daniel.haxx.se/http2/)
Build prerequisites
-------------------
- nghttp2
- OpenSSL, libressl, BoringSSL, NSS, GnuTLS, mbedTLS, wolfSSL or SChannel
with a new enough version.
[nghttp2](https://nghttp2.org/)
-------------------------------
libcurl uses this 3rd party library for the low level protocol handling
parts. The reason for this is that HTTP/2 is much more complex at that layer
than HTTP/1.1 (which we implement on our own) and that nghttp2 is an already
existing and well-functioning library.
We require at least version 1.0.0.
Over an http:// URL
-------------------
If `CURLOPT_HTTP_VERSION` is set to `CURL_HTTP_VERSION_2_0`, libcurl will
include an upgrade header in the initial request to the host to allow
upgrading to HTTP/2.
Possibly we can later introduce an option that will cause libcurl to fail if
not possible to upgrade. Possibly we introduce an option that makes libcurl
use HTTP/2 at once over http://
Over an https:// URL
--------------------
If `CURLOPT_HTTP_VERSION` is set to `CURL_HTTP_VERSION_2_0`, libcurl will use
ALPN (or NPN) to negotiate which protocol to continue with. Possibly introduce
an option that will cause libcurl to fail if not possible to use HTTP/2.
`CURL_HTTP_VERSION_2TLS` was added in 7.47.0 as a way to ask libcurl to prefer
HTTP/2 for HTTPS but stick to 1.1 by default for plain old HTTP connections.
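A minimal sketch of asking libcurl for HTTP/2 this way (the URL is a
placeholder and error handling is left out):
~~~c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* prefer HTTP/2 for HTTPS, fall back to 1.1 if it can't be negotiated */
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                     (long)CURL_HTTP_VERSION_2TLS);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
~~~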
ALPN is the TLS extension that HTTP/2 is expected to use. The NPN extension
was made for a similar purpose prior to ALPN and was used for SPDY, so early
HTTP/2 servers were implemented using NPN before ALPN support became
widespread.
`CURLOPT_SSL_ENABLE_ALPN` and `CURLOPT_SSL_ENABLE_NPN` are offered to allow
applications to explicitly disable ALPN or NPN.
SSL libs
--------
The challenge is the ALPN and NPN support and all our different SSL
backends. You may need a fairly updated SSL library version for it to provide
the necessary TLS features. Right now we support:
- OpenSSL: ALPN and NPN
- libressl: ALPN and NPN
- BoringSSL: ALPN and NPN
- NSS: ALPN and NPN
- GnuTLS: ALPN
- mbedTLS: ALPN
- SChannel: ALPN
- wolfSSL: ALPN
Multiplexing
------------
Starting in 7.43.0, libcurl fully supports HTTP/2 multiplexing, which is the
term for doing multiple independent transfers over the same physical TCP
connection.
To take advantage of multiplexing, you need to use the multi interface and set
`CURLMOPT_PIPELINING` to `CURLPIPE_MULTIPLEX`. With that bit set, libcurl will
attempt to re-use existing HTTP/2 connections and just add a new stream over
that when doing subsequent parallel requests.
While libcurl sets up a connection to an HTTP server there is a period during
which it doesn't know if it can pipeline or do multiplexing and if you add new
transfers in that period, libcurl will default to starting new connections for
those transfers. With the new option `CURLOPT_PIPEWAIT` (added in 7.43.0), you
can ask that a transfer should rather wait and see in case there's a
connection for the same host in progress that might end up being possible to
multiplex on. It favours keeping the number of connections low at the cost of
a slightly longer time to first byte transferred.
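A rough sketch of that multi interface setup (the URL is a placeholder and
the transfer loop is omitted):
~~~c
#include <curl/curl.h>

int main(void)
{
  CURLM *multi = curl_multi_init();
  CURL *easy = curl_easy_init();

  /* allow libcurl to multiplex transfers over one HTTP/2 connection */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(easy, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2TLS);
  /* rather wait for a connection to the same host that we might be able
     to multiplex on than immediately open a new one */
  curl_easy_setopt(easy, CURLOPT_PIPEWAIT, 1L);

  curl_multi_add_handle(multi, easy);
  /* ... drive the transfer with curl_multi_perform()/curl_multi_wait() ... */

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  return 0;
}
~~~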
Applications
------------
We hide HTTP/2's binary nature and convert received HTTP/2 traffic to headers
in HTTP 1.1 style. This allows applications to work unmodified.
curl tool
---------
curl offers the `--http2` command line option to enable use of HTTP/2.
curl offers the `--http2-prior-knowledge` command line option to enable use of
HTTP/2 without HTTP/1.1 Upgrade.
Since 7.47.0, the curl tool enables HTTP/2 by default for HTTPS connections.
curl tool limitations
---------------------
The command line tool won't do any HTTP/2 multiplexing even though libcurl
supports it, simply because the curl tool is not written to take advantage of
the libcurl API that's necessary for this (the multi interface). We have an
outstanding TODO item for this and **you** can help us make it happen.
The command line tool also doesn't support HTTP/2 server push for the same
reason it doesn't do multiplexing: it needs to use the multi interface for
that so that multiplexing is supported.
HTTP Alternative Services
-------------------------
Alt-Svc is an extension with a corresponding frame (ALTSVC) in HTTP/2 that
tells the client about an alternative "route" to the same content for the same
origin server that you get the response from. A browser or long-living client
can use that hint to create a new connection asynchronously. For libcurl, we
may introduce a way to bring such clues to the application and/or let a
subsequent request use the alternate route automatically.
[Detailed in RFC 7838](https://tools.ietf.org/html/rfc7838)

View File

@@ -0,0 +1,102 @@
      _   _ ____  _
  ___| | | |  _ \| |
 / __| | | | |_) | |
| (__| |_| |  _ <| |___
 \___|\___/|_| \_\_____|
How To Compile with CMake
Building with CMake
==========================
This document describes how to compile, build and install curl and libcurl
from source code using the CMake build tool. To build with CMake, you will
of course have to first install CMake. The minimum required version of
CMake is specified in the file CMakeLists.txt found in the top of the curl
source tree. Once the correct version of CMake is installed you can follow
the instructions below for the platform you are building on.
CMake builds can be configured either from the command line, or from one
of CMake's GUIs.
Current flaws in the curl CMake build
=====================================
Missing features in the cmake build:
- Builds libcurl without large file support
- Does not support all SSL libraries (only OpenSSL, WinSSL, DarwinSSL, and
mbed TLS)
- Doesn't build with SCP and SFTP support (libssh2) (see issue #1155)
- Doesn't allow different resolver backends (no c-ares build support)
- No RTMP support built
- Doesn't allow building curl and libcurl with debug enabled
- Doesn't allow a custom CA bundle path
- Doesn't allow you to disable specific protocols from the build
- Doesn't find or use krb4 or GSS
- Rebuilds test files too eagerly, but still can't run the tests
- Doesn't detect the correct strerror_r flavor when cross-compiling (issue #1123)
Important notice
==================
If you got your curl sources from a distribution tarball, make sure to
delete the generic 'include/curl/curlbuild.h' file that comes with it:
rm -f curl/include/curl/curlbuild.h
The purpose of this file is to provide reasonable definitions for systems
where autoconfiguration is not available. CMake will create its own
version of this file in its build directory. If the "generic" version
is not deleted, weird build errors may occur on some systems.
Command Line CMake
==================
A CMake build of curl is similar to the autotools build of curl. It
consists of the following steps after you have unpacked the source.
1. Create an out of source build tree parallel to the curl source
tree and change into that directory
$ mkdir curl-build
$ cd curl-build
2. Run CMake from the build tree, giving it the path to the top of
the curl source tree. CMake will pick a compiler for you. If you
want to specify the compiler, you can set the CC environment
variable prior to running CMake.
$ cmake ../curl
$ make
3. Install to default location:
$ make install
(The test suite does not work with the cmake build)
ccmake
=========
CMake comes with a curses based interface called ccmake. To run ccmake on
curl, use the instructions for the command line cmake, but substitute
ccmake ../curl for cmake ../curl. This will bring up a curses interface
with instructions on the bottom of the screen. You can press the "c" key
to configure the project, and the "g" key to generate the project. After
the project is generated, you can run make.
cmake-gui
=========
CMake also comes with a Qt based GUI called cmake-gui. To configure with
cmake-gui, you run cmake-gui and follow these steps:
1. Fill in the "Where is the source code" combo box with the path to
the curl source tree.
2. Fill in the "Where to build the binaries" combo box with the path
to the directory for your build tree, ideally this should not be the
same as the source tree, but a parallel directory called curl-build or
something similar.
3. Once the source and binary directories are specified, press the
"Configure" button.
4. Select the native build tool that you want to use.
5. At this point you can change any of the options presented in the
GUI. Once you have selected all the options you want, click the
"Generate" button.
6. Run the native build tool that you used CMake to generate.

View File

@@ -0,0 +1,513 @@
# how to install curl and libcurl
## Installing Binary Packages
Lots of people download binary distributions of curl and libcurl. This
document does not describe how to install curl or libcurl using such a binary
package. This document describes how to compile, build and install curl and
libcurl from source code.
## Building from git
If you get your code off a git repository instead of a release tarball, see
the `GIT-INFO` file in the root directory for specific instructions on how to
proceed.
# Unix
A normal Unix installation is made in three or four steps (after you've
unpacked the source archive):
./configure
make
make test (optional)
make install
You probably need to be root when doing the last command.
Get a full listing of all available configure options by invoking it like:
./configure --help
If you want to install curl in a different file hierarchy than `/usr/local`,
specify that when running configure:
./configure --prefix=/path/to/curl/tree
If you have write permission in that directory, you can do 'make install'
without being root. An example of this would be to make a local install in
your own home directory:
./configure --prefix=$HOME
make
make install
The configure script always tries to find a working SSL library unless
explicitly told not to. If you have OpenSSL installed in the default search
path for your compiler/linker, you don't need to do anything special. If you
have OpenSSL installed in /usr/local/ssl, you can run configure like:
./configure --with-ssl
If you have OpenSSL installed somewhere else (for example, /opt/OpenSSL) and
you have pkg-config installed, set the pkg-config path first, like this:
env PKG_CONFIG_PATH=/opt/OpenSSL/lib/pkgconfig ./configure --with-ssl
Without pkg-config installed, use this:
./configure --with-ssl=/opt/OpenSSL
If you insist on forcing a build without SSL support, even though you may
have OpenSSL installed in your system, you can run configure like this:
./configure --without-ssl
If you have OpenSSL installed, but with the libraries in one place and the
header files somewhere else, you have to set the LDFLAGS and CPPFLAGS
environment variables prior to running configure. Something like this should
work:
CPPFLAGS="-I/path/to/ssl/include" LDFLAGS="-L/path/to/ssl/lib" ./configure
If you have shared SSL libs installed in a directory where your run-time
linker doesn't find them (which usually causes configure failures), you can
provide the -R option to ld on some operating systems to set a hard-coded
path to the run-time linker:
LDFLAGS=-R/usr/local/ssl/lib ./configure --with-ssl
## More Options
To force a static library compile, disable the shared library creation by
running configure like:
./configure --disable-shared
To tell the configure script to skip searching for thread-safe functions, add
an option like:
./configure --disable-thread
If you're a curl developer and use gcc, you might want to enable more debug
options with the `--enable-debug` option.
curl can be built to use a whole range of libraries to provide various useful
services, and configure will try to auto-detect a decent default. But if you
want to alter it, you can select how to deal with each individual library.
## Select TLS backend
The default OpenSSL configure check will also detect and use BoringSSL or
libressl.
- GnuTLS: `--without-ssl --with-gnutls`.
- Cyassl: `--without-ssl --with-cyassl`
- NSS: `--without-ssl --with-nss`
- PolarSSL: `--without-ssl --with-polarssl`
- mbedTLS: `--without-ssl --with-mbedtls`
- axTLS: `--without-ssl --with-axtls`
- schannel: `--without-ssl --with-winssl`
- secure transport: `--without-ssl --with-darwinssl`
# Windows
## Building Windows DLLs and C run-time (CRT) linkage issues
As a general rule, building a DLL with static CRT linkage is highly
discouraged, and intermixing CRTs in the same app is something to avoid at
any cost.
Reading and comprehending Microsoft Knowledge Base articles KB94248 and
KB140584 is a must for any Windows developer. Especially important is full
understanding if you are not going to follow the advice given above.
- [How To Use the C Run-Time](https://support.microsoft.com/kb/94248/en-us)
- [How to link with the correct C Run-Time CRT library](https://support.microsoft.com/kb/140584/en-us)
- [Potential Errors Passing CRT Objects Across DLL Boundaries](https://msdn.microsoft.com/en-us/library/ms235460)
If your app is misbehaving in some strange way, or it is suffering from
memory corruption, before asking for further help, please try first to
rebuild every single library your app uses as well as your app using the
debug multithreaded dynamic C runtime.
If you get linkage errors read section 5.7 of the FAQ document.
## MingW32
Make sure that MinGW32's bin dir is in the search path, for example:
set PATH=c:\mingw32\bin;%PATH%
then run `mingw32-make mingw32` in the root dir. There are other
make targets available to build libcurl with more features, use:
- `mingw32-make mingw32-zlib` to build with Zlib support;
- `mingw32-make mingw32-ssl-zlib` to build with SSL and Zlib enabled;
- `mingw32-make mingw32-ssh2-ssl-zlib` to build with SSH2, SSL, Zlib;
- `mingw32-make mingw32-ssh2-ssl-sspi-zlib` to build with SSH2, SSL, Zlib
and SSPI support.
If you have any problems linking libraries or finding header files, be sure
to verify that the provided "Makefile.m32" files use the proper paths, and
adjust as necessary. It is also possible to override these paths with
environment variables, for example:
set ZLIB_PATH=c:\zlib-1.2.8
set OPENSSL_PATH=c:\openssl-1.0.2c
set LIBSSH2_PATH=c:\libssh2-1.6.0
It is also possible to build with other LDAP SDKs than MS LDAP; currently
it is possible to build with native Win32 OpenLDAP, or with the Novell CLDAP
SDK. If you want to use these you need to set these vars:
set LDAP_SDK=c:\openldap
set USE_LDAP_OPENLDAP=1
or for using the Novell SDK:
set USE_LDAP_NOVELL=1
If you want to enable LDAPS support then set LDAPS=1.
## Cygwin
Almost identical to the unix installation. Run the configure script in the
curl source tree root with `sh configure`. Make sure you have the sh
executable in /bin/ or you'll see the configure fail toward the end.
Run `make`
## Borland C++ compiler
Ensure that your build environment is properly set up to use the compiler and
associated tools. PATH environment variable must include the path to bin
subdirectory of your compiler installation, eg: `c:\Borland\BCC55\bin`
It is advisable to set environment variable BCCDIR to the base path of the
compiler installation.
set BCCDIR=c:\Borland\BCC55
In order to build a plain vanilla version of curl and libcurl run the
following command from curl's root directory:
make borland
To build curl and libcurl with zlib and OpenSSL support set environment
variables `ZLIB_PATH` and `OPENSSL_PATH` to the base subdirectories of the
already built zlib and OpenSSL libraries and from curl's root directory run
command:
make borland-ssl-zlib
libcurl library will be built in 'lib' subdirectory while curl tool is built
in 'src' subdirectory. In order to use libcurl library it is advisable to
modify compiler's configuration file bcc32.cfg located in
`c:\Borland\BCC55\bin` to reflect the location of the library include paths;
for example, the '-I' line could result in something like:
-I"c:\Borland\BCC55\include;c:\curl\include;c:\openssl\inc32"
The bcc32.cfg `-L` line could also be modified to reflect the location of the
libcurl library, resulting in, for example:
-L"c:\Borland\BCC55\lib;c:\curl\lib;c:\openssl\out32"
In order to build the sample program `simple.c` from the docs\examples
subdirectory, run the following command from that subdirectory:
bcc32 simple.c libcurl.lib cw32mt.lib
In order to build sample program simplessl.c an SSL enabled libcurl is
required, as well as the OpenSSL libeay32.lib and ssleay32.lib libraries.
## Disabling Specific Protocols in Windows builds
The configure utility, unfortunately, is not available for the Windows
environment, therefore, you cannot use the various disable-protocol options of
the configure utility on this platform.
However, you can use the following defines to disable specific
protocols:
- `HTTP_ONLY` disables all protocols except HTTP
- `CURL_DISABLE_FTP` disables FTP
- `CURL_DISABLE_LDAP` disables LDAP
- `CURL_DISABLE_TELNET` disables TELNET
- `CURL_DISABLE_DICT` disables DICT
- `CURL_DISABLE_FILE` disables FILE
- `CURL_DISABLE_TFTP` disables TFTP
- `CURL_DISABLE_HTTP` disables HTTP
- `CURL_DISABLE_IMAP` disables IMAP
- `CURL_DISABLE_POP3` disables POP3
- `CURL_DISABLE_SMTP` disables SMTP
If you want to set any of these defines you have the following options:
- Modify lib/config-win32.h
- Modify lib/curl_setup.h
- Modify winbuild/Makefile.vc
- Modify the "Preprocessor Definitions" in the libcurl project
Note: The pre-processor settings can be found using the Visual Studio IDE
under "Project -> Settings -> C/C++ -> General" in VC6 and "Project ->
Properties -> Configuration Properties -> C/C++ -> Preprocessor" in later
versions.
## Using BSD-style lwIP instead of Winsock TCP/IP stack in Win32 builds
In order to compile libcurl and curl using BSD-style lwIP TCP/IP stack it is
necessary to make the definition of the preprocessor symbol USE_LWIPSOCK visible
to the libcurl and curl compilation processes. To set this definition you have the
following alternatives:
- Modify lib/config-win32.h and src/config-win32.h
- Modify winbuild/Makefile.vc
- Modify the "Preprocessor Definitions" in the libcurl project
Note: The pre-processor settings can be found using the Visual Studio IDE
under "Project -> Settings -> C/C++ -> General" in VC6 and "Project ->
Properties -> Configuration Properties -> C/C++ -> Preprocessor" in later
versions.
Once libcurl has been built with BSD-style lwIP TCP/IP stack support, in
order to use it with your program it is mandatory that your program includes
lwIP header file `<lwip/opt.h>` (or another lwIP header that includes this)
before including any libcurl header. Your program does not need the
`USE_LWIPSOCK` preprocessor definition which is for libcurl internals only.
Compilation has been verified with [lwIP
1.4.0](http://download.savannah.gnu.org/releases/lwip/lwip-1.4.0.zip) and
[contrib-1.4.0](http://download.savannah.gnu.org/releases/lwip/contrib-1.4.0.zip).
This BSD-style lwIP TCP/IP stack support must be considered experimental given
that it has been verified that lwIP 1.4.0 still needs some polish, and libcurl
might yet need some additional adjustment, caveat emptor.
## Important static libcurl usage note
When building an application that uses the static libcurl library on Windows,
you must add `-DCURL_STATICLIB` to your `CFLAGS`. Otherwise the linker will
look for dynamic import symbols.
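A minimal sketch of what that looks like in practice; the paths and extra
system libraries in the build command comment are assumptions for
illustration and will differ per setup:
~~~c
/* Build against a static libcurl on Windows, for example (adjust paths and
 * add whatever system libraries your particular libcurl build needs, such
 * as ws2_32):
 *
 *   gcc -DCURL_STATICLIB -Ic:/curl/include app.c c:/curl/lib/libcurl.a \
 *       -lws2_32 -o app.exe
 */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  /* works the same whether libcurl was linked statically or dynamically */
  printf("%s\n", curl_version());
  return 0;
}
~~~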
## Legacy Windows and SSL
WinSSL (specifically SChannel from Windows SSPI), is the native SSL library in
Windows. However, WinSSL in Windows <= XP is unable to connect to servers that
no longer support the legacy handshakes and algorithms used by those
versions. If you will be using curl in one of those earlier versions of
Windows you should choose another SSL backend such as OpenSSL.
# Apple iOS and Mac OS X
On modern Apple operating systems, curl can be built to use Apple's SSL/TLS
implementation, Secure Transport, instead of OpenSSL. To build with Secure
Transport for SSL/TLS, use the configure option `--with-darwinssl`. (It is not
necessary to use the option `--without-ssl`.) This feature requires iOS 5.0 or
later, or OS X 10.5 ("Leopard") or later.
When Secure Transport is in use, the curl options `--cacert` and `--capath`
and their libcurl equivalents, will be ignored, because Secure Transport uses
the certificates stored in the Keychain to evaluate whether or not to trust
the server. This, of course, includes the root certificates that ship with the
OS. The `--cert` and `--engine` options, and their libcurl equivalents, are
currently unimplemented in curl with Secure Transport.
For OS X users: In OS X 10.8 ("Mountain Lion"), Apple made a major overhaul to
the Secure Transport API that, among other things, added support for the newer
TLS 1.1 and 1.2 protocols. To get curl to support TLS 1.1 and 1.2, you must
build curl on Mountain Lion or later, or by using the equivalent SDK. If you
set the `MACOSX_DEPLOYMENT_TARGET` environmental variable to an earlier
version of OS X prior to building curl, then curl will use the new Secure
Transport API on Mountain Lion and later, and fall back on the older API when
the same curl binary is executed on older cats. For example, running these
commands in curl's directory in the shell will build the code such that it
will run on cats as old as OS X 10.6 ("Snow Leopard") (using bash):
export MACOSX_DEPLOYMENT_TARGET="10.6"
./configure --with-darwinssl
make
# Cross compile
Download and unpack the curl package.
'cd' to the new directory. (e.g. `cd curl-7.12.3`)
Set environment variables to point to the cross-compile toolchain and call
configure with any options you need. Be sure and specify the `--host` and
`--build` parameters at configuration time. The following script is an
example of cross-compiling for the IBM 405GP PowerPC processor using the
toolchain from MonteVista for Hardhat Linux.
#! /bin/sh
export PATH=$PATH:/opt/hardhat/devkit/ppc/405/bin
export CPPFLAGS="-I/opt/hardhat/devkit/ppc/405/target/usr/include"
export AR=ppc_405-ar
export AS=ppc_405-as
export LD=ppc_405-ld
export RANLIB=ppc_405-ranlib
export CC=ppc_405-gcc
export NM=ppc_405-nm
./configure --target=powerpc-hardhat-linux \
    --host=powerpc-hardhat-linux \
    --build=i586-pc-linux-gnu \
    --prefix=/opt/hardhat/devkit/ppc/405/target/usr/local \
    --exec-prefix=/usr/local
You may also need to provide a parameter like `--with-random=/dev/urandom` to
configure as it cannot detect the presence of a random number generating
device for a target system. The `--prefix` parameter specifies where curl
will be installed. If `configure` completes successfully, do `make` and `make
install` as usual.
In some cases, you may be able to simplify the above commands to as little as:
./configure --host=ARCH-OS
# REDUCING SIZE
There are a number of configure options that can be used to reduce the size of
libcurl for embedded applications where binary size is an important factor.
First, be sure to set the CFLAGS variable when configuring with any relevant
compiler optimization flags to reduce the size of the binary. For gcc, this
would mean at minimum the -Os option, and potentially the `-march=X`,
`-mdynamic-no-pic` and `-flto` options as well, e.g.
./configure CFLAGS='-Os' LDFLAGS='-Wl,-Bsymbolic'...
Note that newer compilers often produce smaller code than older versions
due to improved optimization.
Be sure to specify as many `--disable-` and `--without-` flags on the
configure command-line as you can to disable all the libcurl features that you
know your application is not going to need. Besides specifying the
`--disable-PROTOCOL` flags for all the types of URLs your application will not
use, here are some other flags that can reduce the size of the library:
- `--disable-ares` (disables support for the C-ARES DNS library)
- `--disable-cookies` (disables support for HTTP cookies)
- `--disable-crypto-auth` (disables HTTP cryptographic authentication)
- `--disable-ipv6` (disables support for IPv6)
- `--disable-manual` (disables support for the built-in documentation)
- `--disable-proxy` (disables support for HTTP and SOCKS proxies)
- `--disable-unix-sockets` (disables support for UNIX sockets)
- `--disable-verbose` (eliminates debugging strings and error code strings)
- `--disable-versioned-symbols` (disables support for versioned symbols)
- `--enable-hidden-symbols` (eliminates unneeded symbols in the shared library)
- `--without-libidn` (disables support for the libidn DNS library)
- `--without-librtmp` (disables support for RTMP)
- `--without-ssl` (disables support for SSL/TLS)
- `--without-zlib` (disables support for on-the-fly decompression)
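As a combined sketch, an HTTP-only build for a size-constrained target might
look like the following. Which protocols and features you can afford to drop
depends entirely on your application, and the exact protocol flag names vary
by curl version, so check `./configure --help` for the set your version
supports before copying this:
  ./configure CFLAGS='-Os' LDFLAGS='-Wl,-s' \
      --disable-ftp --disable-file --disable-ldap --disable-telnet \
      --disable-dict --disable-tftp --disable-pop3 --disable-imap \
      --disable-smtp --disable-rtsp --disable-gopher \
      --disable-cookies --disable-crypto-auth --disable-ipv6 \
      --disable-manual --disable-proxy --disable-unix-sockets \
      --disable-verbose --disable-versioned-symbols \
      --enable-hidden-symbols \
      --without-libidn --without-librtmp --without-ssl --without-zlib
  make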
The GNU compiler and linker have a number of options that can reduce the
size of the libcurl dynamic libraries on some platforms even further.
Specify them by providing appropriate CFLAGS and LDFLAGS variables on the
configure command-line, e.g.
CFLAGS="-Os -ffunction-sections -fdata-sections
-fno-unwind-tables -fno-asynchronous-unwind-tables -flto"
LDFLAGS="-Wl,-s -Wl,-Bsymbolic -Wl,--gc-sections"
Be sure also to strip debugging symbols from your binaries after compiling
using 'strip' (or the appropriate variant if cross-compiling). If space is
really tight, you may be able to remove some unneeded sections of the shared
library using the -R option to objcopy (e.g. the .comment section).
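For instance, the post-build shrinking could look like this (the library path
is illustrative and depends on your build tree; use the toolchain-prefixed
strip/objcopy when cross-compiling):
  LIB=lib/.libs/libcurl.so.4     # illustrative path; adjust to your build
  strip "$LIB"
  objcopy -R .comment "$LIB"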
Using these techniques it is possible to create a basic HTTP-only shared
libcurl library for i386 Linux platforms that is only 113 KiB in size, and an
FTP-only library that is 113 KiB in size (as of libcurl version 7.50.3, using
gcc 5.4.0).
You may find that statically linking libcurl to your application will result
in a lower total size than dynamically linking.
Note that the curl test harness can detect the use of some, but not all, of
the `--disable` statements suggested above. Using them will cause tests relying on
those features to fail. The test harness can be manually forced to skip the
relevant tests by specifying certain key words on the runtests.pl command
line. Following is a list of appropriate key words:
- `--disable-cookies` !cookies
- `--disable-manual` !--manual
- `--disable-proxy` !HTTP\ proxy !proxytunnel !SOCKS4 !SOCKS5
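For example, a build configured with all three of those options could have the
affected tests skipped like this (keywords quoted so the shell does not
interpret the leading '!'):
  cd tests
  ./runtests.pl '!cookies' '!--manual' '!HTTP proxy' '!proxytunnel' '!SOCKS4' '!SOCKS5'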
# PORTS
This is a (probably incomplete) list of known hardware and operating systems
that curl has been compiled for. If you know of a system curl compiles and
runs on that isn't listed, please let us know!
- Alpha DEC OSF 4
- Alpha Digital UNIX v3.2
- Alpha FreeBSD 4.1, 4.5
- Alpha Linux 2.2, 2.4
- Alpha NetBSD 1.5.2
- Alpha OpenBSD 3.0
- Alpha OpenVMS V7.1-1H2
- Alpha Tru64 v5.0 5.1
- AVR32 Linux
- ARM Android 1.5, 2.1, 2.3, 3.2, 4.x
- ARM INTEGRITY
- ARM iOS
- Cell Linux
- Cell Cell OS
- HP-PA HP-UX 9.X 10.X 11.X
- HP-PA Linux
- HP3000 MPE/iX
- MicroBlaze uClinux
- MIPS IRIX 6.2, 6.5
- MIPS Linux
- OS/400
- Pocket PC/Win CE 3.0
- Power AIX 3.2.5, 4.2, 4.3.1, 4.3.2, 5.1, 5.2
- PowerPC Darwin 1.0
- PowerPC INTEGRITY
- PowerPC Linux
- PowerPC Mac OS 9
- PowerPC Mac OS X
- SH4 Linux 2.6.X
- SH4 OS21
- SINIX-Z v5
- Sparc Linux
- Sparc Solaris 2.4, 2.5, 2.5.1, 2.6, 7, 8, 9, 10
- Sparc SunOS 4.1.X
- StrongARM (and other ARM) RISC OS 3.1, 4.02
- StrongARM/ARM7/ARM9 Linux 2.4, 2.6
- StrongARM NetBSD 1.4.1
- Symbian OS (P.I.P.S.) 9.x
- TPF
- Ultrix 4.3a
- UNICOS 9.0
- i386 BeOS
- i386 DOS
- i386 eCos 1.3.1
- i386 Esix 4.1
- i386 FreeBSD
- i386 HURD
- i386 Haiku OS
- i386 Linux 1.3, 2.0, 2.2, 2.3, 2.4, 2.6
- i386 Mac OS X
- i386 MINIX 3.1
- i386 NetBSD
- i386 Novell NetWare
- i386 OS/2
- i386 OpenBSD
- i386 QNX 6
- i386 SCO unix
- i386 Solaris 2.7
- i386 Windows 95, 98, ME, NT, 2000, XP, 2003
- i486 ncr-sysv4.3.03 (NCR MP-RAS)
- ia64 Linux 2.3.99
- m68k AmigaOS 3
- m68k Linux
- m68k uClinux
- m68k OpenBSD
- m88k dg-dgux5.4R3.00
- s390 Linux
- x86_64 Linux
- XScale/PXA250 Linux 2.4
- Nios II uClinux
@@ -0,0 +1,593 @@
      _   _ ____  _
  ___| | | |  _ \| |
 / __| | | | |_) | |
| (__| |_| |  _ <| |___
 \___|\___/|_| \_\_____|
Known Bugs
These are problems and bugs known to exist at the time of this release. Feel
free to join in and help us correct one or more of these! Also be sure to
check the changelog of the current development status, as one or more of these
problems may have been fixed or changed somewhat since this was written!
1. HTTP
1.1 CURLFORM_CONTENTLEN in an array
1.2 Disabling HTTP Pipelining
1.3 STARTTRANSFER time is wrong for HTTP POSTs
1.4 multipart formposts file name encoding
1.5 Expect-100 meets 417
1.6 Unnecessary close when 401 received waiting for 100
1.8 DNS timing is wrong for HTTP redirects
1.9 HTTP/2 frames while in the connection pool kill reuse
1.10 Strips trailing dot from host name
1.11 CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
2. TLS
2.1 CURLINFO_SSL_VERIFYRESULT has limited support
2.2 DER in keychain
2.3 GnuTLS backend skips really long certificate fields
2.4 DarwinSSL won't import PKCS#12 client certificates without a password
3. Email protocols
3.1 IMAP SEARCH ALL truncated response
3.2 No disconnect command
3.3 SMTP to multiple recipients
3.4 POP3 expects "CRLF.CRLF" eob for some single-line responses
4. Command line
4.1 -J with %-encoded file names
4.2 -J with -C - fails
4.3 --retry and transfer timeouts
5. Build and portability issues
5.1 Windows Borland compiler
5.2 curl-config --libs contains private details
5.4 AIX shared build with c-ares fails
5.5 can't handle Unicode arguments in Windows
5.6 cmake support gaps
5.7 Visual Studio project gaps
5.8 configure finding libs in wrong directory
5.9 Utilize Requires.private directives in libcurl.pc
6. Authentication
6.1 NTLM authentication and unicode
6.2 MIT Kerberos for Windows build
6.3 NTLM in system context uses wrong name
6.4 Negotiate and Kerberos V5 need a fake user name
7. FTP
7.1 FTP without or slow 220 response
7.2 FTP with CONNECT and slow server
7.3 FTP with NOBODY and FAILONERROR
7.4 FTP with ACCT
7.5 ASCII FTP
7.6 FTP with NULs in URL parts
7.7 FTP and empty path parts in the URL
7.8 Premature transfer end but healthy control channel
8. TELNET
8.1 TELNET and time limitations don't work
8.2 Microsoft telnet server
9. SFTP and SCP
9.1 SFTP doesn't do CURLOPT_POSTQUOTE correctly
10. SOCKS
10.1 SOCKS proxy connections are done blocking
10.2 SOCKS don't support timeouts
10.3 FTPS over SOCKS
10.4 active FTP over a SOCKS
11. Internals
11.1 Curl leaks .onion hostnames in DNS
11.2 error buffer not set if connection to multiple addresses fails
11.3 c-ares deviates from stock resolver on http://1346569778
12. LDAP and OpenLDAP
12.1 OpenLDAP hangs after returning results
13. TCP/IP
13.1 --interface for ipv6 binds to unusable IP address
==============================================================================
1. HTTP
1.1 CURLFORM_CONTENTLEN in an array
It is not possible to pass a 64-bit value using CURLFORM_CONTENTLEN with
CURLFORM_ARRAY, when compiled on 32-bit platforms that support 64-bit
integers. This is because the underlying structure 'curl_forms' uses a
dual-purpose char* for storing these values via casting. For more information
see the now closed related issue:
https://github.com/curl/curl/issues/608
1.2 Disabling HTTP Pipelining
Disabling HTTP Pipelining when there are ongoing transfers can lead to
heap corruption and crash. https://curl.haxx.se/bug/view.cgi?id=1411
1.3 STARTTRANSFER time is wrong for HTTP POSTs
Wrong STARTTRANSFER timer accounting for POST requests. The timer works fine
with GET requests, but while using POST the time for CURLINFO_STARTTRANSFER_TIME
is wrong. While using POST CURLINFO_STARTTRANSFER_TIME minus
CURLINFO_PRETRANSFER_TIME is near to zero every time.
https://github.com/curl/curl/issues/218
https://curl.haxx.se/bug/view.cgi?id=1213
1.4 multipart formposts file name encoding
When creating multipart formposts, the file name part can be encoded with
something beyond ASCII, but currently libcurl will only pass on the verbatim
string the app provides. There are several browsers that already do this
encoding. The key seems to be the updated draft to RFC2231:
https://tools.ietf.org/html/draft-reschke-rfc2231-in-http-02
1.5 Expect-100 meets 417
If an upload using Expect: 100-continue receives an HTTP 417 response, it
ought to be automatically resent without the Expect:. A workaround is for
the client application to redo the transfer after disabling Expect:.
https://curl.haxx.se/mail/archive-2008-02/0043.html
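With the command-line tool, the described workaround maps to clearing the
header explicitly, e.g. (URL and file name are illustrative):
  curl -H 'Expect:' --data-binary @bigfile https://upload.example.com/endpoint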
1.6 Unnecessary close when 401 received waiting for 100
libcurl closes the connection if an HTTP 401 reply is received while it is
waiting for the 100-continue response.
https://curl.haxx.se/mail/lib-2008-08/0462.html
1.8 DNS timing is wrong for HTTP redirects
When extracting timing information after HTTP redirects, only the last
transfer's results are returned and not the totals:
https://github.com/curl/curl/issues/522
1.9 HTTP/2 frames while in the connection pool kill reuse
If the server sends HTTP/2 frames (like for example an HTTP/2 PING frame) to
curl while the connection is held in curl's connection pool, the socket will
be found readable when considered for reuse and that makes curl think it is
dead and then it will be closed and a new connection gets created instead.
This is *best* fixed by adding monitoring to connections while they are kept
in the pool so that pings can be responded to appropriately.
1.10 Strips trailing dot from host name
When given a URL with a trailing dot for the host name part:
"https://example.com./", libcurl will strip off the dot and use the name
without a dot internally and send it dot-less in HTTP Host: headers and in
the TLS SNI field.
The HTTP part violates RFC 7230 section 5.4 but the SNI part is in accordance
with RFC 6066 section 3.
URLs using these trailing dots are very rare in the wild and we have not seen
or gotten any real-world problems with such URLs reported. The popular
browsers seem to have stayed with not stripping the dot for both uses (thus
they violate RFC 6066 instead of RFC 7230).
Daniel took the discussion to the HTTPbis mailing list in March 2016:
https://lists.w3.org/Archives/Public/ietf-http-wg/2016JanMar/0430.html but
there was no major rush or interest in fixing this. The impression I get is
that most HTTP people would rather not rock the boat now, and instead
prioritize web compatibility over strictly adhering to these RFCs.
Our current approach allows a knowing client to send a custom HTTP header
with the dot added.
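For example (host name illustrative), such a client can do:
  # overrides the Host: header only; the SNI field is still sent without the dot
  curl -H 'Host: example.com.' https://example.com./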
It can also be noted that while adding a trailing dot to the host name in
most (all?) cases will make the name resolve to the same set of IP addresses,
many HTTP servers will not happily accept the trailing dot there unless the
server has been specifically configured to treat it as a valid virtual host.
If URLs with trailing dots for host names become more popular or even just
used more than for just plain fun experiments, I'm sure we will have reason
to go back and reconsider.
See https://github.com/curl/curl/issues/716 for the discussion.
1.11 CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
I'm using libcurl to POST form data using a FILE* with the CURLFORM_STREAM
option of curl_formadd(). I've noticed that if the connection drops at just
the right time, the POST is reattempted without the data from the file. It
seems like the file stream position isn't getting reset to the beginning of
the file. I found the CURLOPT_SEEKFUNCTION option and set that with a
function that performs an fseek() on the FILE*. However, setting that didn't
seem to fix the issue or even get called. See
https://github.com/curl/curl/issues/768
2. TLS
2.1 CURLINFO_SSL_VERIFYRESULT has limited support
CURLINFO_SSL_VERIFYRESULT is only implemented for the OpenSSL and NSS
backends, so relying on this information in a generic app is flaky.
2.2 DER in keychain
Curl doesn't recognize certificates in DER format in keychain, but it works
with PEM. https://curl.haxx.se/bug/view.cgi?id=1065
2.3 GnuTLS backend skips really long certificate fields
libcurl calls gnutls_x509_crt_get_dn() with a fixed buffer size and if the
field is too long in the cert, it'll just return an error and the field will
be displayed blank.
2.4 DarwinSSL won't import PKCS#12 client certificates without a password
libcurl calls SecPKCS12Import with the PKCS#12 client certificate, but that
function rejects certificates that do not have a password.
https://github.com/curl/curl/issues/1308
3. Email protocols
3.1 IMAP SEARCH ALL truncated response
IMAP "SEARCH ALL" truncates output on large boxes. "A quick search of the
code reveals that pingpong.c contains some truncation code, at line 408, when
it deems the server response to be too large truncating it to 40 characters"
https://curl.haxx.se/bug/view.cgi?id=1366
3.2 No disconnect command
The disconnect commands (LOGOUT and QUIT) may not be sent by IMAP, POP3 and
SMTP if a failure occurs during the authentication phase of a connection.
3.3 SMTP to multiple recipients
When sending data to multiple recipients, curl will abort and return failure
if one of the recipients indicates failure (on the "RCPT TO"
command). Ordinary mail programs would proceed and still send to the ones
that can receive data. This is subject to change in the future.
https://curl.haxx.se/bug/view.cgi?id=1116
3.4 POP3 expects "CRLF.CRLF" eob for some single-line responses
You have to tell libcurl not to expect a body when dealing with one-line
response commands. Please see the POP3 examples and test cases which show
this for the NOOP and DELE commands. https://curl.haxx.se/bug/?i=740
4. Command line
4.1 -J with %-encoded file names
-J/--remote-header-name doesn't decode %-encoded file names. RFC6266 details
how it should be done. The can of worms is basically that we have no charset
handling in curl and ASCII >= 128 is a challenge for us. Not to mention that
decoding also means that we need to check for nastiness that is attempted,
like "../" sequences and the like. Probably everything to the left of any
embedded slashes should be cut off.
https://curl.haxx.se/bug/view.cgi?id=1294
4.2 -J with -C - fails
When using -J (with -O), automatically resumed downloading together with "-C
-" fails. Without -J the same command line works! This happens because the
resume logic is worked out before the target file name (and thus its
pre-transfer size) has been figured out!
https://curl.haxx.se/bug/view.cgi?id=1169
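Illustrative commands (URL hypothetical):
  curl -O -J -C - https://example.com/archive.tar.gz   # resume fails with -J
  curl -O -C - https://example.com/archive.tar.gz      # works without -J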
4.3 --retry and transfer timeouts
If using --retry and the transfer timeouts (possibly due to using -m or
-y/-Y) the next attempt doesn't resume the transfer properly from what was
downloaded in the previous attempt but will truncate and restart at the
original position where it was at before the previous failed attempt. See
https://curl.haxx.se/mail/lib-2008-01/0080.html and Mandriva bug report
https://qa.mandriva.com/show_bug.cgi?id=22565
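A sketch of a command hitting this (URL and limits hypothetical): if the
60-second timeout fires, each retry starts over from the original offset
instead of resuming from what was already downloaded:
  curl --retry 3 -m 60 -o big.iso https://example.com/big.iso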
5. Build and portability issues
5.1 Windows Borland compiler
When building with the Windows Borland compiler, it fails because the "tlib"
tool doesn't support hyphens (minus signs) in file names and we have such in
the build. https://curl.haxx.se/bug/view.cgi?id=1222
5.2 curl-config --libs contains private details
"curl-config --libs" will include details set in LDFLAGS when configure is
run that might be needed only for building libcurl. Further, curl-config
--cflags suffers from the same effects with CFLAGS/CPPFLAGS.
5.4 AIX shared build with c-ares fails
curl version 7.12.2 fails on AIX if compiled with --enable-ares. The
workaround is to combine --enable-ares with --disable-shared
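That is, a sketch of the workaround:
  ./configure --enable-ares --disable-shared
  make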
5.5 can't handle Unicode arguments in Windows
If a URL or filename can't be encoded using the user's current codepage then
it can only be encoded properly in the Unicode character set. Windows uses
UTF-16 encoding for Unicode and stores it in wide characters, however curl
and libcurl are not equipped for that at the moment. And, except for Cygwin,
Windows can't use UTF-8 as a locale.
https://curl.haxx.se/bug/?i=345
https://curl.haxx.se/bug/?i=731
5.6 cmake support gaps
The cmake build setup lacks several features that the autoconf build
offers. This includes:
- symbol hiding when the shared library is built
- use of correct soname for the shared library build
- support for several TLS backends is missing
- the unit tests cause link failures in regular non-static builds
- no nghttp2 check
5.7 Visual Studio project gaps
The Visual Studio projects lack some features that the autoconf and nmake
builds offer, such as the following:
- support for zlib and nghttp2
- use of static runtime libraries
- add the test suite components
In addition to this the following could be implemented:
- support for other development IDEs
- add PATH environment variables for third-party DLLs
5.8 configure finding libs in wrong directory
When the configure script checks for third-party libraries, it adds those
directories to the LDFLAGS variable and then tries linking to see if it
works. When successful, the found directory is kept in the LDFLAGS variable
when the script continues to execute and do more tests and possibly check for
more libraries.
This can make subsequent checks for libraries wrongly detect another
installation in a directory that was previously added to LDFLAGS by another
library check!
A possibly better way to do these checks would be to keep the pristine LDFLAGS
even after successful checks and instead add those verified paths to a
separate variable that only after all library checks have been performed gets
appended to LDFLAGS.
5.9 Utilize Requires.private directives in libcurl.pc
https://github.com/curl/curl/issues/864
6. Authentication
6.1 NTLM authentication and unicode
NTLM authentication involving a unicode user name or password only works
properly if built with UNICODE defined together with the WinSSL/schannel
backend. The original problem was mentioned in:
https://curl.haxx.se/mail/lib-2009-10/0024.html
https://curl.haxx.se/bug/view.cgi?id=896
The WinSSL/schannel version was verified to work as mentioned in
https://curl.haxx.se/mail/lib-2012-07/0073.html
6.2 MIT Kerberos for Windows build
libcurl fails to build with MIT Kerberos for Windows (KfW) due to KfW's
library header files exporting symbols/macros that should be kept private to
the KfW library. See ticket #5601 at http://krbdev.mit.edu/rt/
6.3 NTLM in system context uses wrong name
NTLM authentication using SSPI (on Windows) when (lib)curl is running in
"system context" will make it use wrong(?) user name - at least when compared
to what winhttp does. See https://curl.haxx.se/bug/view.cgi?id=535
6.4 Negotiate and Kerberos V5 need a fake user name
In order to get Negotiate (SPNEGO) authentication to work in HTTP or Kerberos
V5 in the e-mail protocols, you need to provide a (fake) user name (this
concerns both curl and the lib), because the code wrongly only considers
authentication if there's a user name provided, by setting
conn->bits.user_passwd in url.c. See https://curl.haxx.se/bug/view.cgi?id=440
and https://curl.haxx.se/mail/lib-2004-08/0182.html. A possible solution is to
either modify this variable to be set, or to introduce a variable such as a
new conn->bits.want_authentication which is set when any of the authentication
options are set.
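With the command-line tool the common idiom is to supply an empty (fake)
credential, e.g. (URL illustrative):
  curl --negotiate -u : https://intranet.example.com/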
7. FTP
7.1 FTP without or slow 220 response
If a connection is made to an FTP server but the server then just never sends
the 220 response or otherwise is dead slow, libcurl will not acknowledge the
connection timeout during that phase but only the "real" timeout - which may
surprise users, as most people probably consider that to be part of the
connect phase. Brought up (and is being misunderstood) in:
https://curl.haxx.se/bug/view.cgi?id=856
7.2 FTP with CONNECT and slow server
When doing FTP over a socks proxy or CONNECT through HTTP proxy and the multi
interface is used, libcurl will fail if the (passive) TCP connection for the
data transfer isn't more or less instant as the code does not properly wait
for the connect to be confirmed. See test case 564 for a first shot at a test
case.
7.3 FTP with NOBODY and FAILONERROR
It seems sensible to be able to use CURLOPT_NOBODY and CURLOPT_FAILONERROR
with FTP to detect if a file exists or not, but it is not working:
https://curl.haxx.se/mail/lib-2008-07/0295.html
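For reference, the command-line equivalent of that option combination would be
something like (server and path hypothetical):
  curl --head --fail ftp://ftp.example.com/path/file.txt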
7.4 FTP with ACCT
When doing an operation over FTP that requires the ACCT command (but not when
logging in), the operation will fail since libcurl doesn't detect this and
thus fails to issue the correct command:
https://curl.haxx.se/bug/view.cgi?id=635
7.5 ASCII FTP
FTP ASCII transfers do not follow RFC959. They don't convert the data
accordingly (neither for sending nor for receiving). RFC 959 section 3.1.1.1
clearly describes how this should be done:
The sender converts the data from an internal character representation to
the standard 8-bit NVT-ASCII representation (see the Telnet
specification). The receiver will convert the data from the standard
form to his own internal form.
Since 7.15.4 at least line endings are converted.
7.6 FTP with NULs in URL parts
FTP URLs passed to curl may contain NUL (0x00) in the RFC 1738 <user>,
<password>, and <fpath> components, encoded as "%00". The problem is that
curl_unescape does not detect this, but instead returns a shortened C string.
From a strict FTP protocol standpoint, NUL is a valid character within RFC
959 <string>, so the way to handle this correctly in curl would be to use a
data structure other than a plain C string, one that can handle embedded NUL
characters. From a practical standpoint, most FTP servers would not
meaningfully support NUL characters within RFC 959 <string>, anyway (e.g.,
Unix pathnames may not contain NUL).
7.7 FTP and empty path parts in the URL
libcurl ignores empty path parts in FTP URLs, whereas RFC1738 states that
such parts should be sent to the server as 'CWD ' (without an argument). The
only exception to this rule is that we knowingly break this if the empty
part is first in the path, as then we use the double slashes to indicate that
the user wants to reach the root dir (this exception SHALL remain even when
this bug is fixed).
7.8 Premature transfer end but healthy control channel
When 'multi_done' is called before the transfer has been completed the normal
way, it is considered a "premature" transfer end. In this situation, libcurl
closes the connection assuming it doesn't know the state of the connection so
it can't be reused for subsequent requests.
With FTP however, this isn't necessarily true but there are a bunch of
situations (listed in the ftp_done code) where it *could* keep the connection
alive even in this situation - but the current code doesn't. Fixing this would
allow libcurl to reuse FTP connections better.
8. TELNET
8.1 TELNET and time limitations don't work
When using telnet, the time limitation options don't work.
https://curl.haxx.se/bug/view.cgi?id=846
8.2 Microsoft telnet server
There seems to be a problem when connecting to the Microsoft telnet server.
https://curl.haxx.se/bug/view.cgi?id=649
9. SFTP and SCP
9.1 SFTP doesn't do CURLOPT_POSTQUOTE correctly
When libcurl sends CURLOPT_POSTQUOTE commands when connected to a SFTP server
using the multi interface, the commands are not being sent correctly and
instead the connection is "cancelled" (the operation is considered done)
prematurely. There is a half-baked (busy-looping) patch provided in the bug
report but it cannot be accepted as-is. See
https://curl.haxx.se/bug/view.cgi?id=748
10. SOCKS
10.1 SOCKS proxy connections are done blocking
Both SOCKS5 and SOCKS4 proxy connections are done blocking, which is very bad
when used with the multi interface.
10.2 SOCKS don't support timeouts
The SOCKS4 connection codes don't properly acknowledge (connect) timeouts.
According to bug #1556528, even the SOCKS5 connect code does not do it right:
https://curl.haxx.se/bug/view.cgi?id=604
When connecting to a SOCKS proxy, the (connect) timeout is not properly
acknowledged after the actual TCP connect (during the SOCKS "negotiate"
phase).
10.3 FTPS over SOCKS
libcurl doesn't support FTPS over a SOCKS proxy.
10.4 active FTP over a SOCKS
libcurl doesn't support active FTP over a SOCKS proxy
11. Internals
11.1 Curl leaks .onion hostnames in DNS
Curl sends DNS requests for hostnames with a .onion TLD. This leaks
information about what the user is attempting to access, and violates this
requirement of RFC7686: https://tools.ietf.org/html/rfc7686
Issue: https://github.com/curl/curl/issues/543
11.2 error buffer not set if connection to multiple addresses fails
If you ask libcurl to resolve a hostname like example.com to IPv6 addresses
only, but you only have IPv4 connectivity, libcurl will correctly fail with
CURLE_COULDNT_CONNECT, but the error buffer set by CURLOPT_ERRORBUFFER
remains empty. Issue: https://github.com/curl/curl/issues/544
11.3 c-ares deviates from stock resolver on http://1346569778
When using the socket resolvers, that URL becomes:
* Rebuilt URL to: http://1346569778/
* Trying 80.67.6.50...
but with c-ares it instead says "Could not resolve: 1346569778 (Domain name
not found)"
See https://github.com/curl/curl/issues/893
12. LDAP and OpenLDAP
12.1 OpenLDAP hangs after returning results
By its configuration defaults, openldap automatically chases referrals on
secondary socket descriptors. The OpenLDAP backend is asynchronous and thus
should monitor all socket descriptors involved. Currently, these secondary
descriptors are not monitored, causing the openldap library to never receive
data from them.
As a temporary workaround, disable referrals chasing by configuration.
The fix is not easy: proper automatic referrals chasing requires a
synchronous bind callback and monitoring an arbitrary number of socket
descriptors for a single easy handle (currently limited to 5).
Generic LDAP is synchronous: OK.
See https://github.com/curl/curl/issues/622 and
https://curl.haxx.se/mail/lib-2016-01/0101.html
13. TCP/IP
13.1 --interface for ipv6 binds to unusable IP address
Since IPv6 provides a lot of addresses with different scopes, binding to an
IPv6 address needs to be done with proper care so that it doesn't bind to a
locally scoped address, as that is bound to fail.
https://github.com/curl/curl/issues/686
@@ -0,0 +1,127 @@
License Mixing
==============
libcurl can be built to use a fair number of third-party libraries, written
and provided by other parties and distributed under their own licenses. Even
libcurl itself contains code that may cause
problems to some. This document attempts to describe what licenses libcurl and
the other libraries use and what possible dilemmas linking and mixing them all
can lead to for end users.
I am not a lawyer and this is not legal advice!
One common dilemma is that [GPL](https://www.gnu.org/licenses/gpl.html)
licensed code is not allowed to be linked with code licensed under the
[Original BSD license](https://spdx.org/licenses/BSD-4-Clause.html) (with the
announcement clause). You may still build your own copies that use them all,
but distributing them as binaries would violate the GPL license - unless
you accompany your license with an
[exception](https://www.gnu.org/licenses/gpl-faq.html#GPLIncompatibleLibs). This
particular problem was addressed when the [Modified BSD
license](https://opensource.org/licenses/BSD-3-Clause) was created, which does
not have the announcement clause that collides with GPL.
## libcurl
Uses an [MIT style license](https://curl.haxx.se/docs/copyright.html) that is
very liberal.
## OpenSSL
(May be used for SSL/TLS support) Uses an Original BSD-style license with an
announcement clause that makes it "incompatible" with GPL. You are not
allowed to ship binaries that link with OpenSSL and include GPL code
(unless that specific GPL code includes an exception for OpenSSL - a habit
that is growing more and more common). If OpenSSL's licensing is a problem
for you, consider using another TLS library.
## GnuTLS
(May be used for SSL/TLS support) Uses the
[LGPL](https://www.gnu.org/licenses/lgpl.html) license. If this is a problem
for you, consider using another TLS library. Also note that GnuTLS itself
depends on and uses other libs (libgcrypt and libgpg-error) and they too are
LGPL- or GPL-licensed.
## WolfSSL
(May be used for SSL/TLS support) Uses the GPL license or a proprietary
license. If this is a problem for you, consider using another TLS library.
## NSS
(May be used for SSL/TLS support) Is covered by the
[MPL](https://www.mozilla.org/MPL/) license, the GPL license and the LGPL
license. You may choose to license the code under MPL terms, GPL terms, or
LGPL terms. These licenses grant you different permissions and impose
different obligations. You should select the license that best meets your
needs.
## axTLS
(May be used for SSL/TLS support) Uses a Modified BSD-style license.
## mbedTLS
(May be used for SSL/TLS support) Uses the [Apache 2.0
license](https://opensource.org/licenses/Apache-2.0) or the GPL license.
You may choose to license the code under Apache 2.0 terms or GPL terms.
These licenses grant you different permissions and impose different
obligations. You should select the license that best meets your needs.
## BoringSSL
(May be used for SSL/TLS support) As an OpenSSL fork, it has the same
license as that.
## libressl
(May be used for SSL/TLS support) As an OpenSSL fork, it has the same
license as that.
## c-ares
(Used for asynchronous name resolves) Uses an MIT license that is very
liberal and imposes no restrictions on any other library or part you may link
with.
## zlib
(Used for compressed Transfer-Encoding support) Uses an MIT-style license
that shouldn't collide with any other library.
## MIT Kerberos
(May be used for GSS support) MIT licensed, that shouldn't collide with any
other parts.
## Heimdal
(May be used for GSS support) Heimdal is Original BSD licensed with the
announcement clause.
## GNU GSS
(May be used for GSS support) GNU GSS is GPL licensed. Note that you may not
distribute binary curl packages that use this if you build curl to also link
and use any Original BSD licensed libraries!
## libidn
(Used for IDNA support) Uses the GNU Lesser General Public License [3]. LGPL
is a variation of GPL with slightly less aggressive "copyleft". This license
requires more requirements to be met when distributing binaries, see the
license for details. Also note that if you distribute a binary that includes
this library, you must also include the full LGPL license text. Please
properly point out what parts of the distributed package that the license
addresses.
## OpenLDAP
(Used for LDAP support) Uses a Modified BSD-style license. Since libcurl uses
OpenLDAP as a shared library only, I have not heard of anyone that ships
OpenLDAP linked with libcurl in an app.
## libssh2
(Used for scp and sftp support) libssh2 uses a Modified BSD-style license.