
debcrafters-packages team mailing list archive

[Merge] ~jchittum/ubuntu/+source/valgrind:lp2116735-questing into ubuntu/+source/valgrind:ubuntu/devel

 

John Chittum has proposed merging ~jchittum/ubuntu/+source/valgrind:lp2116735-questing into ubuntu/+source/valgrind:ubuntu/devel.

Commit message:
New upstream version 3.25.1

Requested reviews:
  Debcrafters packages (debcrafters-packages)

For more details, see:
https://code.launchpad.net/~jchittum/ubuntu/+source/valgrind/+git/valgrind/+merge/490705

Method for creating this new version, ahead of Debian:

# added the Debian Salsa remote (because it's all set up already)
1. git remote add debian https://salsa.debian.org/debian/valgrind.git
2. git fetch debian
# setup magic branches and run gbp
3. git checkout -b master debian/master
4. git checkout -b upstream debian/upstream
5. gbp import-orig --uscan --merge
# setup ubuntu development merge
6. git fetch ubuntu
7. git checkout ubuntu/devel (make sure it's up to date with the remote; git-ubuntu does some mean things with history, so I had to delete my local ubuntu/devel and re-pull)
8. git checkout -b lp2116735-questing ubuntu/devel
9. git cherry-pick 9c478affc89a3cfb71d7be1f02372aeedba779c5
# that's the commit containing the upstream change and _not_ the changelog commit. gbp creates 2 commits, and you want the "older" one (see the sketch below)
10. tested (details below)
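
For reference, here is the whole sequence as one copy-pasteable sketch. The remote
name, branch names and gbp invocation are taken from the steps above; the
cherry-picked commit is a placeholder you have to look up yourself (the older of
the two commits gbp creates):

  git remote add debian https://salsa.debian.org/debian/valgrind.git
  git fetch debian
  git checkout -b master debian/master
  git checkout -b upstream debian/upstream
  gbp import-orig --uscan --merge
  # gbp leaves two commits on master; the older one is the upstream import
  git log --oneline -2 master
  git fetch ubuntu
  git checkout ubuntu/devel        # delete and re-pull if local history has diverged
  git checkout -b lp2116735-questing ubuntu/devel
  git cherry-pick <upstream-import-commit>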

test-build: https://launchpad.net/~jchittum/+archive/ubuntu/valgrind-lp2116735

You can check autopkgtest results as well:

# This is the fast way I use on the CLI (the ppa tool from ppa-dev-tools)...
ppa tests ppa:jchittum/valgrind-lp2116735 --release questing 
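
If you'd rather run the autopkgtests locally, a rough sketch (assuming autopkgtest
is installed and an LXD image for questing has been built with
autopkgtest-build-lxd; the setup command and image name may need adjusting):

  # enable the test PPA inside the testbed, then run valgrind's autopkgtests
  autopkgtest --apt-upgrade \
    --setup-commands='add-apt-repository -y ppa:jchittum/valgrind-lp2116735' \
    valgrind -- lxd autopkgtest/ubuntu/questing/amd64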

Also ran manual tests in a questing container. Ran against the following (illustrative invocations below):

* /usr/bin/true # like the autopkgtests
* libreoffice # something weird: it just _stops_, but the current valgrind gives the same result
* qemu # leaky city
* JRE # crashy city; again, ran with the current valgrind and got the same result.
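
Roughly what those manual runs look like (illustrative invocations, not the exact
commands used; the libreoffice, qemu and JRE binaries are whatever is installed in
the container):

  valgrind /usr/bin/true                              # same smoke test as the autopkgtests
  valgrind --trace-children=yes libreoffice --version # wrapper script, so follow children
  valgrind --leak-check=full qemu-img --version
  valgrind java -version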
-- 
The attached diff has been truncated due to its size.
Your team Debcrafters packages is requested to review the proposed merge of ~jchittum/ubuntu/+source/valgrind:lp2116735-questing into ubuntu/+source/valgrind:ubuntu/devel.
diff --git a/FAQ.txt b/FAQ.txt
index 9341fc7..c67190c 100644
--- a/FAQ.txt
+++ b/FAQ.txt
@@ -1,7 +1,7 @@
 
 
 Valgrind FAQ
-Release 3.24.0 31 Oct 2024
+Release 3.25.1 20 May 2025
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Table of Contents
@@ -350,7 +350,7 @@ There is simply no way for Memcheck to tell which of these was
 originally used. There are a few possible workarounds. Build tcmalloc 
 with "CPPFLAGS=-DTCMALLOC_NO_ALIASES" (best).Use a debug build of 
 tcmalloc (debug builds turn off the alias micro-optimization).Do not 
-link with tcmalloc for the builds that you use for Memecheck testing. 
+link with tcmalloc for the builds that you use for Memcheck testing. 
 
 Second, if you are replacing operator new or operator delete make sure 
 that the compiler does not perform optimizations such as inlining on 
@@ -443,7 +443,23 @@ at this time.
 
 ------------------------------------------------------------------------
 
-5.4. Is it possible to attach Valgrind to a program that is already 
+5.4. I'm developing a Qt application and I get huge numbers of 
+"Conditional jump" errors. Is there anything that I can do about it? 
+
+Yes, there is a workaround. Here is an example error: 
+    Conditional jump or move depends on uninitialised value(s)
+       at 0x1051C39B: ???
+       by 0x12657AA7: ???
+    
+Qt Regular Expressions are built on the pcre2 library. pcre2 uses 
+JITting which means that the errors cannot be suppressed (no function 
+name). However, Qt provides a mechanism to turn off the use of JITting. 
+To do so, use the following environment variable: export 
+QT_ENABLE_REGEXP_JIT=0 
+
+------------------------------------------------------------------------
+
+5.5. Is it possible to attach Valgrind to a program that is already 
 running? 
 
 No. The environment that Valgrind provides for running programs is 
diff --git a/Makefile.all.am b/Makefile.all.am
index dcea269..d4f6b3f 100755
--- a/Makefile.all.am
+++ b/Makefile.all.am
@@ -104,6 +104,7 @@ AM_CFLAGS_BASE = \
 	-Wpointer-arith \
 	-Wstrict-prototypes \
 	-Wmissing-declarations \
+	-Wno-unused-result \
 	@FLAG_W_CAST_ALIGN@ \
 	@FLAG_W_CAST_QUAL@ \
 	@FLAG_W_WRITE_STRINGS@ \
@@ -291,6 +292,11 @@ AM_CFLAGS_PSO_MIPS64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE) \
 				$(AM_CFLAGS_PSO_BASE)
 AM_CCASFLAGS_MIPS64_LINUX  = @FLAG_M64@ -g
 
+AM_FLAG_M3264_RISCV64_LINUX = @FLAG_M64@
+AM_CFLAGS_RISCV64_LINUX     = @FLAG_M64@ $(AM_CFLAGS_BASE)
+AM_CFLAGS_PSO_RISCV64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE) $(AM_CFLAGS_PSO_BASE)
+AM_CCASFLAGS_RISCV64_LINUX  = @FLAG_M64@ -g
+
 AM_FLAG_M3264_X86_SOLARIS   = @FLAG_M32@
 AM_CFLAGS_X86_SOLARIS       = @FLAG_M32@ @PREFERRED_STACK_BOUNDARY_2@ \
 				$(AM_CFLAGS_BASE) -fomit-frame-pointer @SOLARIS_UNDEF_LARGESOURCE@
@@ -352,6 +358,7 @@ PRELOAD_LDFLAGS_S390X_LINUX    = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M64@
 PRELOAD_LDFLAGS_MIPS32_LINUX   = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M32@
 PRELOAD_LDFLAGS_NANOMIPS_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M32@
 PRELOAD_LDFLAGS_MIPS64_LINUX   = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M64@
+PRELOAD_LDFLAGS_RISCV64_LINUX  = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M64@
 PRELOAD_LDFLAGS_X86_SOLARIS    = $(PRELOAD_LDFLAGS_COMMON_SOLARIS) @FLAG_M32@
 PRELOAD_LDFLAGS_AMD64_SOLARIS  = $(PRELOAD_LDFLAGS_COMMON_SOLARIS) @FLAG_M64@
 
diff --git a/Makefile.am b/Makefile.am
index b3e5be5..e67356b 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,5 +1,7 @@
 
-AUTOMAKE_OPTIONS = 1.10
+AUTOMAKE_OPTIONS = 1.13
+
+ACLOCAL_AMFLAGS = -I m4
 
 include $(top_srcdir)/Makefile.all.am 
 
@@ -31,6 +33,7 @@ SUBDIRS = \
 	perf \
 	gdbserver_tests \
 	memcheck/tests/vbit-test \
+	none/tests/s390x/disasm-test \
 	auxprogs \
 	mpi \
 	solaris \
@@ -100,6 +103,9 @@ perf: check
 auxchecks: all
 	$(MAKE) -C auxprogs auxchecks
 
+ltpchecks: all
+	$(MAKE) -C auxprogs ltpchecks
+
 # Nb: no need to include any Makefile.am files here, or files included from
 # them, as automake includes them automatically.  Also not COPYING, README
 # or NEWS.
@@ -116,13 +122,15 @@ EXTRA_DIST = \
 	README.android_emulator \
 	README.mips \
 	README.aarch64 \
+	README.riscv64 \
 	README.solaris \
 	README.freebsd \
 	NEWS.old \
 	valgrind.pc.in \
 	valgrind.spec.in \
 	valgrind.spec \
-	autogen.sh
+	autogen.sh \
+	m4
 
 dist-hook: include/vgversion.h
 	cp -p include/vgversion.h $(distdir)/include/vgversion_dist.h
diff --git a/Makefile.in b/Makefile.in
index 14c90fa..5f26a52 100644
--- a/Makefile.in
+++ b/Makefile.in
@@ -1,7 +1,7 @@
-# Makefile.in generated by automake 1.16.5 from Makefile.am.
+# Makefile.in generated by automake 1.17 from Makefile.am.
 # @configure_input@
 
-# Copyright (C) 1994-2021 Free Software Foundation, Inc.
+# Copyright (C) 1994-2024 Free Software Foundation, Inc.
 
 # This Makefile.in is free software; the Free Software Foundation
 # gives unlimited permission to copy and/or distribute it,
@@ -79,6 +79,8 @@ am__make_running_with_option = \
   test $$has_opt = yes
 am__make_dryrun = (target_option=n; $(am__make_running_with_option))
 am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
+am__rm_f = rm -f $(am__rm_f_notfound)
+am__rm_rf = rm -rf $(am__rm_f_notfound)
 pkgdatadir = $(datadir)/@PACKAGE@
 pkgincludedir = $(includedir)/@PACKAGE@
 pkglibdir = $(libdir)/@PACKAGE@
@@ -176,10 +178,9 @@ am__base_list = \
   sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
   sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
 am__uninstall_files_from_dir = { \
-  test -z "$$files" \
-    || { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
-    || { echo " ( cd '$$dir' && rm -f" $$files ")"; \
-         $(am__cd) "$$dir" && rm -f $$files; }; \
+  { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
+  || { echo " ( cd '$$dir' && rm -f" $$files ")"; \
+       $(am__cd) "$$dir" && echo $$files | $(am__xargs_n) 40 $(am__rm_f); }; \
   }
 am__installdirs = "$(DESTDIR)$(pkgconfigdir)" "$(DESTDIR)$(vglibdir)" \
 	"$(DESTDIR)$(pkgincludedir)"
@@ -225,8 +226,8 @@ distdir = $(PACKAGE)-$(VERSION)
 top_distdir = $(distdir)
 am__remove_distdir = \
   if test -d "$(distdir)"; then \
-    find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \
-      && rm -rf "$(distdir)" \
+    find "$(distdir)" -type d ! -perm -700 -exec chmod u+rwx {} ';' \
+      ; rm -rf "$(distdir)" \
       || { sleep 5 && rm -rf "$(distdir)"; }; \
   else :; fi
 am__post_remove_distdir = $(am__remove_distdir)
@@ -256,14 +257,16 @@ am__relativize = \
   done; \
   reldir="$$dir2"
 DIST_ARCHIVES = $(distdir).tar.gz $(distdir).tar.bz2
-GZIP_ENV = --best
+GZIP_ENV = -9
 DIST_TARGETS = dist-bzip2 dist-gzip
 # Exists only to be overridden by the user if desired.
 AM_DISTCHECK_DVI_TARGET = dvi
 distuninstallcheck_listfiles = find . -type f -print
 am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \
   | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$'
-distcleancheck_listfiles = find . -type f -print
+distcleancheck_listfiles = \
+  find . \( -type f -a \! \
+            \( -name .nfs* -o -name .smb* -o -name .__afs* \) \) -print
 ACLOCAL = @ACLOCAL@
 AMTAR = @AMTAR@
 AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
@@ -297,7 +300,6 @@ DIS_PATH = @DIS_PATH@
 ECHO_C = @ECHO_C@
 ECHO_N = @ECHO_N@
 ECHO_T = @ECHO_T@
-EGREP = @EGREP@
 ETAGS = @ETAGS@
 EXEEXT = @EXEEXT@
 FLAG_32ON64_GXX = @FLAG_32ON64_GXX@
@@ -315,6 +317,7 @@ FLAG_MSA = @FLAG_MSA@
 FLAG_MSSE = @FLAG_MSSE@
 FLAG_NO_BUILD_ID = @FLAG_NO_BUILD_ID@
 FLAG_NO_PIE = @FLAG_NO_PIE@
+FLAG_NO_WARN_EXECSTACK = @FLAG_NO_WARN_EXECSTACK@
 FLAG_OCTEON = @FLAG_OCTEON@
 FLAG_OCTEON2 = @FLAG_OCTEON2@
 FLAG_PIE = @FLAG_PIE@
@@ -368,7 +371,6 @@ GDB = @GDB@
 GLIBC_LIBC_PATH = @GLIBC_LIBC_PATH@
 GLIBC_LIBPTHREAD_PATH = @GLIBC_LIBPTHREAD_PATH@
 GLIBC_VERSION = @GLIBC_VERSION@
-GREP = @GREP@
 HWCAP_HAS_ALTIVEC = @HWCAP_HAS_ALTIVEC@
 HWCAP_HAS_DFP = @HWCAP_HAS_DFP@
 HWCAP_HAS_HTM = @HWCAP_HAS_HTM@
@@ -443,8 +445,10 @@ ac_ct_CXX = @ac_ct_CXX@
 am__include = @am__include@
 am__leading_dot = @am__leading_dot@
 am__quote = @am__quote@
+am__rm_f_notfound = @am__rm_f_notfound@
 am__tar = @am__tar@
 am__untar = @am__untar@
+am__xargs_n = @am__xargs_n@
 bindir = @bindir@
 build = @build@
 build_alias = @build_alias@
@@ -486,7 +490,8 @@ target_alias = @target_alias@
 top_build_prefix = @top_build_prefix@
 top_builddir = @top_builddir@
 top_srcdir = @top_srcdir@
-AUTOMAKE_OPTIONS = 1.10
+AUTOMAKE_OPTIONS = 1.13
+ACLOCAL_AMFLAGS = -I m4
 inplacedir = $(top_builddir)/.in_place
 
 #----------------------------------------------------------------------------
@@ -499,15 +504,15 @@ inplacedir = $(top_builddir)/.in_place
 # that somehow causes VG_(memset) to get into infinite recursion.
 AM_CFLAGS_BASE = -O2 -g -Wall -Wmissing-prototypes -Wshadow \
 	-Wpointer-arith -Wstrict-prototypes -Wmissing-declarations \
-	@FLAG_W_CAST_ALIGN@ @FLAG_W_CAST_QUAL@ @FLAG_W_WRITE_STRINGS@ \
-	@FLAG_W_EMPTY_BODY@ @FLAG_W_FORMAT@ @FLAG_W_FORMAT_SIGNEDNESS@ \
-	@FLAG_W_FORMAT_SECURITY@ @FLAG_W_IGNORED_QUALIFIERS@ \
-	@FLAG_W_MISSING_PARAMETER_TYPE@ @FLAG_W_LOGICAL_OP@ \
-	@FLAG_W_ENUM_CONVERSION@ @FLAG_W_IMPLICIT_FALLTHROUGH@ \
-	@FLAG_W_OLD_STYLE_DECLARATION@ @FLAG_FINLINE_FUNCTIONS@ \
-	@FLAG_FNO_STACK_PROTECTOR@ @FLAG_FSANITIZE@ \
-	-fno-strict-aliasing -fno-builtin $(am__append_1) \
-	$(am__append_2)
+	-Wno-unused-result @FLAG_W_CAST_ALIGN@ @FLAG_W_CAST_QUAL@ \
+	@FLAG_W_WRITE_STRINGS@ @FLAG_W_EMPTY_BODY@ @FLAG_W_FORMAT@ \
+	@FLAG_W_FORMAT_SIGNEDNESS@ @FLAG_W_FORMAT_SECURITY@ \
+	@FLAG_W_IGNORED_QUALIFIERS@ @FLAG_W_MISSING_PARAMETER_TYPE@ \
+	@FLAG_W_LOGICAL_OP@ @FLAG_W_ENUM_CONVERSION@ \
+	@FLAG_W_IMPLICIT_FALLTHROUGH@ @FLAG_W_OLD_STYLE_DECLARATION@ \
+	@FLAG_FINLINE_FUNCTIONS@ @FLAG_FNO_STACK_PROTECTOR@ \
+	@FLAG_FSANITIZE@ -fno-strict-aliasing -fno-builtin \
+	$(am__append_1) $(am__append_2)
 @HAS_DARN_FALSE@@HAS_XSCVHPDP_TRUE@ISA_3_0_BUILD_FLAG = -DHAS_XSCVHPDP  -DHAS_ISA_3_00
 
 # Power ISA flag for use by guest_ppc_helpers.c
@@ -638,6 +643,10 @@ AM_CFLAGS_PSO_MIPS64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE) \
 				$(AM_CFLAGS_PSO_BASE)
 
 AM_CCASFLAGS_MIPS64_LINUX = @FLAG_M64@ -g
+AM_FLAG_M3264_RISCV64_LINUX = @FLAG_M64@
+AM_CFLAGS_RISCV64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE)
+AM_CFLAGS_PSO_RISCV64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE) $(AM_CFLAGS_PSO_BASE)
+AM_CCASFLAGS_RISCV64_LINUX = @FLAG_M64@ -g
 AM_FLAG_M3264_X86_SOLARIS = @FLAG_M32@
 AM_CFLAGS_X86_SOLARIS = @FLAG_M32@ @PREFERRED_STACK_BOUNDARY_2@ \
 				$(AM_CFLAGS_BASE) -fomit-frame-pointer @SOLARIS_UNDEF_LARGESOURCE@
@@ -686,6 +695,7 @@ PRELOAD_LDFLAGS_S390X_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M64@
 PRELOAD_LDFLAGS_MIPS32_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M32@
 PRELOAD_LDFLAGS_NANOMIPS_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M32@
 PRELOAD_LDFLAGS_MIPS64_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M64@
+PRELOAD_LDFLAGS_RISCV64_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M64@
 PRELOAD_LDFLAGS_X86_SOLARIS = $(PRELOAD_LDFLAGS_COMMON_SOLARIS) @FLAG_M32@
 PRELOAD_LDFLAGS_AMD64_SOLARIS = $(PRELOAD_LDFLAGS_COMMON_SOLARIS) @FLAG_M64@
 TOOLS = \
@@ -717,6 +727,7 @@ SUBDIRS = \
 	perf \
 	gdbserver_tests \
 	memcheck/tests/vbit-test \
+	none/tests/s390x/disasm-test \
 	auxprogs \
 	mpi \
 	solaris \
@@ -767,13 +778,15 @@ EXTRA_DIST = \
 	README.android_emulator \
 	README.mips \
 	README.aarch64 \
+	README.riscv64 \
 	README.solaris \
 	README.freebsd \
 	NEWS.old \
 	valgrind.pc.in \
 	valgrind.spec.in \
 	valgrind.spec \
-	autogen.sh
+	autogen.sh \
+	m4
 
 dist_noinst_SCRIPTS = \
 	vg-in-place
@@ -825,12 +838,12 @@ config.h: stamp-h1
 	@test -f $@ || $(MAKE) $(AM_MAKEFLAGS) stamp-h1
 
 stamp-h1: $(srcdir)/config.h.in $(top_builddir)/config.status
-	@rm -f stamp-h1
-	cd $(top_builddir) && $(SHELL) ./config.status config.h
+	$(AM_V_at)rm -f stamp-h1
+	$(AM_V_GEN)cd $(top_builddir) && $(SHELL) ./config.status config.h
 $(srcdir)/config.h.in: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) 
-	($(am__cd) $(top_srcdir) && $(AUTOHEADER))
-	rm -f stamp-h1
-	touch $@
+	$(AM_V_GEN)($(am__cd) $(top_srcdir) && $(AUTOHEADER))
+	$(AM_V_at)rm -f stamp-h1
+	$(AM_V_at)touch $@
 
 distclean-hdr:
 	-rm -f config.h stamp-h1
@@ -1022,7 +1035,7 @@ distdir: $(BUILT_SOURCES)
 
 distdir-am: $(DISTFILES)
 	$(am__remove_distdir)
-	test -d "$(distdir)" || mkdir "$(distdir)"
+	$(AM_V_at)$(MKDIR_P) "$(distdir)"
 	@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
 	topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
 	list='$(DISTFILES)'; \
@@ -1135,7 +1148,7 @@ dist dist-all:
 distcheck: dist
 	case '$(DIST_ARCHIVES)' in \
 	*.tar.gz*) \
-	  eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).tar.gz | $(am__untar) ;;\
+	  eval GZIP= gzip -dc $(distdir).tar.gz | $(am__untar) ;;\
 	*.tar.bz2*) \
 	  bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\
 	*.tar.lz*) \
@@ -1145,7 +1158,7 @@ distcheck: dist
 	*.tar.Z*) \
 	  uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
 	*.shar.gz*) \
-	  eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).shar.gz | unshar ;;\
+	  eval GZIP= gzip -dc $(distdir).shar.gz | unshar ;;\
 	*.zip*) \
 	  unzip $(distdir).zip ;;\
 	*.tar.zst*) \
@@ -1249,16 +1262,16 @@ install-strip:
 mostlyclean-generic:
 
 clean-generic:
-	-test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
+	-$(am__rm_f) $(CLEANFILES)
 
 distclean-generic:
-	-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-	-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
+	-$(am__rm_f) $(CONFIG_CLEAN_FILES)
+	-test . = "$(srcdir)" || $(am__rm_f) $(CONFIG_CLEAN_VPATH_FILES)
 
 maintainer-clean-generic:
 	@echo "This command is intended for maintainers to use"
 	@echo "it deletes files that may require special tools to rebuild."
-	-test -z "$(BUILT_SOURCES)" || rm -f $(BUILT_SOURCES)
+	-$(am__rm_f) $(BUILT_SOURCES)
 clean: clean-recursive
 
 clean-am: clean-generic clean-local mostlyclean-am
@@ -1464,6 +1477,9 @@ perf: check
 auxchecks: all
 	$(MAKE) -C auxprogs auxchecks
 
+ltpchecks: all
+	$(MAKE) -C auxprogs ltpchecks
+
 dist-hook: include/vgversion.h
 	cp -p include/vgversion.h $(distdir)/include/vgversion_dist.h
 
@@ -1488,3 +1504,10 @@ include/vgversion.h:
 # Tell versions [3.59,3.63) of GNU make to not export all variables.
 # Otherwise a system limit (for SysV at least) may be exceeded.
 .NOEXPORT:
+
+# Tell GNU make to disable its built-in pattern rules.
+%:: %,v
+%:: RCS/%,v
+%:: RCS/%
+%:: s.%
+%:: SCCS/s.%
diff --git a/Makefile.tool.am b/Makefile.tool.am
index c779596..7f18d25 100644
--- a/Makefile.tool.am
+++ b/Makefile.tool.am
@@ -110,6 +110,9 @@ TOOL_LDFLAGS_MIPS64_LINUX = \
 	-static -nodefaultlibs -nostartfiles -u __start @FLAG_NO_BUILD_ID@ \
 	@FLAG_M64@
 
+TOOL_LDFLAGS_RISCV64_LINUX = \
+	$(TOOL_LDFLAGS_COMMON_LINUX) @FLAG_M64@
+
 TOOL_LDFLAGS_X86_SOLARIS = \
 	$(TOOL_LDFLAGS_COMMON_SOLARIS) @FLAG_M32@
 
@@ -181,6 +184,9 @@ LIBREPLACEMALLOC_MIPS32_LINUX = \
 LIBREPLACEMALLOC_MIPS64_LINUX = \
 	$(top_builddir)/coregrind/libreplacemalloc_toolpreload-mips64-linux.a
 
+LIBREPLACEMALLOC_RISCV64_LINUX = \
+	$(top_builddir)/coregrind/libreplacemalloc_toolpreload-riscv64-linux.a
+
 LIBREPLACEMALLOC_X86_SOLARIS = \
 	$(top_builddir)/coregrind/libreplacemalloc_toolpreload-x86-solaris.a
 
@@ -258,6 +264,11 @@ LIBREPLACEMALLOC_LDFLAGS_MIPS64_LINUX = \
 	$(LIBREPLACEMALLOC_MIPS64_LINUX) \
 	-Wl,--no-whole-archive
 
+LIBREPLACEMALLOC_LDFLAGS_RISCV64_LINUX = \
+	-Wl,--whole-archive \
+	$(LIBREPLACEMALLOC_RISCV64_LINUX) \
+	-Wl,--no-whole-archive
+
 LIBREPLACEMALLOC_LDFLAGS_X86_SOLARIS = \
 	-Wl,--whole-archive \
 	$(LIBREPLACEMALLOC_X86_SOLARIS) \
diff --git a/Makefile.vex.am b/Makefile.vex.am
index c1244a6..f75e9b4 100644
--- a/Makefile.vex.am
+++ b/Makefile.vex.am
@@ -26,6 +26,7 @@ pkginclude_HEADERS = \
 	pub/libvex_guest_s390x.h \
 	pub/libvex_guest_mips32.h \
 	pub/libvex_guest_mips64.h \
+	pub/libvex_guest_riscv64.h \
 	pub/libvex_s390x_common.h \
 	pub/libvex_ir.h \
 	pub/libvex_trc_values.h \
@@ -49,6 +50,7 @@ noinst_HEADERS = \
 	priv/guest_mips_defs.h \
 	priv/mips_defs.h \
 	priv/guest_nanomips_defs.h \
+	priv/guest_riscv64_defs.h \
 	priv/host_generic_regs.h \
 	priv/host_generic_simd64.h \
 	priv/host_generic_simd128.h \
@@ -65,7 +67,8 @@ noinst_HEADERS = \
 	priv/s390_defs.h \
 	priv/host_mips_defs.h \
 	priv/host_nanomips_defs.h \
-	priv/common_nanomips_defs.h
+	priv/common_nanomips_defs.h \
+	priv/host_riscv64_defs.h
 
 BUILT_SOURCES = pub/libvex_guest_offsets.h
 CLEANFILES    = pub/libvex_guest_offsets.h
@@ -94,7 +97,8 @@ pub/libvex_guest_offsets.h: auxprogs/genoffsets.c \
 			    pub/libvex_guest_arm64.h \
 			    pub/libvex_guest_s390x.h \
 			    pub/libvex_guest_mips32.h \
-			    pub/libvex_guest_mips64.h
+			    pub/libvex_guest_mips64.h \
+			    pub/libvex_guest_riscv64.h
 	rm -f auxprogs/genoffsets.s
 	$(mkdir_p) auxprogs pub
 	$(CC) $(CFLAGS_FOR_GENOFFSETS) \
@@ -152,6 +156,8 @@ LIBVEX_SOURCES_COMMON = \
 	priv/guest_mips_toIR.c \
 	priv/guest_nanomips_helpers.c \
 	priv/guest_nanomips_toIR.c \
+	priv/guest_riscv64_helpers.c \
+	priv/guest_riscv64_toIR.c \
 	priv/host_generic_regs.c \
 	priv/host_generic_simd64.c \
 	priv/host_generic_simd128.c \
@@ -176,7 +182,9 @@ LIBVEX_SOURCES_COMMON = \
 	priv/host_mips_defs.c \
 	priv/host_nanomips_defs.c \
 	priv/host_mips_isel.c \
-	priv/host_nanomips_isel.c
+	priv/host_nanomips_isel.c \
+	priv/host_riscv64_defs.c \
+	priv/host_riscv64_isel.c
 
 LIBVEXMULTIARCH_SOURCES = priv/multiarch_main_main.c
 
diff --git a/Makefile.vex.in b/Makefile.vex.in
index 4dad853..4a59fc6 100644
--- a/Makefile.vex.in
+++ b/Makefile.vex.in
@@ -1,7 +1,7 @@
-# Makefile.vex.in generated by automake 1.16.5 from Makefile.vex.am.
+# Makefile.vex.in generated by automake 1.17 from Makefile.vex.am.
 # @configure_input@
 
-# Copyright (C) 1994-2021 Free Software Foundation, Inc.
+# Copyright (C) 1994-2024 Free Software Foundation, Inc.
 
 # This Makefile.in is free software; the Free Software Foundation
 # gives unlimited permission to copy and/or distribute it,
@@ -80,6 +80,8 @@ am__make_running_with_option = \
   test $$has_opt = yes
 am__make_dryrun = (target_option=n; $(am__make_running_with_option))
 am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
+am__rm_f = rm -f $(am__rm_f_notfound)
+am__rm_rf = rm -rf $(am__rm_f_notfound)
 pkgdatadir = $(datadir)/@PACKAGE@
 pkgincludedir = $(includedir)/@PACKAGE@
 pkglibdir = $(libdir)/@PACKAGE@
@@ -146,20 +148,20 @@ am__base_list = \
   sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
   sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
 am__uninstall_files_from_dir = { \
-  test -z "$$files" \
-    || { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
-    || { echo " ( cd '$$dir' && rm -f" $$files ")"; \
-         $(am__cd) "$$dir" && rm -f $$files; }; \
+  { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
+  || { echo " ( cd '$$dir' && rm -f" $$files ")"; \
+       $(am__cd) "$$dir" && echo $$files | $(am__xargs_n) 40 $(am__rm_f); }; \
   }
 am__installdirs = "$(DESTDIR)$(pkglibdir)" \
 	"$(DESTDIR)$(pkgincludedir)"
 LIBRARIES = $(pkglib_LIBRARIES)
-ARFLAGS = cru
+ARFLAGS = cr
 AM_V_AR = $(am__v_AR_@AM_V@)
 am__v_AR_ = $(am__v_AR_@AM_DEFAULT_V@)
 am__v_AR_0 = @echo "  AR      " $@;
 am__v_AR_1 = 
 libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_AR = $(AR) $(ARFLAGS)
+libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_RANLIB = $(RANLIB)
 libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_LIBADD =
 am__dirstamp = $(am__leading_dot)dirstamp
 am__objects_1 = priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-main_globals.$(OBJEXT) \
@@ -188,6 +190,8 @@ am__objects_1 = priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-main_globals.$(OBJEX
 	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_mips_toIR.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_helpers.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_toIR.$(OBJEXT) \
+	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.$(OBJEXT) \
+	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_regs.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_simd64.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_simd128.$(OBJEXT) \
@@ -212,11 +216,14 @@ am__objects_1 = priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-main_globals.$(OBJEX
 	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_mips_defs.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_defs.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_mips_isel.$(OBJEXT) \
-	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_isel.$(OBJEXT)
+	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_isel.$(OBJEXT) \
+	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.$(OBJEXT) \
+	priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.$(OBJEXT)
 am_libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS = $(am__objects_1)
 libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS =  \
 	$(am_libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS)
 libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_AR = $(AR) $(ARFLAGS)
+libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_RANLIB = $(RANLIB)
 libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_LIBADD =
 am__libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_SOURCES_DIST =  \
 	priv/main_globals.c priv/main_main.c priv/main_util.c \
@@ -230,7 +237,8 @@ am__libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_SOURCES_DIST =  \
 	priv/guest_s390_helpers.c priv/guest_s390_toIR.c \
 	priv/guest_mips_helpers.c priv/guest_mipsdsp_toIR.c \
 	priv/guest_mips_toIR.c priv/guest_nanomips_helpers.c \
-	priv/guest_nanomips_toIR.c priv/host_generic_regs.c \
+	priv/guest_nanomips_toIR.c priv/guest_riscv64_helpers.c \
+	priv/guest_riscv64_toIR.c priv/host_generic_regs.c \
 	priv/host_generic_simd64.c priv/host_generic_simd128.c \
 	priv/host_generic_simd256.c priv/host_generic_maddf.c \
 	priv/host_amd64_maddf.c priv/host_generic_reg_alloc2.c \
@@ -241,7 +249,8 @@ am__libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_SOURCES_DIST =  \
 	priv/host_arm64_defs.c priv/host_arm64_isel.c \
 	priv/host_s390_defs.c priv/host_s390_isel.c priv/s390_disasm.c \
 	priv/host_mips_defs.c priv/host_nanomips_defs.c \
-	priv/host_mips_isel.c priv/host_nanomips_isel.c
+	priv/host_mips_isel.c priv/host_nanomips_isel.c \
+	priv/host_riscv64_defs.c priv/host_riscv64_isel.c
 am__objects_2 = priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_main.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_util.$(OBJEXT) \
@@ -268,6 +277,8 @@ am__objects_2 = priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.$(OBJEX
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_mips_toIR.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_helpers.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_toIR.$(OBJEXT) \
+	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.$(OBJEXT) \
+	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_regs.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_simd64.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_simd128.$(OBJEXT) \
@@ -292,12 +303,15 @@ am__objects_2 = priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.$(OBJEX
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_mips_defs.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_defs.$(OBJEXT) \
 	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_mips_isel.$(OBJEXT) \
-	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_isel.$(OBJEXT)
+	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_isel.$(OBJEXT) \
+	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.$(OBJEXT) \
+	priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.$(OBJEXT)
 @VGCONF_HAVE_PLATFORM_SEC_TRUE@am_libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_OBJECTS =  \
 @VGCONF_HAVE_PLATFORM_SEC_TRUE@	$(am__objects_2)
 libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_OBJECTS =  \
 	$(am_libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_OBJECTS)
 libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_AR = $(AR) $(ARFLAGS)
+libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_RANLIB = $(RANLIB)
 libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_LIBADD =
 am__objects_3 = priv/libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-multiarch_main_main.$(OBJEXT)
 am_libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS =  \
@@ -305,6 +319,7 @@ am_libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS =  \
 libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS =  \
 	$(am_libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS)
 libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_AR = $(AR) $(ARFLAGS)
+libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_RANLIB = $(RANLIB)
 libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_LIBADD =
 am__libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_SOURCES_DIST =  \
 	priv/multiarch_main_main.c
@@ -343,6 +358,8 @@ am__depfiles_remade = priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-gues
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_toIR.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_ppc_helpers.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_ppc_toIR.Po \
+	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Po \
+	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_s390_helpers.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_s390_toIR.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_x86_helpers.Po \
@@ -367,6 +384,8 @@ am__depfiles_remade = priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-gues
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_isel.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_ppc_defs.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_ppc_isel.Po \
+	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Po \
+	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_s390_defs.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_s390_isel.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_x86_defs.Po \
@@ -394,6 +413,8 @@ am__depfiles_remade = priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-gues
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_toIR.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_ppc_helpers.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_ppc_toIR.Po \
+	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Po \
+	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_s390_helpers.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_s390_toIR.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_x86_helpers.Po \
@@ -418,6 +439,8 @@ am__depfiles_remade = priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-gues
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_isel.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_ppc_defs.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_ppc_isel.Po \
+	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Po \
+	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_s390_defs.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_s390_isel.Po \
 	priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_x86_defs.Po \
@@ -518,7 +541,6 @@ DIS_PATH = @DIS_PATH@
 ECHO_C = @ECHO_C@
 ECHO_N = @ECHO_N@
 ECHO_T = @ECHO_T@
-EGREP = @EGREP@
 ETAGS = @ETAGS@
 EXEEXT = @EXEEXT@
 FLAG_32ON64_GXX = @FLAG_32ON64_GXX@
@@ -536,6 +558,7 @@ FLAG_MSA = @FLAG_MSA@
 FLAG_MSSE = @FLAG_MSSE@
 FLAG_NO_BUILD_ID = @FLAG_NO_BUILD_ID@
 FLAG_NO_PIE = @FLAG_NO_PIE@
+FLAG_NO_WARN_EXECSTACK = @FLAG_NO_WARN_EXECSTACK@
 FLAG_OCTEON = @FLAG_OCTEON@
 FLAG_OCTEON2 = @FLAG_OCTEON2@
 FLAG_PIE = @FLAG_PIE@
@@ -589,7 +612,6 @@ GDB = @GDB@
 GLIBC_LIBC_PATH = @GLIBC_LIBC_PATH@
 GLIBC_LIBPTHREAD_PATH = @GLIBC_LIBPTHREAD_PATH@
 GLIBC_VERSION = @GLIBC_VERSION@
-GREP = @GREP@
 HWCAP_HAS_ALTIVEC = @HWCAP_HAS_ALTIVEC@
 HWCAP_HAS_DFP = @HWCAP_HAS_DFP@
 HWCAP_HAS_HTM = @HWCAP_HAS_HTM@
@@ -664,8 +686,10 @@ ac_ct_CXX = @ac_ct_CXX@
 am__include = @am__include@
 am__leading_dot = @am__leading_dot@
 am__quote = @am__quote@
+am__rm_f_notfound = @am__rm_f_notfound@
 am__tar = @am__tar@
 am__untar = @am__untar@
+am__xargs_n = @am__xargs_n@
 bindir = @bindir@
 build = @build@
 build_alias = @build_alias@
@@ -719,15 +743,15 @@ inplacedir = $(top_builddir)/.in_place
 # that somehow causes VG_(memset) to get into infinite recursion.
 AM_CFLAGS_BASE = -O2 -g -Wall -Wmissing-prototypes -Wshadow \
 	-Wpointer-arith -Wstrict-prototypes -Wmissing-declarations \
-	@FLAG_W_CAST_ALIGN@ @FLAG_W_CAST_QUAL@ @FLAG_W_WRITE_STRINGS@ \
-	@FLAG_W_EMPTY_BODY@ @FLAG_W_FORMAT@ @FLAG_W_FORMAT_SIGNEDNESS@ \
-	@FLAG_W_FORMAT_SECURITY@ @FLAG_W_IGNORED_QUALIFIERS@ \
-	@FLAG_W_MISSING_PARAMETER_TYPE@ @FLAG_W_LOGICAL_OP@ \
-	@FLAG_W_ENUM_CONVERSION@ @FLAG_W_IMPLICIT_FALLTHROUGH@ \
-	@FLAG_W_OLD_STYLE_DECLARATION@ @FLAG_FINLINE_FUNCTIONS@ \
-	@FLAG_FNO_STACK_PROTECTOR@ @FLAG_FSANITIZE@ \
-	-fno-strict-aliasing -fno-builtin $(am__append_1) \
-	$(am__append_2)
+	-Wno-unused-result @FLAG_W_CAST_ALIGN@ @FLAG_W_CAST_QUAL@ \
+	@FLAG_W_WRITE_STRINGS@ @FLAG_W_EMPTY_BODY@ @FLAG_W_FORMAT@ \
+	@FLAG_W_FORMAT_SIGNEDNESS@ @FLAG_W_FORMAT_SECURITY@ \
+	@FLAG_W_IGNORED_QUALIFIERS@ @FLAG_W_MISSING_PARAMETER_TYPE@ \
+	@FLAG_W_LOGICAL_OP@ @FLAG_W_ENUM_CONVERSION@ \
+	@FLAG_W_IMPLICIT_FALLTHROUGH@ @FLAG_W_OLD_STYLE_DECLARATION@ \
+	@FLAG_FINLINE_FUNCTIONS@ @FLAG_FNO_STACK_PROTECTOR@ \
+	@FLAG_FSANITIZE@ -fno-strict-aliasing -fno-builtin \
+	$(am__append_1) $(am__append_2)
 @HAS_DARN_FALSE@@HAS_XSCVHPDP_TRUE@ISA_3_0_BUILD_FLAG = -DHAS_XSCVHPDP  -DHAS_ISA_3_00
 
 # Power ISA flag for use by guest_ppc_helpers.c
@@ -858,6 +882,10 @@ AM_CFLAGS_PSO_MIPS64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE) \
 				$(AM_CFLAGS_PSO_BASE)
 
 AM_CCASFLAGS_MIPS64_LINUX = @FLAG_M64@ -g
+AM_FLAG_M3264_RISCV64_LINUX = @FLAG_M64@
+AM_CFLAGS_RISCV64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE)
+AM_CFLAGS_PSO_RISCV64_LINUX = @FLAG_M64@ $(AM_CFLAGS_BASE) $(AM_CFLAGS_PSO_BASE)
+AM_CCASFLAGS_RISCV64_LINUX = @FLAG_M64@ -g
 AM_FLAG_M3264_X86_SOLARIS = @FLAG_M32@
 AM_CFLAGS_X86_SOLARIS = @FLAG_M32@ @PREFERRED_STACK_BOUNDARY_2@ \
 				$(AM_CFLAGS_BASE) -fomit-frame-pointer @SOLARIS_UNDEF_LARGESOURCE@
@@ -906,6 +934,7 @@ PRELOAD_LDFLAGS_S390X_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M64@
 PRELOAD_LDFLAGS_MIPS32_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M32@
 PRELOAD_LDFLAGS_NANOMIPS_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M32@
 PRELOAD_LDFLAGS_MIPS64_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M64@
+PRELOAD_LDFLAGS_RISCV64_LINUX = $(PRELOAD_LDFLAGS_COMMON_LINUX) @FLAG_M64@
 PRELOAD_LDFLAGS_X86_SOLARIS = $(PRELOAD_LDFLAGS_COMMON_SOLARIS) @FLAG_M32@
 PRELOAD_LDFLAGS_AMD64_SOLARIS = $(PRELOAD_LDFLAGS_COMMON_SOLARIS) @FLAG_M64@
 
@@ -932,6 +961,7 @@ pkginclude_HEADERS = \
 	pub/libvex_guest_s390x.h \
 	pub/libvex_guest_mips32.h \
 	pub/libvex_guest_mips64.h \
+	pub/libvex_guest_riscv64.h \
 	pub/libvex_s390x_common.h \
 	pub/libvex_ir.h \
 	pub/libvex_trc_values.h \
@@ -955,6 +985,7 @@ noinst_HEADERS = \
 	priv/guest_mips_defs.h \
 	priv/mips_defs.h \
 	priv/guest_nanomips_defs.h \
+	priv/guest_riscv64_defs.h \
 	priv/host_generic_regs.h \
 	priv/host_generic_simd64.h \
 	priv/host_generic_simd128.h \
@@ -971,7 +1002,8 @@ noinst_HEADERS = \
 	priv/s390_defs.h \
 	priv/host_mips_defs.h \
 	priv/host_nanomips_defs.h \
-	priv/common_nanomips_defs.h
+	priv/common_nanomips_defs.h \
+	priv/host_riscv64_defs.h
 
 BUILT_SOURCES = pub/libvex_guest_offsets.h
 CLEANFILES = pub/libvex_guest_offsets.h
@@ -1015,6 +1047,8 @@ LIBVEX_SOURCES_COMMON = \
 	priv/guest_mips_toIR.c \
 	priv/guest_nanomips_helpers.c \
 	priv/guest_nanomips_toIR.c \
+	priv/guest_riscv64_helpers.c \
+	priv/guest_riscv64_toIR.c \
 	priv/host_generic_regs.c \
 	priv/host_generic_simd64.c \
 	priv/host_generic_simd128.c \
@@ -1039,7 +1073,9 @@ LIBVEX_SOURCES_COMMON = \
 	priv/host_mips_defs.c \
 	priv/host_nanomips_defs.c \
 	priv/host_mips_isel.c \
-	priv/host_nanomips_isel.c
+	priv/host_nanomips_isel.c \
+	priv/host_riscv64_defs.c \
+	priv/host_riscv64_isel.c
 
 LIBVEXMULTIARCH_SOURCES = priv/multiarch_main_main.c
 LIBVEX_CFLAGS_NO_LTO = \
@@ -1146,13 +1182,13 @@ uninstall-pkglibLIBRARIES:
 	dir='$(DESTDIR)$(pkglibdir)'; $(am__uninstall_files_from_dir)
 
 clean-pkglibLIBRARIES:
-	-test -z "$(pkglib_LIBRARIES)" || rm -f $(pkglib_LIBRARIES)
+	-$(am__rm_f) $(pkglib_LIBRARIES)
 priv/$(am__dirstamp):
 	@$(MKDIR_P) priv
-	@: > priv/$(am__dirstamp)
+	@: >>priv/$(am__dirstamp)
 priv/$(DEPDIR)/$(am__dirstamp):
 	@$(MKDIR_P) priv/$(DEPDIR)
-	@: > priv/$(DEPDIR)/$(am__dirstamp)
+	@: >>priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-main_globals.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-main_main.$(OBJEXT):  \
@@ -1205,6 +1241,10 @@ priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_helpers.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_toIR.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.$(OBJEXT):  \
+	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.$(OBJEXT):  \
+	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_regs.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_simd64.$(OBJEXT):  \
@@ -1255,11 +1295,15 @@ priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_mips_isel.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_isel.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.$(OBJEXT):  \
+	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.$(OBJEXT):  \
+	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 
 libvex-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a: $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_DEPENDENCIES) $(EXTRA_libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_DEPENDENCIES) 
 	$(AM_V_at)-rm -f libvex-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
 	$(AM_V_AR)$(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_AR) libvex-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_LIBADD)
-	$(AM_V_at)$(RANLIB) libvex-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
+	$(AM_V_at)$(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_RANLIB) libvex-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
 priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_main.$(OBJEXT):  \
@@ -1312,6 +1356,10 @@ priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_helpers.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_toIR.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.$(OBJEXT):  \
+	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.$(OBJEXT):  \
+	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_regs.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_simd64.$(OBJEXT):  \
@@ -1362,25 +1410,29 @@ priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_mips_isel.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_isel.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.$(OBJEXT):  \
+	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.$(OBJEXT):  \
+	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 
 libvex-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a: $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_OBJECTS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_DEPENDENCIES) $(EXTRA_libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_DEPENDENCIES) 
 	$(AM_V_at)-rm -f libvex-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
 	$(AM_V_AR)$(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_AR) libvex-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_OBJECTS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_LIBADD)
-	$(AM_V_at)$(RANLIB) libvex-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
+	$(AM_V_at)$(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_RANLIB) libvex-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
 priv/libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-multiarch_main_main.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 
 libvexmultiarch-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a: $(libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS) $(libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_DEPENDENCIES) $(EXTRA_libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_DEPENDENCIES) 
 	$(AM_V_at)-rm -f libvexmultiarch-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
 	$(AM_V_AR)$(libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_AR) libvexmultiarch-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a $(libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_OBJECTS) $(libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_LIBADD)
-	$(AM_V_at)$(RANLIB) libvexmultiarch-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
+	$(AM_V_at)$(libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_RANLIB) libvexmultiarch-@VGCONF_ARCH_PRI@-@VGCONF_OS@.a
 priv/libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-multiarch_main_main.$(OBJEXT):  \
 	priv/$(am__dirstamp) priv/$(DEPDIR)/$(am__dirstamp)
 
 libvexmultiarch-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a: $(libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_OBJECTS) $(libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_DEPENDENCIES) $(EXTRA_libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_DEPENDENCIES) 
 	$(AM_V_at)-rm -f libvexmultiarch-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
 	$(AM_V_AR)$(libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_AR) libvexmultiarch-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a $(libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_OBJECTS) $(libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_LIBADD)
-	$(AM_V_at)$(RANLIB) libvexmultiarch-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
+	$(AM_V_at)$(libvexmultiarch_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_RANLIB) libvexmultiarch-@VGCONF_ARCH_SEC@-@VGCONF_OS@.a
 
 mostlyclean-compile:
 	-rm -f *.$(OBJEXT)
@@ -1404,6 +1456,8 @@ distclean-compile:
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_toIR.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_ppc_helpers.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_ppc_toIR.Po@am__quote@ # am--include-marker
+@AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Po@am__quote@ # am--include-marker
+@AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_s390_helpers.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_s390_toIR.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_x86_helpers.Po@am__quote@ # am--include-marker
@@ -1428,6 +1482,8 @@ distclean-compile:
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_isel.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_ppc_defs.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_ppc_isel.Po@am__quote@ # am--include-marker
+@AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Po@am__quote@ # am--include-marker
+@AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_s390_defs.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_s390_isel.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_x86_defs.Po@am__quote@ # am--include-marker
@@ -1455,6 +1511,8 @@ distclean-compile:
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_toIR.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_ppc_helpers.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_ppc_toIR.Po@am__quote@ # am--include-marker
+@AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Po@am__quote@ # am--include-marker
+@AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_s390_helpers.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_s390_toIR.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_x86_helpers.Po@am__quote@ # am--include-marker
@@ -1479,6 +1537,8 @@ distclean-compile:
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_isel.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_ppc_defs.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_ppc_isel.Po@am__quote@ # am--include-marker
+@AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Po@am__quote@ # am--include-marker
+@AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_s390_defs.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_s390_isel.Po@am__quote@ # am--include-marker
 @AMDEP_TRUE@@am__include@ @am__quote@priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_x86_defs.Po@am__quote@ # am--include-marker
@@ -1496,7 +1556,7 @@ distclean-compile:
 
 $(am__depfiles_remade):
 	@$(MKDIR_P) $(@D)
-	@echo '# dummy' >$@-t && $(am__mv) $@-t $@
+	@: >>$@
 
 am--depfiles: $(am__depfiles_remade)
 
@@ -1880,6 +1940,34 @@ priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_toIR.obj: priv/guest_
 @AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
 @am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_toIR.obj `if test -f 'priv/guest_nanomips_toIR.c'; then $(CYGPATH_W) 'priv/guest_nanomips_toIR.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_nanomips_toIR.c'; fi`
 
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.o: priv/guest_riscv64_helpers.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Tpo -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.o `test -f 'priv/guest_riscv64_helpers.c' || echo '$(srcdir)/'`priv/guest_riscv64_helpers.c
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/guest_riscv64_helpers.c' object='priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.o' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.o `test -f 'priv/guest_riscv64_helpers.c' || echo '$(srcdir)/'`priv/guest_riscv64_helpers.c
+
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.obj: priv/guest_riscv64_helpers.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.obj -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Tpo -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.obj `if test -f 'priv/guest_riscv64_helpers.c'; then $(CYGPATH_W) 'priv/guest_riscv64_helpers.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_riscv64_helpers.c'; fi`
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/guest_riscv64_helpers.c' object='priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.obj' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.obj `if test -f 'priv/guest_riscv64_helpers.c'; then $(CYGPATH_W) 'priv/guest_riscv64_helpers.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_riscv64_helpers.c'; fi`
+
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.o: priv/guest_riscv64_toIR.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Tpo -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.o `test -f 'priv/guest_riscv64_toIR.c' || echo '$(srcdir)/'`priv/guest_riscv64_toIR.c
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/guest_riscv64_toIR.c' object='priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.o' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.o `test -f 'priv/guest_riscv64_toIR.c' || echo '$(srcdir)/'`priv/guest_riscv64_toIR.c
+
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.obj: priv/guest_riscv64_toIR.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.obj -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Tpo -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.obj `if test -f 'priv/guest_riscv64_toIR.c'; then $(CYGPATH_W) 'priv/guest_riscv64_toIR.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_riscv64_toIR.c'; fi`
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/guest_riscv64_toIR.c' object='priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.obj' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.obj `if test -f 'priv/guest_riscv64_toIR.c'; then $(CYGPATH_W) 'priv/guest_riscv64_toIR.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_riscv64_toIR.c'; fi`
+
 priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_regs.o: priv/host_generic_regs.c
 @am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_regs.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_regs.Tpo -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_regs.o `test -f 'priv/host_generic_regs.c' || echo '$(srcdir)/'`priv/host_generic_regs.c
 @am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_regs.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_generic_regs.Po
@@ -2230,6 +2318,34 @@ priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_isel.obj: priv/host_na
 @AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
 @am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_isel.obj `if test -f 'priv/host_nanomips_isel.c'; then $(CYGPATH_W) 'priv/host_nanomips_isel.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_nanomips_isel.c'; fi`
 
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.o: priv/host_riscv64_defs.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Tpo -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.o `test -f 'priv/host_riscv64_defs.c' || echo '$(srcdir)/'`priv/host_riscv64_defs.c
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/host_riscv64_defs.c' object='priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.o' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.o `test -f 'priv/host_riscv64_defs.c' || echo '$(srcdir)/'`priv/host_riscv64_defs.c
+
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.obj: priv/host_riscv64_defs.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.obj -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Tpo -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.obj `if test -f 'priv/host_riscv64_defs.c'; then $(CYGPATH_W) 'priv/host_riscv64_defs.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_riscv64_defs.c'; fi`
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/host_riscv64_defs.c' object='priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.obj' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.obj `if test -f 'priv/host_riscv64_defs.c'; then $(CYGPATH_W) 'priv/host_riscv64_defs.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_riscv64_defs.c'; fi`
+
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.o: priv/host_riscv64_isel.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Tpo -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.o `test -f 'priv/host_riscv64_isel.c' || echo '$(srcdir)/'`priv/host_riscv64_isel.c
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/host_riscv64_isel.c' object='priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.o' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.o `test -f 'priv/host_riscv64_isel.c' || echo '$(srcdir)/'`priv/host_riscv64_isel.c
+
+priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.obj: priv/host_riscv64_isel.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.obj -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Tpo -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.obj `if test -f 'priv/host_riscv64_isel.c'; then $(CYGPATH_W) 'priv/host_riscv64_isel.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_riscv64_isel.c'; fi`
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/host_riscv64_isel.c' object='priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.obj' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.obj `if test -f 'priv/host_riscv64_isel.c'; then $(CYGPATH_W) 'priv/host_riscv64_isel.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_riscv64_isel.c'; fi`
+
 priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.o: priv/main_globals.c
 @am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.o `test -f 'priv/main_globals.c' || echo '$(srcdir)/'`priv/main_globals.c
 @am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-main_globals.Po
@@ -2594,6 +2710,34 @@ priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_toIR.obj: priv/guest_
 @AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
 @am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_toIR.obj `if test -f 'priv/guest_nanomips_toIR.c'; then $(CYGPATH_W) 'priv/guest_nanomips_toIR.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_nanomips_toIR.c'; fi`
 
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.o: priv/guest_riscv64_helpers.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.o `test -f 'priv/guest_riscv64_helpers.c' || echo '$(srcdir)/'`priv/guest_riscv64_helpers.c
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/guest_riscv64_helpers.c' object='priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.o' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.o `test -f 'priv/guest_riscv64_helpers.c' || echo '$(srcdir)/'`priv/guest_riscv64_helpers.c
+
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.obj: priv/guest_riscv64_helpers.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.obj -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.obj `if test -f 'priv/guest_riscv64_helpers.c'; then $(CYGPATH_W) 'priv/guest_riscv64_helpers.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_riscv64_helpers.c'; fi`
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/guest_riscv64_helpers.c' object='priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.obj' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.obj `if test -f 'priv/guest_riscv64_helpers.c'; then $(CYGPATH_W) 'priv/guest_riscv64_helpers.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_riscv64_helpers.c'; fi`
+
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.o: priv/guest_riscv64_toIR.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.o `test -f 'priv/guest_riscv64_toIR.c' || echo '$(srcdir)/'`priv/guest_riscv64_toIR.c
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/guest_riscv64_toIR.c' object='priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.o' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.o `test -f 'priv/guest_riscv64_toIR.c' || echo '$(srcdir)/'`priv/guest_riscv64_toIR.c
+
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.obj: priv/guest_riscv64_toIR.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.obj -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.obj `if test -f 'priv/guest_riscv64_toIR.c'; then $(CYGPATH_W) 'priv/guest_riscv64_toIR.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_riscv64_toIR.c'; fi`
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/guest_riscv64_toIR.c' object='priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.obj' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.obj `if test -f 'priv/guest_riscv64_toIR.c'; then $(CYGPATH_W) 'priv/guest_riscv64_toIR.c'; else $(CYGPATH_W) '$(srcdir)/priv/guest_riscv64_toIR.c'; fi`
+
 priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_regs.o: priv/host_generic_regs.c
 @am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_regs.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_regs.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_regs.o `test -f 'priv/host_generic_regs.c' || echo '$(srcdir)/'`priv/host_generic_regs.c
 @am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_regs.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_generic_regs.Po
@@ -2944,6 +3088,34 @@ priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_isel.obj: priv/host_na
 @AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
 @am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_isel.obj `if test -f 'priv/host_nanomips_isel.c'; then $(CYGPATH_W) 'priv/host_nanomips_isel.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_nanomips_isel.c'; fi`
 
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.o: priv/host_riscv64_defs.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.o `test -f 'priv/host_riscv64_defs.c' || echo '$(srcdir)/'`priv/host_riscv64_defs.c
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/host_riscv64_defs.c' object='priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.o' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.o `test -f 'priv/host_riscv64_defs.c' || echo '$(srcdir)/'`priv/host_riscv64_defs.c
+
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.obj: priv/host_riscv64_defs.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.obj -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.obj `if test -f 'priv/host_riscv64_defs.c'; then $(CYGPATH_W) 'priv/host_riscv64_defs.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_riscv64_defs.c'; fi`
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/host_riscv64_defs.c' object='priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.obj' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.obj `if test -f 'priv/host_riscv64_defs.c'; then $(CYGPATH_W) 'priv/host_riscv64_defs.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_riscv64_defs.c'; fi`
+
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.o: priv/host_riscv64_isel.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.o -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.o `test -f 'priv/host_riscv64_isel.c' || echo '$(srcdir)/'`priv/host_riscv64_isel.c
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/host_riscv64_isel.c' object='priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.o' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.o `test -f 'priv/host_riscv64_isel.c' || echo '$(srcdir)/'`priv/host_riscv64_isel.c
+
+priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.obj: priv/host_riscv64_isel.c
+@am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.obj -MD -MP -MF priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Tpo -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.obj `if test -f 'priv/host_riscv64_isel.c'; then $(CYGPATH_W) 'priv/host_riscv64_isel.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_riscv64_isel.c'; fi`
+@am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Tpo priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Po
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	$(AM_V_CC)source='priv/host_riscv64_isel.c' object='priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.obj' libtool=no @AMDEPBACKSLASH@
+@AMDEP_TRUE@@am__fastdepCC_FALSE@	DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
+@am__fastdepCC_FALSE@	$(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -c -o priv/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.obj `if test -f 'priv/host_riscv64_isel.c'; then $(CYGPATH_W) 'priv/host_riscv64_isel.c'; else $(CYGPATH_W) '$(srcdir)/priv/host_riscv64_isel.c'; fi`
+
 priv/libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-multiarch_main_main.o: priv/multiarch_main_main.c
 @am__fastdepCC_TRUE@	$(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CPPFLAGS) $(CPPFLAGS) $(libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a_CFLAGS) $(CFLAGS) -MT priv/libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-multiarch_main_main.o -MD -MP -MF priv/$(DEPDIR)/libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-multiarch_main_main.Tpo -c -o priv/libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-multiarch_main_main.o `test -f 'priv/multiarch_main_main.c' || echo '$(srcdir)/'`priv/multiarch_main_main.c
 @am__fastdepCC_TRUE@	$(AM_V_at)$(am__mv) priv/$(DEPDIR)/libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-multiarch_main_main.Tpo priv/$(DEPDIR)/libvexmultiarch_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-multiarch_main_main.Po
@@ -3109,24 +3281,24 @@ install-strip:
 mostlyclean-generic:
 
 clean-generic:
-	-test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
+	-$(am__rm_f) $(CLEANFILES)
 
 distclean-generic:
-	-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-	-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
-	-rm -f priv/$(DEPDIR)/$(am__dirstamp)
-	-rm -f priv/$(am__dirstamp)
+	-$(am__rm_f) $(CONFIG_CLEAN_FILES)
+	-test . = "$(srcdir)" || $(am__rm_f) $(CONFIG_CLEAN_VPATH_FILES)
+	-$(am__rm_f) priv/$(DEPDIR)/$(am__dirstamp)
+	-$(am__rm_f) priv/$(am__dirstamp)
 
 maintainer-clean-generic:
 	@echo "This command is intended for maintainers to use"
 	@echo "it deletes files that may require special tools to rebuild."
-	-test -z "$(BUILT_SOURCES)" || rm -f $(BUILT_SOURCES)
+	-$(am__rm_f) $(BUILT_SOURCES)
 clean: clean-am
 
 clean-am: clean-generic clean-pkglibLIBRARIES mostlyclean-am
 
 distclean: distclean-am
-		-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_amd64_helpers.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_amd64_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_amd64_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_arm64_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_arm64_toIR.Po
@@ -3141,6 +3313,8 @@ distclean: distclean-am
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_ppc_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_ppc_toIR.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_s390_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_s390_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_x86_helpers.Po
@@ -3165,6 +3339,8 @@ distclean: distclean-am
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_ppc_defs.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_ppc_isel.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_s390_defs.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_s390_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_x86_defs.Po
@@ -3192,6 +3368,8 @@ distclean: distclean-am
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_ppc_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_ppc_toIR.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_s390_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_s390_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_x86_helpers.Po
@@ -3216,6 +3394,8 @@ distclean: distclean-am
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_ppc_defs.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_ppc_isel.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_s390_defs.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_s390_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_x86_defs.Po
@@ -3275,7 +3455,7 @@ install-ps-am:
 installcheck-am:
 
 maintainer-clean: maintainer-clean-am
-		-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_amd64_helpers.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_amd64_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_amd64_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_arm64_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_arm64_toIR.Po
@@ -3290,6 +3470,8 @@ maintainer-clean: maintainer-clean-am
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_nanomips_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_ppc_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_ppc_toIR.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_helpers.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_riscv64_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_s390_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_s390_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-guest_x86_helpers.Po
@@ -3314,6 +3496,8 @@ maintainer-clean: maintainer-clean-am
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_nanomips_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_ppc_defs.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_ppc_isel.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_defs.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_riscv64_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_s390_defs.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_s390_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_PRI@_@VGCONF_OS@_a-host_x86_defs.Po
@@ -3341,6 +3525,8 @@ maintainer-clean: maintainer-clean-am
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_nanomips_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_ppc_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_ppc_toIR.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_helpers.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_riscv64_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_s390_helpers.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_s390_toIR.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-guest_x86_helpers.Po
@@ -3365,6 +3551,8 @@ maintainer-clean: maintainer-clean-am
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_nanomips_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_ppc_defs.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_ppc_isel.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_defs.Po
+	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_riscv64_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_s390_defs.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_s390_isel.Po
 	-rm -f priv/$(DEPDIR)/libvex_@VGCONF_ARCH_SEC@_@VGCONF_OS@_a-host_x86_defs.Po
@@ -3509,7 +3697,8 @@ pub/libvex_guest_offsets.h: auxprogs/genoffsets.c \
 			    pub/libvex_guest_arm64.h \
 			    pub/libvex_guest_s390x.h \
 			    pub/libvex_guest_mips32.h \
-			    pub/libvex_guest_mips64.h
+			    pub/libvex_guest_mips64.h \
+			    pub/libvex_guest_riscv64.h
 	rm -f auxprogs/genoffsets.s
 	$(mkdir_p) auxprogs pub
 	$(CC) $(CFLAGS_FOR_GENOFFSETS) \
@@ -3528,3 +3717,10 @@ pub/libvex_guest_offsets.h: auxprogs/genoffsets.c \
 # Tell versions [3.59,3.63) of GNU make to not export all variables.
 # Otherwise a system limit (for SysV at least) may be exceeded.
 .NOEXPORT:
+
+# Tell GNU make to disable its built-in pattern rules.
+%:: %,v
+%:: RCS/%,v
+%:: RCS/%
+%:: s.%
+%:: SCCS/s.%
diff --git a/NEWS b/NEWS
index 49b4647..741329a 100644
--- a/NEWS
+++ b/NEWS
@@ -1,3 +1,154 @@
+Release 3.25.1 (20 May 2025)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This point release contains only bug fixes.
+
+* ==================== FIXED BUGS ====================
+
+The following bugs have been fixed or resolved in this point release.
+
+503098  Incorrect NAN-boxing for float registers in RISC-V
+503641  close_range syscalls started failing with 3.25.0
+503914  mount syscall param filesystemtype may be NULL
+504177  FILE DESCRIPTORS banner shows when closing some inherited fds
+504265  FreeBSD: missing syscall wrappers for fchroot and setcred
+504466  Double close causes SEGV
+
+To see details of a given bug, visit
+  https://bugs.kde.org/show_bug.cgi?id=XXXXXX
+where XXXXXX is the bug number as listed above.
+
+
+Release 3.25.0 (25 Apr 2025)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This release supports X86/Linux, AMD64/Linux, ARM32/Linux, ARM64/Linux,
+PPC32/Linux, PPC64BE/Linux, PPC64LE/Linux, S390X/Linux, MIPS32/Linux,
+MIPS64/Linux, RISCV64/Linux, ARM/Android, ARM64/Android, MIPS32/Android,
+X86/Android, X86/Solaris, AMD64/Solaris, AMD64/MacOSX 10.12, X86/FreeBSD,
+AMD64/FreeBSD and ARM64/FreeBSD. There is also preliminary support for
+X86/macOS 10.13, AMD64/macOS 10.13 and nanoMIPS/Linux.
+
+* ==================== CORE CHANGES ===================
+
+* The valgrind gdbserver now supports the GDB remote protocol packet
+  'x addr,len' (available in GDB release >= 16).
+  The x packet can reduce the time taken by GDB to read memory from valgrind.
+
+* Valgrind now supports zstd compressed debug sections.
+
+* The Linux Test Project (ltp) is integrated in the testsuite; try
+  'make ltpchecks' (this will take a while and will point out various
+  missing syscalls and valgrind crashes!)
+
+* ================== PLATFORM CHANGES =================
+
+* Added RISCV64 support for Linux. Specifically for the RV64GC
+  instruction set.
+
+* Numerous bug fixes for Illumos, in particular fixed a Valgrind crash
+  whenever a signal handler was called.
+
+* On FreeBSD, a change to the libc code that runs atexit handlers was
+  causing Helgrind to produce an extra error about exiting threads
+  still holding locks. This applied to every multithreaded application.
+  The extra error is now filtered out. A syscall wrapper has been added
+  for getrlimitusage.
+
+* On Linux various new syscalls are supported (landlock*, io_pgetevents,
+  open_tree, move_mount, fsopen, fsconfig, fsmount, fspick, userfaultfd).
+
+* s390x has support for various new instructions (BPP, BPRP, PPA and NIAI).
+
+* ==================== TOOL CHANGES ===================
+
+* The --track-fds=yes and --track-fds=all options now treat all
+  inherited file descriptors the same as 0, 1, 2 (stdin/out/err).
+  And when the stdin/out/err descriptors are reassigned they are
+  now treated as normal (non-inherited) file descriptors.
+
+* A new option --modify-fds=high can be used together with
+  --track-fds=yes to create new file descriptors with the highest
+  possible number (and then decreasing) instead of always using the
+  lowest possible number (which is required by POSIX). This will help
+  catch issues where a file descriptor number might normally be reused
+  between a close and another open call.
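
(Illustration only, not part of the upstream NEWS entry: a minimal C sketch
of the descriptor-reuse bug this option is meant to surface. The file names
are made up.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int log_fd = open("app.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        close(log_fd);                 /* log_fd is now stale */

        /* POSIX's lowest-free-number rule usually hands the same number to
           the next open(), so the buggy write below silently lands in
           data.txt.  Under --track-fds=yes --modify-fds=high the new fd is
           given a high number instead, the stale write fails with EBADF,
           and the bug becomes visible. */
        int data_fd = open("data.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (write(log_fd, "oops\n", 5) == -1)
            perror("write to stale fd");

        close(data_fd);
        return 0;
    }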
+
+* Helgrind:
+  There is a change to warnings about calls to pthread_cond_signal and
+  pthread_cond_broadcast when the associated mutex is unlocked. Previously
+  Helgrind would always warn about this. Now this error is controlled by
+  a command line option, --check-cond-signal-mutex=yes|no. The default is
+  no. This change has been made because some C and C++ standard libraries
+  use pthread_cond_signal/pthread_cond_broadcast in this way. Users are
+  obliged to use suppressions if they wish to avoid this noise.
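
(Illustration only, not from the valgrind sources: a minimal pthread sketch
of the pattern this option controls -- signalling a condition variable after
the associated mutex has been released.)

    #include <pthread.h>

    static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int ready;

    static void* waiter(void* arg) {
        pthread_mutex_lock(&mu);
        while (!ready)
            pthread_cond_wait(&cv, &mu);
        pthread_mutex_unlock(&mu);
        return arg;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, waiter, NULL);

        pthread_mutex_lock(&mu);
        ready = 1;
        pthread_mutex_unlock(&mu);
        /* Signalling with the mutex unlocked: allowed by POSIX and used by
           some C and C++ standard libraries.  Helgrind now reports it only
           when run with --check-cond-signal-mutex=yes. */
        pthread_cond_signal(&cv);

        pthread_join(t, NULL);
        return 0;
    }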
+
+* ==================== FIXED BUGS ====================
+
+The following bugs have been fixed or resolved.  Note that "n-i-bz"
+stands for "not in bugzilla" -- that is, a bug that was reported to us
+but never got a bugzilla entry.  We encourage you to file bugs in
+bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather
+than mailing the developers (or mailing lists) directly -- bugs that
+are not entered into bugzilla tend to get forgotten about or ignored.
+
+290061  pie elf always loaded at 0x108000
+396415  Valgrind is not looking up $ORIGIN rpath of shebang programs
+420682  io_pgetevents is not supported
+468575  Add support for RISC-V
+469782  Valgrind does not support zstd-compressed debug sections
+487296  --track-fds=yes and --track-fds=all report erroneous information
+        when fds 0, 1, or 2 are used as non-std
+489913  WARNING: unhandled amd64-linux syscall: 444 (landlock_create_ruleset)
+493433  Add --modify-fds=[no|high] option
+494246  syscall fsopen not wrapped
+494327  Crash when running Helgrind built with #define TRACE_PTH_FNS 1
+494337  All threaded applications cause still holding lock errors
+495488  Add FreeBSD getrlimitusage syscall wrapper
+495816  s390x: Fix disassembler segfault for C[G]RT and CL[G]RT
+495817  s390x: Disassembly to match objdump -d output
+496370  Illumos: signal handling is broken
+496571  False positive for null key passed to bpf_map_get_next_key syscall.
+496950  s390x: Fix hardware capabilities and EmFail codes
+497130  Recognize new DWARF5 DW_LANG constants
+497455  Update drd/scripts/download-and-build-gcc
+497723  Enabling Ada demangling breaks callgrind differentiation between
+        overloaded functions and procedures
+498037  s390x: Add disassembly checker
+498143  False positive on EVIOCGRAB ioctl
+498317  FdBadUse is not a valid CoreError type in a suppression
+        even though it's generated by --gen-suppressions=yes
+498421  s390x: support BPP, BPRP and NIAI insns
+498422  s390x: Fix VLRL and VSTRL insns
+498492  none/tests/amd64/lzcnt64 crashes on FreeBSD compiled with clang
+498629  s390x: Fix S[L]HHHR and S[L]HHLR insns
+498632  s390x: Fix LNGFR insn
+498942  s390x: Rework s390_disasm interface
+499183  FreeBSD: differences in avx-vmovq output
+499212  mmap() with MAP_ALIGNED() returns unaligned pointer
+501119  memcheck/tests/pointer-trace fails when run on NFS filesystem
+501194  Fix ML_(check_macho_and_get_rw_loads) so that it is correct for
+        any number of segment commands
+501348  glibc built with -march=x86-64-v3 does not work due to ld.so memcmp
+501479  Illumos DRD pthread_mutex_init wrapper errors
+501365  syscall userfaultfd not wrapped
+501846  Add x86 Linux shm wrappers
+501850  FreeBSD syscall arguments 7 and 8 incorrect.
+501893  Missing suppression for __wcscat_avx2 (strcat-strlen-avx2.h.S:68)?
+502126  glibc 2.41 extra syscall_cancel frames
+502288  s390x: Memcheck false positives with NNPA last tensor dimension
+502324  s390x: Memcheck false positives with TMxx and TM/TMY
+502679  Use LTP for testing valgrind
+502871  Make Helgrind "pthread_cond_{signal,broadcast}: dubious: associated
+        lock is not held by any thread" optional
+
+To see details of a given bug, visit
+  https://bugs.kde.org/show_bug.cgi?id=XXXXXX
+where XXXXXX is the bug number as listed above.
+
+(3.25.0.RC1: 18 Apr 2025)
+(3.25.0.RC2: 23 Apr 2025)
+
 Release 3.24.0 (31 Oct 2024)
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -83,6 +234,7 @@ are not entered into bugzilla tend to get forgotten about or ignored.
         FUSE_COMPATIBLE_MAY_BLOCK
 493959  s390x: Fix regtest failure for none/tests/s390x/op00
 493970  s390x: Store/restore FPC upon helper call causes slowdown
+494218  Remove FREEBSD_VERS from configure and build
 494252  s390x: incorrect disassembly for LOCHI and friends
 494960  Fixes and tweaks for gsl19test
 495278  PowerPC instruction dcbf should allow the L field values of 4, 6 on
diff --git a/NEWS.old b/NEWS.old
index 2b43c91..919b85c 100644
--- a/NEWS.old
+++ b/NEWS.old
@@ -14,7 +14,7 @@ AMD64/macOS 10.13 and nanoMIPS/Linux.
   Rust v0 name demangling. [Update: alas, due to a bug, this support
   isn't working in 3.18.0.]
 
-* __libc_freeres isn't called anymore after the program recieves a
+* __libc_freeres isn't called anymore after the program receives a
   fatal signal. Causing some internal glibc resources to hang around,
   but preventing any crashes after the program has ended.
 
diff --git a/README.riscv64 b/README.riscv64
new file mode 100644
index 0000000..6ce2a08
--- /dev/null
+++ b/README.riscv64
@@ -0,0 +1,45 @@
+
+Status
+~~~~~~
+
+The RISC-V port targets the 64-bit RISC-V architecture and the Linux operating
+system. The port has been tested to work on real hardware and under QEMU.
+
+The following ISA base and extensions are currently supported:
+
+| Name         | Description                       | #Instrs | Notes    |
+| ------------ | --------------------------------- | ------- | -------- |
+| RV64I        | Base instruction set              |   52/52 |          |
+| RV64M        | Integer multiplication & division |   12/13 | (1)      |
+| RV64A        | Atomic                            |   22/22 | (2)      |
+| RV64F        | Single-precision floating-point   |   30/30 | (3)      |
+| RV64D        | Double-precision floating-point   |   32/32 |          |
+| RV64Zicsr    | Control & status register         |     3/6 | (4), (5) |
+| RV64Zifencei | Instruction-fetch fence           |     0/1 | (6)      |
+| RV64C        | Compressed                        |   37/37 |          |
+
+Notes:
+(1) MULHSU is not recognized.
+(2) LR and SC use the VEX "fallback" method which suffers from the ABA problem.
+(3) Operations do not check if the input operands are correctly NaN-boxed.
+(4) CSRRWI, CSRRSI and CSRRCI are not recognized.
+(5) Only registers fflags, frm and fcsr are accepted.
+(6) FENCE.I is not recognized.
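
(Illustration of note (3), not taken from the port itself: under the RISC-V
convention a single-precision value held in a 64-bit FP register is
"NaN-boxed", i.e. the upper 32 bits must be all ones; anything else should
be treated as a canonical NaN by F-extension instructions.)

    #include <stdint.h>
    #include <stdio.h>

    /* Box a 32-bit float bit pattern into a 64-bit FP register image. */
    static uint64_t nan_box_f32(uint32_t f32_bits) {
        return 0xFFFFFFFF00000000ULL | (uint64_t)f32_bits;
    }

    /* A correctly boxed single-precision value has all upper 32 bits set. */
    static int is_valid_nan_box(uint64_t reg_bits) {
        return (reg_bits >> 32) == 0xFFFFFFFFULL;
    }

    int main(void) {
        uint64_t boxed = nan_box_f32(0x3F800000u);  /* 1.0f, properly boxed */
        printf("boxed=%#llx valid=%d\n",
               (unsigned long long)boxed, is_valid_nan_box(boxed));
        return 0;
    }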
+
+
+Implementation tidying-up/TODO notes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+* Implement a proper "non-fallback" method for LR and SC instructions.
+* Add a check for correct NaN-boxing of 32-bit floating-point operands.
+* Optimize instruction selection, in particular make more use of <instr>i
+  variants.
+* Optimize handling of floating-point exceptions. Avoid helpers and calculate
+  exception flags using the same instruction which produced an actual result.
+* Review register usage by the codegen.
+* Avoid re-use of the Intel constants CFIC_IA_SPREL and CFIC_IA_BPREL.
+  Generalize them for all architectures or introduce equivalent
+  CFIC_RISCV64_ variants.
+* Get rid of the typedef of vki_modify_ldt_t in include/vki/vki-riscv64-linux.h.
+* Review if setup_client_stack() should expose AT_SYSINFO_EHDR to clients.
+* Make sure that the final exit sequence in run_a_thread_NORETURN() is not racy
+  in regards to accessing the thread state.
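
(Illustration of the LR/SC note above, not from the valgrind sources: the
"fallback" method emulates the store-conditional with a plain value compare,
so it cannot detect a location that was modified and then restored -- the
classic ABA problem.  Real LR/SC loses its reservation on any intervening
store and the SC fails.)

    #include <stdatomic.h>
    #include <stdio.h>

    int main(void) {
        _Atomic int loc = 42;                /* LR would reserve this word */
        int expected = atomic_load(&loc);

        /* Another thread (simulated inline) changes the value and puts the
           original back before our store-conditional runs. */
        atomic_store(&loc, 7);
        atomic_store(&loc, 42);

        /* A value-based SC emulation cannot tell the difference: */
        if (atomic_compare_exchange_strong(&loc, &expected, 100))
            printf("CAS succeeded despite intervening writes (ABA)\n");
        return 0;
    }
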
diff --git a/README.solaris b/README.solaris
index 42adb0b..1815eb4 100644
--- a/README.solaris
+++ b/README.solaris
@@ -5,6 +5,8 @@ Requirements
   Running `uname -r` has to print '5.11'.
 - Recent GCC tools are required, GCC 3 will probably not work. GCC version
   4.5 (or higher) is recommended.
+- On Illumos you can install the 'build-essential' metapackage which
+  includes GCC and many other developer tools.
 - Solaris ld has to be the first linker in the PATH. GNU ld cannot be used.
   There is currently no linker check in the configure script but the linking
   phase fails if GNU ld is used. Recent Solaris/illumos distributions are ok.
diff --git a/README_DEVELOPERS b/README_DEVELOPERS
index 37cffa2..0be02b7 100644
--- a/README_DEVELOPERS
+++ b/README_DEVELOPERS
@@ -481,3 +481,28 @@ integration with a text editor, it is possible to reformat arbitrary blocks
 of code with a single keystroke.  Refer to the upstream documentation which
 describes integration with various editors and IDEs:
 https://clang.llvm.org/docs/ClangFormat.html.
+
+Updating zstd
+~~~~~~~~~~~~~
+Similar to libiberty, we have to import a copy of zstd rather than linking
+with a library. There isn't (yet) a script to automate this, so it has to be
+done manually.
+
+The version currently in use can be seen in coregrind/m_debuginfo/zstd.h.
+Look for ZSTD_VERSION_MAJOR, ZSTD_VERSION_MINOR and ZSTD_VERSION_RELEASE.
+
+ - Get the source of zstd from
+
+   https://github.com/facebook/zstd
+
+ - Checkout the latest release tag (should be vMAJ.MIN.REL)
+
+ - Copy {zstd git repo}/lib/zstd.h to coregrind/m_debuginfo/zstd.h
+ 
+ -  cd to {zstd git repo}/build/single_file_libs and run ./create_single_file_decoder.sh
+ 
+ - You cannot simply copy and use the generated zstddeclib.c!
+   All calls to libc functions in this file need replacing with VG_ versions.
+   Merge the newly generated zstddeclib.c with coregrind/m_debuginfo/zstddeclib.c.
+   Make sure to keep the copy of the BSD license in the C file.
+   
diff --git a/VEX/auxprogs/genoffsets.c b/VEX/auxprogs/genoffsets.c
index 6b70cd0..48c9723 100644
--- a/VEX/auxprogs/genoffsets.c
+++ b/VEX/auxprogs/genoffsets.c
@@ -53,6 +53,7 @@
 #include "../pub/libvex_guest_s390x.h"
 #include "../pub/libvex_guest_mips32.h"
 #include "../pub/libvex_guest_mips64.h"
+#include "../pub/libvex_guest_riscv64.h"
 
 #define VG_STRINGIFZ(__str)  #__str
 #define VG_STRINGIFY(__str)  VG_STRINGIFZ(__str)
@@ -265,6 +266,74 @@ void foo ( void )
    GENOFFSET(MIPS64,mips64,PC);
    GENOFFSET(MIPS64,mips64,HI);
    GENOFFSET(MIPS64,mips64,LO);
+
+   // riscv64
+   GENOFFSET(RISCV64,riscv64,x0);
+   GENOFFSET(RISCV64,riscv64,x1);
+   GENOFFSET(RISCV64,riscv64,x2);
+   GENOFFSET(RISCV64,riscv64,x3);
+   GENOFFSET(RISCV64,riscv64,x4);
+   GENOFFSET(RISCV64,riscv64,x5);
+   GENOFFSET(RISCV64,riscv64,x6);
+   GENOFFSET(RISCV64,riscv64,x7);
+   GENOFFSET(RISCV64,riscv64,x8);
+   GENOFFSET(RISCV64,riscv64,x9);
+   GENOFFSET(RISCV64,riscv64,x10);
+   GENOFFSET(RISCV64,riscv64,x11);
+   GENOFFSET(RISCV64,riscv64,x12);
+   GENOFFSET(RISCV64,riscv64,x13);
+   GENOFFSET(RISCV64,riscv64,x14);
+   GENOFFSET(RISCV64,riscv64,x15);
+   GENOFFSET(RISCV64,riscv64,x16);
+   GENOFFSET(RISCV64,riscv64,x17);
+   GENOFFSET(RISCV64,riscv64,x18);
+   GENOFFSET(RISCV64,riscv64,x19);
+   GENOFFSET(RISCV64,riscv64,x20);
+   GENOFFSET(RISCV64,riscv64,x21);
+   GENOFFSET(RISCV64,riscv64,x22);
+   GENOFFSET(RISCV64,riscv64,x23);
+   GENOFFSET(RISCV64,riscv64,x24);
+   GENOFFSET(RISCV64,riscv64,x25);
+   GENOFFSET(RISCV64,riscv64,x26);
+   GENOFFSET(RISCV64,riscv64,x27);
+   GENOFFSET(RISCV64,riscv64,x28);
+   GENOFFSET(RISCV64,riscv64,x29);
+   GENOFFSET(RISCV64,riscv64,x30);
+   GENOFFSET(RISCV64,riscv64,x31);
+   GENOFFSET(RISCV64,riscv64,pc);
+   GENOFFSET(RISCV64,riscv64,f0);
+   GENOFFSET(RISCV64,riscv64,f1);
+   GENOFFSET(RISCV64,riscv64,f2);
+   GENOFFSET(RISCV64,riscv64,f3);
+   GENOFFSET(RISCV64,riscv64,f4);
+   GENOFFSET(RISCV64,riscv64,f5);
+   GENOFFSET(RISCV64,riscv64,f6);
+   GENOFFSET(RISCV64,riscv64,f7);
+   GENOFFSET(RISCV64,riscv64,f8);
+   GENOFFSET(RISCV64,riscv64,f9);
+   GENOFFSET(RISCV64,riscv64,f10);
+   GENOFFSET(RISCV64,riscv64,f11);
+   GENOFFSET(RISCV64,riscv64,f12);
+   GENOFFSET(RISCV64,riscv64,f13);
+   GENOFFSET(RISCV64,riscv64,f14);
+   GENOFFSET(RISCV64,riscv64,f15);
+   GENOFFSET(RISCV64,riscv64,f16);
+   GENOFFSET(RISCV64,riscv64,f17);
+   GENOFFSET(RISCV64,riscv64,f18);
+   GENOFFSET(RISCV64,riscv64,f19);
+   GENOFFSET(RISCV64,riscv64,f20);
+   GENOFFSET(RISCV64,riscv64,f21);
+   GENOFFSET(RISCV64,riscv64,f22);
+   GENOFFSET(RISCV64,riscv64,f23);
+   GENOFFSET(RISCV64,riscv64,f24);
+   GENOFFSET(RISCV64,riscv64,f25);
+   GENOFFSET(RISCV64,riscv64,f26);
+   GENOFFSET(RISCV64,riscv64,f27);
+   GENOFFSET(RISCV64,riscv64,f28);
+   GENOFFSET(RISCV64,riscv64,f29);
+   GENOFFSET(RISCV64,riscv64,f30);
+   GENOFFSET(RISCV64,riscv64,f31);
+   GENOFFSET(RISCV64,riscv64,fcsr);
 }
 
 /*--------------------------------------------------------------------*/
diff --git a/VEX/priv/guest_amd64_defs.h b/VEX/priv/guest_amd64_defs.h
index f9a9a90..f7a4e06 100644
--- a/VEX/priv/guest_amd64_defs.h
+++ b/VEX/priv/guest_amd64_defs.h
@@ -48,15 +48,15 @@
 /* Convert one amd64 insn to IR.  See the type DisOneInstrFn in
    guest_generic_bb_to_IR.h. */
 extern
-DisResult disInstr_AMD64 ( IRSB*        irbb,
-                           const UChar* guest_code,
+DisResult disInstr_AMD64 ( IRSB*        irsb_IN,
+                           const UChar* guest_code_IN,
                            Long         delta,
                            Addr         guest_IP,
                            VexArch      guest_arch,
                            const VexArchInfo* archinfo,
                            const VexAbiInfo*  abiinfo,
-                           VexEndness   host_endness,
-                           Bool         sigill_diag );
+                           VexEndness   host_endness_IN,
+                           Bool         sigill_diag_IN );
 
 /* Used by the optimiser to specialise calls to helpers. */
 extern
@@ -108,7 +108,7 @@ extern ULong amd64g_calculate_RCL  (
                 ULong arg, ULong rot_amt, ULong rflags_in, Long sz 
              );
 
-extern ULong amd64g_calculate_pclmul(ULong s1, ULong s2, ULong which);
+extern ULong amd64g_calculate_pclmul(ULong a, ULong b, ULong which);
 
 extern ULong amd64g_check_fldcw ( ULong fpucw );
 
diff --git a/VEX/priv/guest_nanomips_toIR.c b/VEX/priv/guest_nanomips_toIR.c
old mode 100755
new mode 100644
diff --git a/VEX/priv/guest_ppc_toIR.c b/VEX/priv/guest_ppc_toIR.c
index 94930aa..18716dd 100644
--- a/VEX/priv/guest_ppc_toIR.c
+++ b/VEX/priv/guest_ppc_toIR.c
@@ -6149,13 +6149,13 @@ static IRExpr* dnorm_adj_Vector ( IRExpr* src )
  *------------------------------------------------------------*/
 
 static ULong generate_TMreason( UInt failure_code,
-                                             UInt persistant,
+                                             UInt persistent,
                                              UInt nest_overflow,
                                              UInt tm_exact )
 {
    ULong tm_err_code =
      ( (ULong) 0) << (63-6)   /* Failure code */
-     | ( (ULong) persistant) << (63-7)     /* Failure persistant */
+     | ( (ULong) persistent) << (63-7)     /* Failure persistent */
      | ( (ULong) 0) << (63-8)   /* Disallowed */
      | ( (ULong) nest_overflow) << (63-9)   /* Nesting Overflow */
      | ( (ULong) 0) << (63-10)  /* Footprint Overflow */
@@ -7691,7 +7691,7 @@ static Bool dis_int_misc ( UInt prefix, UInt theInstr )
        *
        *    0b00   Resume instruction fetching and execution when an
        *           exception or an event-based branch exception occurs,
-       *           or a resume signal from the platform is recieved.
+       *           or a resume signal from the platform is received.
        *
        *    0b01   Reserved.
        *
@@ -33735,7 +33735,7 @@ static Bool dis_transactional_memory ( UInt prefix, UInt theInstr, UInt nextInst
       UInt failure_code = 0;  /* Forcing failure, will not be due to tabort
                                * or treclaim.
                                */
-      UInt persistant = 1;    /* set persistant since we are always failing
+      UInt persistent = 1;    /* set persistent since we are always failing
                                * the tbegin.
                                */
       UInt nest_overflow = 1; /* Alowed nesting depth overflow, we use this
@@ -33759,7 +33759,7 @@ static Bool dis_transactional_memory ( UInt prefix, UInt theInstr, UInt nextInst
        */
       putCR321( 0, mkU8( 0x2 ) );
 
-      tm_reason = generate_TMreason( failure_code, persistant,
+      tm_reason = generate_TMreason( failure_code, persistent,
                                      nest_overflow, tm_exact );
 
       storeTMfailure( guest_CIA_curr_instr, tm_reason,
diff --git a/VEX/priv/guest_riscv64_defs.h b/VEX/priv/guest_riscv64_defs.h
new file mode 100644
index 0000000..ee5435e
--- /dev/null
+++ b/VEX/priv/guest_riscv64_defs.h
@@ -0,0 +1,136 @@
+
+/*--------------------------------------------------------------------*/
+/*--- begin                                   guest_riscv64_defs.h ---*/
+/*--------------------------------------------------------------------*/
+
+/*
+   This file is part of Valgrind, a dynamic binary instrumentation
+   framework.
+
+   Copyright (C) 2020-2023 Petr Pavlu
+      petr.pavlu@xxxxxxxxxx
+
+   This program is free software; you can redistribute it and/or
+   modify it under the terms of the GNU General Public License as
+   published by the Free Software Foundation; either version 2 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+   The GNU General Public License is contained in the file COPYING.
+
+   Neither the names of the U.S. Department of Energy nor the
+   University of California nor the names of its contributors may be
+   used to endorse or promote products derived from this software
+   without prior written permission.
+*/
+
+/* Only to be used within the guest_riscv64_* files. */
+
+#ifndef __VEX_GUEST_RISCV64_DEFS_H
+#define __VEX_GUEST_RISCV64_DEFS_H
+
+#include "libvex_basictypes.h"
+
+#include "guest_generic_bb_to_IR.h"
+
+/*------------------------------------------------------------*/
+/*--- riscv64 to IR conversion                             ---*/
+/*------------------------------------------------------------*/
+
+/* Convert one riscv64 insn to IR. See the type DisOneInstrFn in
+   guest_generic_bb_to_IR.h. */
+DisResult disInstr_RISCV64(IRSB*              irbb,
+                           const UChar*       guest_code,
+                           Long               delta,
+                           Addr               guest_IP,
+                           VexArch            guest_arch,
+                           const VexArchInfo* archinfo,
+                           const VexAbiInfo*  abiinfo,
+                           VexEndness         host_endness,
+                           Bool               sigill_diag);
+
+/* Used by the optimiser to specialise calls to helpers. */
+IRExpr* guest_riscv64_spechelper(const HChar* function_name,
+                                 IRExpr**     args,
+                                 IRStmt**     precedingStmts,
+                                 Int          n_precedingStmts);
+
+/* Describes to the optimiser which part of the guest state require precise
+   memory exceptions. This is logically part of the guest state description. */
+Bool guest_riscv64_state_requires_precise_mem_exns(
+   Int minoff, Int maxoff, VexRegisterUpdates pxControl);
+
+extern VexGuestLayout riscv64guest_layout;
+
+/*------------------------------------------------------------*/
+/*--- riscv64 guest helpers                                ---*/
+/*------------------------------------------------------------*/
+
+/* --- CLEAN HELPERS --- */
+
+/* Calculate resulting flags of a specified floating-point operation. Returns
+   a 32-bit value where bits 4:0 contain the fflags in the RISC-V native
+   format (NV DZ OF UF NX) and remaining upper bits are zero. */
+UInt riscv64g_calculate_fflags_fsqrt_s(Float a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_w_s(Float a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_wu_s(Float a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_s_w(UInt a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_s_wu(UInt a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_l_s(Float a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_lu_s(Float a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_s_l(ULong a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_s_lu(ULong a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fsqrt_d(Double a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_s_d(Double a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_w_d(Double a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_wu_d(Double a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_l_d(Double a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_lu_d(Double a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_d_l(ULong a1, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fcvt_d_lu(ULong a1, UInt rm_RISCV);
+
+UInt riscv64g_calculate_fflags_fadd_s(Float a1, Float a2, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fmul_s(Float a1, Float a2, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fdiv_s(Float a1, Float a2, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fadd_d(Double a1, Double a2, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fmul_d(Double a1, Double a2, UInt rm_RISCV);
+UInt riscv64g_calculate_fflags_fdiv_d(Double a1, Double a2, UInt rm_RISCV);
+
+UInt riscv64g_calculate_fflags_fmin_s(Float a1, Float a2);
+UInt riscv64g_calculate_fflags_fmax_s(Float a1, Float a2);
+UInt riscv64g_calculate_fflags_feq_s(Float a1, Float a2);
+UInt riscv64g_calculate_fflags_flt_s(Float a1, Float a2);
+UInt riscv64g_calculate_fflags_fle_s(Float a1, Float a2);
+UInt riscv64g_calculate_fflags_fmin_d(Double a1, Double a2);
+UInt riscv64g_calculate_fflags_fmax_d(Double a1, Double a2);
+UInt riscv64g_calculate_fflags_feq_d(Double a1, Double a2);
+UInt riscv64g_calculate_fflags_flt_d(Double a1, Double a2);
+UInt riscv64g_calculate_fflags_fle_d(Double a1, Double a2);
+
+UInt riscv64g_calculate_fflags_fmadd_s(Float a1,
+                                       Float a2,
+                                       Float a3,
+                                       UInt  rm_RISCV);
+UInt riscv64g_calculate_fflags_fmadd_d(Double a1,
+                                       Double a2,
+                                       Double a3,
+                                       UInt   rm_RISCV);
+
+/* Calculate floating-point class. Returns a 64-bit value where bits 9:0
+   contains the properties in the RISC-V FCLASS-instruction format and remaining
+   upper bits are zero. */
+ULong riscv64g_calculate_fclass_s(Float a1);
+ULong riscv64g_calculate_fclass_d(Double a1);
+
+#endif /* ndef __VEX_GUEST_RISCV64_DEFS_H */
+
+/*--------------------------------------------------------------------*/
+/*--- end                                     guest_riscv64_defs.h ---*/
+/*--------------------------------------------------------------------*/
diff --git a/VEX/priv/guest_riscv64_helpers.c b/VEX/priv/guest_riscv64_helpers.c
new file mode 100644
index 0000000..e7c4ed8
--- /dev/null
+++ b/VEX/priv/guest_riscv64_helpers.c
@@ -0,0 +1,481 @@
+
+/*--------------------------------------------------------------------*/
+/*--- begin                                guest_riscv64_helpers.c ---*/
+/*--------------------------------------------------------------------*/
+
+/*
+   This file is part of Valgrind, a dynamic binary instrumentation
+   framework.
+
+   Copyright (C) 2020-2023 Petr Pavlu
+      petr.pavlu@xxxxxxxxxx
+
+   This program is free software; you can redistribute it and/or
+   modify it under the terms of the GNU General Public License as
+   published by the Free Software Foundation; either version 2 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+   The GNU General Public License is contained in the file COPYING.
+*/
+
+#include "libvex_guest_riscv64.h"
+
+#include "guest_riscv64_defs.h"
+#include "main_util.h"
+
+/* This file contains helper functions for riscv64 guest code. Calls to these
+   functions are generated by the back end. These calls are of course in the
+   host machine code and this file will be compiled to host machine code, so
+   that all makes sense.
+
+   Only change the signatures of these helper functions very carefully. If you
+   change the signature here, you'll have to change the parameters passed to it
+   in the IR calls constructed by guest_riscv64_toIR.c.
+
+   The convention used is that all functions called from generated code are
+   named riscv64g_<something>, and any function whose name lacks that prefix is
+   not called from generated code. Note that some LibVEX_* functions can however
+   be called by VEX's client, but that is not the same as calling them from
+   VEX-generated code.
+*/
+
+#if defined(__riscv) && (__riscv_xlen == 64)
+/* clang-format off */
+#define CALCULATE_FFLAGS_UNARY64_F(inst)                                       \
+   do {                                                                        \
+      UInt res;                                                                \
+      __asm__ __volatile__(                                                    \
+         "csrr t0, fcsr\n\t"                                                   \
+         "csrw frm, %[rm]\n\t"                                                 \
+         "csrw fflags, zero\n\t"                                               \
+         inst " ft0, %[a1]\n\t"                                                \
+         "csrr %[res], fflags\n\t"                                             \
+         "csrw fcsr, t0\n\t"                                                   \
+         : [res] "=r"(res)                                                     \
+         : [a1] "f"(a1), [rm] "r"(rm_RISCV)                                    \
+         : "t0", "ft0");                                                       \
+      return res;                                                              \
+   } while (0)
+#define CALCULATE_FFLAGS_UNARY64_IF(inst)                                      \
+   do {                                                                        \
+      UInt res;                                                                \
+      __asm__ __volatile__(                                                    \
+         "csrr t0, fcsr\n\t"                                                   \
+         "csrw frm, %[rm]\n\t"                                                 \
+         "csrw fflags, zero\n\t"                                               \
+         inst " t1, %[a1]\n\t"                                                 \
+         "csrr %[res], fflags\n\t"                                             \
+         "csrw fcsr, t0\n\t"                                                   \
+         : [res] "=r"(res)                                                     \
+         : [a1] "f"(a1), [rm] "r"(rm_RISCV)                                    \
+         : "t0", "t1");                                                        \
+      return res;                                                              \
+   } while (0)
+#define CALCULATE_FFLAGS_UNARY64_FI(inst)                                      \
+   do {                                                                        \
+      UInt res;                                                                \
+      __asm__ __volatile__(                                                    \
+         "csrr t0, fcsr\n\t"                                                   \
+         "csrw frm, %[rm]\n\t"                                                 \
+         "csrw fflags, zero\n\t"                                               \
+         inst " ft0, %[a1]\n\t"                                                \
+         "csrr %[res], fflags\n\t"                                             \
+         "csrw fcsr, t0\n\t"                                                   \
+         : [res] "=r"(res)                                                     \
+         : [a1] "r"(a1), [rm] "r"(rm_RISCV)                                    \
+         : "t0", "ft0");                                                       \
+      return res;                                                              \
+   } while (0)
+/* clang-format on */
+#else
+/* No simulated version is currently implemented. */
+#define CALCULATE_FFLAGS_UNARY64_F(inst)                                       \
+   do {                                                                        \
+      (void)rm_RISCV;                                                          \
+      return 0;                                                                \
+   } while (0)
+#define CALCULATE_FFLAGS_UNARY64_IF(inst)                                      \
+   do {                                                                        \
+      (void)rm_RISCV;                                                          \
+      return 0;                                                                \
+   } while (0)
+#define CALCULATE_FFLAGS_UNARY64_FI(inst)                                      \
+   do {                                                                        \
+      (void)rm_RISCV;                                                          \
+      return 0;                                                                \
+   } while (0)
+#endif
+
+/* CALLED FROM GENERATED CODE: CLEAN HELPERS */
+UInt riscv64g_calculate_fflags_fsqrt_s(Float a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_F("fsqrt.s");
+}
+UInt riscv64g_calculate_fflags_fcvt_w_s(Float a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_IF("fcvt.w.s");
+}
+UInt riscv64g_calculate_fflags_fcvt_wu_s(Float a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_IF("fcvt.wu.s");
+}
+UInt riscv64g_calculate_fflags_fcvt_s_w(UInt a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_FI("fcvt.s.w");
+}
+UInt riscv64g_calculate_fflags_fcvt_s_wu(UInt a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_FI("fcvt.s.wu");
+}
+UInt riscv64g_calculate_fflags_fcvt_l_s(Float a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_IF("fcvt.l.s");
+}
+UInt riscv64g_calculate_fflags_fcvt_lu_s(Float a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_IF("fcvt.lu.s");
+}
+UInt riscv64g_calculate_fflags_fcvt_s_l(ULong a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_FI("fcvt.s.l");
+}
+UInt riscv64g_calculate_fflags_fcvt_s_lu(ULong a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_FI("fcvt.s.lu");
+}
+UInt riscv64g_calculate_fflags_fsqrt_d(Double a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_F("fsqrt.d");
+}
+UInt riscv64g_calculate_fflags_fcvt_s_d(Double a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_F("fcvt.s.d");
+}
+UInt riscv64g_calculate_fflags_fcvt_w_d(Double a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_IF("fcvt.w.d");
+}
+UInt riscv64g_calculate_fflags_fcvt_wu_d(Double a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_IF("fcvt.wu.d");
+}
+UInt riscv64g_calculate_fflags_fcvt_l_d(Double a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_IF("fcvt.l.d");
+}
+UInt riscv64g_calculate_fflags_fcvt_lu_d(Double a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_IF("fcvt.lu.d");
+}
+UInt riscv64g_calculate_fflags_fcvt_d_l(ULong a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_FI("fcvt.d.l");
+}
+UInt riscv64g_calculate_fflags_fcvt_d_lu(ULong a1, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_UNARY64_FI("fcvt.d.lu");
+}
+
+#if defined(__riscv) && (__riscv_xlen == 64)
+/* clang-format off */
+#define CALCULATE_FFLAGS_BINARY64(inst)                                        \
+   do {                                                                        \
+      UInt res;                                                                \
+      __asm__ __volatile__(                                                    \
+         "csrr t0, fcsr\n\t"                                                   \
+         "csrw frm, %[rm]\n\t"                                                 \
+         "csrw fflags, zero\n\t"                                               \
+         inst " %[a1], %[a1], %[a2]\n\t"                                       \
+         "csrr %[res], fflags\n\t"                                             \
+         "csrw fcsr, t0\n\t"                                                   \
+         : [res] "=r"(res)                                                     \
+         : [a1] "f"(a1), [a2] "f"(a2), [rm] "r"(rm_RISCV)                      \
+         : "t0");                                                              \
+      return res;                                                              \
+   } while (0)
+#define CALCULATE_FFLAGS_BINARY64_IFF(inst)                                    \
+   do {                                                                        \
+      UInt res;                                                                \
+      __asm__ __volatile__(                                                    \
+         "csrr t0, fcsr\n\t"                                                   \
+         "csrw frm, %[rm]\n\t"                                                 \
+         "csrw fflags, zero\n\t"                                               \
+         inst " t1, %[a1], %[a2]\n\t"                                          \
+         "csrr %[res], fflags\n\t"                                             \
+         "csrw fcsr, t0\n\t"                                                   \
+         : [res] "=r"(res)                                                     \
+         : [a1] "f"(a1), [a2] "f"(a2), [rm] "r"(rm_RISCV)                      \
+         : "t0", "t1");                                                        \
+      return res;                                                              \
+   } while (0)
+/* clang-format on */
+#else
+/* No simulated version is currently implemented. */
+#define CALCULATE_FFLAGS_BINARY64(inst)                                        \
+   do {                                                                        \
+      (void)rm_RISCV;                                                          \
+      return 0;                                                                \
+   } while (0)
+#define CALCULATE_FFLAGS_BINARY64_IFF(inst)                                    \
+   do {                                                                        \
+      (void)rm_RISCV;                                                          \
+      return 0;                                                                \
+   } while (0)
+#endif
+
+/* CALLED FROM GENERATED CODE: CLEAN HELPERS */
+UInt riscv64g_calculate_fflags_fadd_s(Float a1, Float a2, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_BINARY64("fadd.s");
+}
+UInt riscv64g_calculate_fflags_fmul_s(Float a1, Float a2, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_BINARY64("fmul.s");
+}
+UInt riscv64g_calculate_fflags_fdiv_s(Float a1, Float a2, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_BINARY64("fdiv.s");
+}
+UInt riscv64g_calculate_fflags_fadd_d(Double a1, Double a2, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_BINARY64("fadd.d");
+}
+UInt riscv64g_calculate_fflags_fmul_d(Double a1, Double a2, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_BINARY64("fmul.d");
+}
+UInt riscv64g_calculate_fflags_fdiv_d(Double a1, Double a2, UInt rm_RISCV)
+{
+   CALCULATE_FFLAGS_BINARY64("fdiv.d");
+}
+UInt riscv64g_calculate_fflags_fmin_s(Float a1, Float a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64("fmin.s");
+}
+UInt riscv64g_calculate_fflags_fmax_s(Float a1, Float a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64("fmax.s");
+}
+UInt riscv64g_calculate_fflags_feq_s(Float a1, Float a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64_IFF("feq.s");
+}
+UInt riscv64g_calculate_fflags_flt_s(Float a1, Float a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64_IFF("flt.s");
+}
+UInt riscv64g_calculate_fflags_fle_s(Float a1, Float a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64_IFF("fle.s");
+}
+UInt riscv64g_calculate_fflags_fmin_d(Double a1, Double a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64("fmin.d");
+}
+UInt riscv64g_calculate_fflags_fmax_d(Double a1, Double a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64("fmax.d");
+}
+UInt riscv64g_calculate_fflags_feq_d(Double a1, Double a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64_IFF("feq.d");
+}
+UInt riscv64g_calculate_fflags_flt_d(Double a1, Double a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64_IFF("flt.d");
+}
+UInt riscv64g_calculate_fflags_fle_d(Double a1, Double a2)
+{
+   UInt rm_RISCV = 0; /* unused */
+   CALCULATE_FFLAGS_BINARY64_IFF("fle.d");
+}
+
+#if defined(__riscv) && (__riscv_xlen == 64)
+/* clang-format off */
+#define CALCULATE_FFLAGS_TERNARY64(inst)                                       \
+   do {                                                                        \
+      UInt res;                                                                \
+      __asm__ __volatile__(                                                    \
+         "csrr t0, fcsr\n\t"                                                   \
+         "csrw frm, %[rm]\n\t"                                                 \
+         "csrw fflags, zero\n\t"                                               \
+         inst " %[a1], %[a1], %[a2], %[a3]\n\t"                                \
+         "csrr %[res], fflags\n\t"                                             \
+         "csrw fcsr, t0\n\t"                                                   \
+         : [res] "=r"(res)                                                     \
+         : [a1] "f"(a1), [a2] "f"(a2), [a3] "f"(a3), [rm] "r"(rm_RISCV)        \
+         : "t0");                                                              \
+      return res;                                                              \
+   } while (0)
+/* clang-format on */
+#else
+/* No simulated version is currently implemented. */
+#define CALCULATE_FFLAGS_TERNARY64(inst)                                       \
+   do {                                                                        \
+      (void)rm_RISCV;                                                          \
+      return 0;                                                                \
+   } while (0)
+#endif
+
+/* CALLED FROM GENERATED CODE: CLEAN HELPERS */
+UInt riscv64g_calculate_fflags_fmadd_s(Float a1,
+                                       Float a2,
+                                       Float a3,
+                                       UInt  rm_RISCV)
+{
+   CALCULATE_FFLAGS_TERNARY64("fmadd.s");
+}
+UInt riscv64g_calculate_fflags_fmadd_d(Double a1,
+                                       Double a2,
+                                       Double a3,
+                                       UInt   rm_RISCV)
+{
+   CALCULATE_FFLAGS_TERNARY64("fmadd.d");
+}
+
+#if defined(__riscv) && (__riscv_xlen == 64)
+/* clang-format off */
+#define CALCULATE_FCLASS(inst)                                                 \
+   do {                                                                        \
+      ULong res;                                                               \
+      __asm__ __volatile__(                                                    \
+         inst " %[res], %[a1]\n\t"                                             \
+         : [res] "=r"(res)                                                     \
+         : [a1] "f"(a1));                                                      \
+      return res;                                                              \
+   } while (0)
+/* clang-format on */
+#else
+/* No simulated version is currently implemented. */
+#define CALCULATE_FCLASS(inst)                                                 \
+   do {                                                                        \
+      return 0;                                                                \
+   } while (0)
+#endif
+
+/* CALLED FROM GENERATED CODE: CLEAN HELPERS */
+ULong riscv64g_calculate_fclass_s(Float a1) { CALCULATE_FCLASS("fclass.s"); }
+ULong riscv64g_calculate_fclass_d(Double a1) { CALCULATE_FCLASS("fclass.d"); }
+
+/*------------------------------------------------------------*/
+/*--- Flag-helpers translation-time function specialisers. ---*/
+/*--- These help iropt specialise calls the above run-time ---*/
+/*--- flags functions.                                     ---*/
+/*------------------------------------------------------------*/
+
+IRExpr* guest_riscv64_spechelper(const HChar* function_name,
+                                 IRExpr**     args,
+                                 IRStmt**     precedingStmts,
+                                 Int          n_precedingStmts)
+{
+   return NULL;
+}
+
+/*------------------------------------------------------------*/
+/*--- Helpers for dealing with, and describing, guest      ---*/
+/*--- state as a whole.                                    ---*/
+/*------------------------------------------------------------*/
+
+/* Initialise the entire riscv64 guest state. */
+/* VISIBLE TO LIBVEX CLIENT */
+void LibVEX_GuestRISCV64_initialise(/*OUT*/ VexGuestRISCV64State* vex_state)
+{
+   vex_bzero(vex_state, sizeof(*vex_state));
+}
+
+/* Figure out if any part of the guest state contained in minoff .. maxoff
+   requires precise memory exceptions. If in doubt return True (but this
+   generates significantly slower code).
+
+   By default we enforce precise exns for guest x2 (sp), x8 (fp) and pc only.
+   These are the minimum needed to extract correct stack backtraces from riscv64
+   code.
+
+   Only x2 (sp) is needed in mode VexRegUpdSpAtMemAccess.
+*/
+Bool guest_riscv64_state_requires_precise_mem_exns(Int                minoff,
+                                                   Int                maxoff,
+                                                   VexRegisterUpdates pxControl)
+{
+   Int fp_min = offsetof(VexGuestRISCV64State, guest_x8);
+   Int fp_max = fp_min + 8 - 1;
+   Int sp_min = offsetof(VexGuestRISCV64State, guest_x2);
+   Int sp_max = sp_min + 8 - 1;
+   Int pc_min = offsetof(VexGuestRISCV64State, guest_pc);
+   Int pc_max = pc_min + 8 - 1;
+
+   if (maxoff < sp_min || minoff > sp_max) {
+      /* No overlap with sp. */
+      if (pxControl == VexRegUpdSpAtMemAccess)
+         return False; /* We only need to check stack pointer. */
+   } else
+      return True;
+
+   if (maxoff < fp_min || minoff > fp_max) {
+      /* No overlap with fp. */
+   } else
+      return True;
+
+   if (maxoff < pc_min || minoff > pc_max) {
+      /* No overlap with pc. */
+   } else
+      return True;
+
+   return False;
+}
+
+#define ALWAYSDEFD(field)                                                      \
+   {                                                                           \
+      offsetof(VexGuestRISCV64State, field),                                   \
+         (sizeof((VexGuestRISCV64State*)0)->field)                             \
+   }
+
+VexGuestLayout riscv64guest_layout = {
+   /* Total size of the guest state, in bytes. */
+   .total_sizeB = sizeof(VexGuestRISCV64State),
+
+   /* Describe the stack pointer. */
+   .offset_SP = offsetof(VexGuestRISCV64State, guest_x2),
+   .sizeof_SP = 8,
+
+   /* Describe the frame pointer. */
+   .offset_FP = offsetof(VexGuestRISCV64State, guest_x8),
+   .sizeof_FP = 8,
+
+   /* Describe the instruction pointer. */
+   .offset_IP = offsetof(VexGuestRISCV64State, guest_pc),
+   .sizeof_IP = 8,
+
+   /* Describe any sections to be regarded by Memcheck as 'always-defined'. */
+   .n_alwaysDefd = 6,
+
+   .alwaysDefd = {
+      /* 0 */ ALWAYSDEFD(guest_x0),
+      /* 1 */ ALWAYSDEFD(guest_pc),
+      /* 2 */ ALWAYSDEFD(guest_EMNOTE),
+      /* 3 */ ALWAYSDEFD(guest_CMSTART),
+      /* 4 */ ALWAYSDEFD(guest_CMLEN),
+      /* 5 */ ALWAYSDEFD(guest_NRADDR),
+   },
+};
+
+/*--------------------------------------------------------------------*/
+/*--- end                                  guest_riscv64_helpers.c ---*/
+/*--------------------------------------------------------------------*/
diff --git a/VEX/priv/guest_riscv64_toIR.c b/VEX/priv/guest_riscv64_toIR.c
new file mode 100644
index 0000000..ee95805
--- /dev/null
+++ b/VEX/priv/guest_riscv64_toIR.c
@@ -0,0 +1,3550 @@
+
+/*--------------------------------------------------------------------*/
+/*--- begin                                   guest_riscv64_toIR.c ---*/
+/*--------------------------------------------------------------------*/
+
+/*
+   This file is part of Valgrind, a dynamic binary instrumentation
+   framework.
+
+   Copyright (C) 2020-2023 Petr Pavlu
+      petr.pavlu@xxxxxxxxxx
+
+   This program is free software; you can redistribute it and/or
+   modify it under the terms of the GNU General Public License as
+   published by the Free Software Foundation; either version 2 of the
+   License, or (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program; if not, see <http://www.gnu.org/licenses/>.
+
+   The GNU General Public License is contained in the file COPYING.
+*/
+
+/* Translates riscv64 code to IR. */
+
+/* "Special" instructions.
+
+   This instruction decoder can decode four special instructions which mean
+   nothing natively (are no-ops as far as regs/mem are concerned) but have
+   meaning for supporting Valgrind. A special instruction is flagged by
+   a 16-byte preamble:
+
+      00305013 00d05013 03305013 03d05013
+      (srli zero, zero, 3;   srli zero, zero, 13
+       srli zero, zero, 51;  srli zero, zero, 61)
+
+   Following that, one of the following 4 are allowed (standard interpretation
+   in parentheses):
+
+      00a56533 (or a0, a0, a0)   a3 = client_request ( a4 )
+      00b5e5b3 (or a1, a1, a1)   a3 = guest_NRADDR
+      00c66633 (or a2, a2, a2)   branch-and-link-to-noredir t0
+      00d6e6b3 (or a3, a3, a3)   IR injection
+
+   Any other bytes following the 16-byte preamble are illegal and constitute
+   a failure in instruction decoding. This all assumes that the preamble will
+   never occur except in specific code fragments designed for Valgrind to catch.
+*/
+
+#include "libvex_guest_riscv64.h"
+
+#include "guest_riscv64_defs.h"
+#include "main_globals.h"
+#include "main_util.h"
+
+/*------------------------------------------------------------*/
+/*--- Debugging output                                     ---*/
+/*------------------------------------------------------------*/
+
+#define DIP(format, args...)                                                   \
+   do {                                                                        \
+      if (vex_traceflags & VEX_TRACE_FE)                                       \
+         vex_printf(format, ##args);                                           \
+   } while (0)
+
+#define DIS(buf, format, args...)                                              \
+   do {                                                                        \
+      if (vex_traceflags & VEX_TRACE_FE)                                       \
+         vex_sprintf(buf, format, ##args);                                     \
+   } while (0)
+
+/*------------------------------------------------------------*/
+/*--- Helper bits and pieces for deconstructing the        ---*/
+/*--- riscv64 insn stream.                                 ---*/
+/*------------------------------------------------------------*/
+
+/* Do a little-endian load of a 32-bit word, regardless of the endianness of the
+   underlying host. */
+static inline UInt getUIntLittleEndianly(const UChar* p)
+{
+   UInt w = 0;
+   w      = (w << 8) | p[3];
+   w      = (w << 8) | p[2];
+   w      = (w << 8) | p[1];
+   w      = (w << 8) | p[0];
+   return w;
+}
+
+/* Read an instruction, which can be 16-bit (compressed) or 32-bit in
+   size. */
+static inline UInt getInsn(const UChar* p)
+{
+   Bool is_compressed = (p[0] & 0x3) != 0x3;
+   UInt w             = 0;
+   if (!is_compressed) {
+      w = (w << 8) | p[3];
+      w = (w << 8) | p[2];
+   }
+   w = (w << 8) | p[1];
+   w = (w << 8) | p[0];
+   return w;
+}
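+
+/* For example, the 16-bit encoding 0x0001 (c.nop) has its low two bits equal
+   to 0b01 and is returned as just that halfword, whereas 0x00000013 (nop,
+   i.e. addi zero, zero, 0) has low bits 0b11 and is read in full. */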
+
+/* Produce _uint[_bMax:_bMin]. */
+#define SLICE_UInt(_uint, _bMax, _bMin)                                        \
+   ((((UInt)(_uint)) >> (_bMin)) &                                             \
+    (UInt)((1ULL << ((_bMax) - (_bMin) + 1)) - 1ULL))
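+
+/* For example, SLICE_UInt(0xABCD, 15, 12) == 0xA and
+   SLICE_UInt(0xABCD, 7, 4) == 0xC. */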
+
+/*------------------------------------------------------------*/
+/*--- Helpers for constructing IR.                         ---*/
+/*------------------------------------------------------------*/
+
+/* Create an expression to produce a 64-bit constant. */
+static IRExpr* mkU64(ULong i) { return IRExpr_Const(IRConst_U64(i)); }
+
+/* Create an expression to produce a 32-bit constant. */
+static IRExpr* mkU32(UInt i) { return IRExpr_Const(IRConst_U32(i)); }
+
+/* Create an expression to produce an 8-bit constant. */
+static IRExpr* mkU8(UInt i)
+{
+   vassert(i < 256);
+   return IRExpr_Const(IRConst_U8((UChar)i));
+}
+
+/* Create an expression to read a temporary. */
+static IRExpr* mkexpr(IRTemp tmp) { return IRExpr_RdTmp(tmp); }
+
+/* Create a unary-operation expression. */
+static IRExpr* unop(IROp op, IRExpr* a) { return IRExpr_Unop(op, a); }
+
+/* Create a binary-operation expression. */
+static IRExpr* binop(IROp op, IRExpr* a1, IRExpr* a2)
+{
+   return IRExpr_Binop(op, a1, a2);
+}
+
+/* Create a ternary-operation expression. */
+static IRExpr* triop(IROp op, IRExpr* a1, IRExpr* a2, IRExpr* a3)
+{
+   return IRExpr_Triop(op, a1, a2, a3);
+}
+
+/* Create a quaternary-operation expression. */
+static IRExpr* qop(IROp op, IRExpr* a1, IRExpr* a2, IRExpr* a3, IRExpr* a4)
+{
+   return IRExpr_Qop(op, a1, a2, a3, a4);
+}
+
+/* Create an expression to load a value from memory (in little-endian
+   order). */
+static IRExpr* loadLE(IRType ty, IRExpr* addr)
+{
+   return IRExpr_Load(Iend_LE, ty, addr);
+}
+
+/* Add a statement to the list held by irsb. */
+static void stmt(/*MOD*/ IRSB* irsb, IRStmt* st) { addStmtToIRSB(irsb, st); }
+
+/* Add a statement to assign a value to a temporary. */
+static void assign(/*MOD*/ IRSB* irsb, IRTemp dst, IRExpr* e)
+{
+   stmt(irsb, IRStmt_WrTmp(dst, e));
+}
+
+/* Generate a statement to store a value in memory (in little-endian
+   order). */
+static void storeLE(/*MOD*/ IRSB* irsb, IRExpr* addr, IRExpr* data)
+{
+   stmt(irsb, IRStmt_Store(Iend_LE, addr, data));
+}
+
+/* Generate a new temporary of the given type. */
+static IRTemp newTemp(/*MOD*/ IRSB* irsb, IRType ty)
+{
+   vassert(isPlausibleIRType(ty));
+   return newIRTemp(irsb->tyenv, ty);
+}
+
+/* Sign-extend a 32/64-bit integer expression to 64 bits. */
+static IRExpr* widenSto64(IRType srcTy, IRExpr* e)
+{
+   switch (srcTy) {
+   case Ity_I64:
+      return e;
+   case Ity_I32:
+      return unop(Iop_32Sto64, e);
+   default:
+      vpanic("widenSto64(riscv64)");
+   }
+}
+
+/* Narrow a 64-bit integer expression to 32/64 bits. */
+static IRExpr* narrowFrom64(IRType dstTy, IRExpr* e)
+{
+   switch (dstTy) {
+   case Ity_I64:
+      return e;
+   case Ity_I32:
+      return unop(Iop_64to32, e);
+   default:
+      vpanic("narrowFrom64(riscv64)");
+   }
+}
+
+/*------------------------------------------------------------*/
+/*--- Offsets of various parts of the riscv64 guest state  ---*/
+/*------------------------------------------------------------*/
+
+#define OFFB_X0  offsetof(VexGuestRISCV64State, guest_x0)
+#define OFFB_X1  offsetof(VexGuestRISCV64State, guest_x1)
+#define OFFB_X2  offsetof(VexGuestRISCV64State, guest_x2)
+#define OFFB_X3  offsetof(VexGuestRISCV64State, guest_x3)
+#define OFFB_X4  offsetof(VexGuestRISCV64State, guest_x4)
+#define OFFB_X5  offsetof(VexGuestRISCV64State, guest_x5)
+#define OFFB_X6  offsetof(VexGuestRISCV64State, guest_x6)
+#define OFFB_X7  offsetof(VexGuestRISCV64State, guest_x7)
+#define OFFB_X8  offsetof(VexGuestRISCV64State, guest_x8)
+#define OFFB_X9  offsetof(VexGuestRISCV64State, guest_x9)
+#define OFFB_X10 offsetof(VexGuestRISCV64State, guest_x10)
+#define OFFB_X11 offsetof(VexGuestRISCV64State, guest_x11)
+#define OFFB_X12 offsetof(VexGuestRISCV64State, guest_x12)
+#define OFFB_X13 offsetof(VexGuestRISCV64State, guest_x13)
+#define OFFB_X14 offsetof(VexGuestRISCV64State, guest_x14)
+#define OFFB_X15 offsetof(VexGuestRISCV64State, guest_x15)
+#define OFFB_X16 offsetof(VexGuestRISCV64State, guest_x16)
+#define OFFB_X17 offsetof(VexGuestRISCV64State, guest_x17)
+#define OFFB_X18 offsetof(VexGuestRISCV64State, guest_x18)
+#define OFFB_X19 offsetof(VexGuestRISCV64State, guest_x19)
+#define OFFB_X20 offsetof(VexGuestRISCV64State, guest_x20)
+#define OFFB_X21 offsetof(VexGuestRISCV64State, guest_x21)
+#define OFFB_X22 offsetof(VexGuestRISCV64State, guest_x22)
+#define OFFB_X23 offsetof(VexGuestRISCV64State, guest_x23)
+#define OFFB_X24 offsetof(VexGuestRISCV64State, guest_x24)
+#define OFFB_X25 offsetof(VexGuestRISCV64State, guest_x25)
+#define OFFB_X26 offsetof(VexGuestRISCV64State, guest_x26)
+#define OFFB_X27 offsetof(VexGuestRISCV64State, guest_x27)
+#define OFFB_X28 offsetof(VexGuestRISCV64State, guest_x28)
+#define OFFB_X29 offsetof(VexGuestRISCV64State, guest_x29)
+#define OFFB_X30 offsetof(VexGuestRISCV64State, guest_x30)
+#define OFFB_X31 offsetof(VexGuestRISCV64State, guest_x31)
+#define OFFB_PC  offsetof(VexGuestRISCV64State, guest_pc)
+
+#define OFFB_F0   offsetof(VexGuestRISCV64State, guest_f0)
+#define OFFB_F1   offsetof(VexGuestRISCV64State, guest_f1)
+#define OFFB_F2   offsetof(VexGuestRISCV64State, guest_f2)
+#define OFFB_F3   offsetof(VexGuestRISCV64State, guest_f3)
+#define OFFB_F4   offsetof(VexGuestRISCV64State, guest_f4)
+#define OFFB_F5   offsetof(VexGuestRISCV64State, guest_f5)
+#define OFFB_F6   offsetof(VexGuestRISCV64State, guest_f6)
+#define OFFB_F7   offsetof(VexGuestRISCV64State, guest_f7)
+#define OFFB_F8   offsetof(VexGuestRISCV64State, guest_f8)
+#define OFFB_F9   offsetof(VexGuestRISCV64State, guest_f9)
+#define OFFB_F10  offsetof(VexGuestRISCV64State, guest_f10)
+#define OFFB_F11  offsetof(VexGuestRISCV64State, guest_f11)
+#define OFFB_F12  offsetof(VexGuestRISCV64State, guest_f12)
+#define OFFB_F13  offsetof(VexGuestRISCV64State, guest_f13)
+#define OFFB_F14  offsetof(VexGuestRISCV64State, guest_f14)
+#define OFFB_F15  offsetof(VexGuestRISCV64State, guest_f15)
+#define OFFB_F16  offsetof(VexGuestRISCV64State, guest_f16)
+#define OFFB_F17  offsetof(VexGuestRISCV64State, guest_f17)
+#define OFFB_F18  offsetof(VexGuestRISCV64State, guest_f18)
+#define OFFB_F19  offsetof(VexGuestRISCV64State, guest_f19)
+#define OFFB_F20  offsetof(VexGuestRISCV64State, guest_f20)
+#define OFFB_F21  offsetof(VexGuestRISCV64State, guest_f21)
+#define OFFB_F22  offsetof(VexGuestRISCV64State, guest_f22)
+#define OFFB_F23  offsetof(VexGuestRISCV64State, guest_f23)
+#define OFFB_F24  offsetof(VexGuestRISCV64State, guest_f24)
+#define OFFB_F25  offsetof(VexGuestRISCV64State, guest_f25)
+#define OFFB_F26  offsetof(VexGuestRISCV64State, guest_f26)
+#define OFFB_F27  offsetof(VexGuestRISCV64State, guest_f27)
+#define OFFB_F28  offsetof(VexGuestRISCV64State, guest_f28)
+#define OFFB_F29  offsetof(VexGuestRISCV64State, guest_f29)
+#define OFFB_F30  offsetof(VexGuestRISCV64State, guest_f30)
+#define OFFB_F31  offsetof(VexGuestRISCV64State, guest_f31)
+#define OFFB_FCSR offsetof(VexGuestRISCV64State, guest_fcsr)
+
+#define OFFB_EMNOTE  offsetof(VexGuestRISCV64State, guest_EMNOTE)
+#define OFFB_CMSTART offsetof(VexGuestRISCV64State, guest_CMSTART)
+#define OFFB_CMLEN   offsetof(VexGuestRISCV64State, guest_CMLEN)
+#define OFFB_NRADDR  offsetof(VexGuestRISCV64State, guest_NRADDR)
+
+#define OFFB_LLSC_SIZE offsetof(VexGuestRISCV64State, guest_LLSC_SIZE)
+#define OFFB_LLSC_ADDR offsetof(VexGuestRISCV64State, guest_LLSC_ADDR)
+#define OFFB_LLSC_DATA offsetof(VexGuestRISCV64State, guest_LLSC_DATA)
+
+/*------------------------------------------------------------*/
+/*--- Integer registers                                    ---*/
+/*------------------------------------------------------------*/
+
+static Int offsetIReg64(UInt iregNo)
+{
+   switch (iregNo) {
+   case 0:
+      return OFFB_X0;
+   case 1:
+      return OFFB_X1;
+   case 2:
+      return OFFB_X2;
+   case 3:
+      return OFFB_X3;
+   case 4:
+      return OFFB_X4;
+   case 5:
+      return OFFB_X5;
+   case 6:
+      return OFFB_X6;
+   case 7:
+      return OFFB_X7;
+   case 8:
+      return OFFB_X8;
+   case 9:
+      return OFFB_X9;
+   case 10:
+      return OFFB_X10;
+   case 11:
+      return OFFB_X11;
+   case 12:
+      return OFFB_X12;
+   case 13:
+      return OFFB_X13;
+   case 14:
+      return OFFB_X14;
+   case 15:
+      return OFFB_X15;
+   case 16:
+      return OFFB_X16;
+   case 17:
+      return OFFB_X17;
+   case 18:
+      return OFFB_X18;
+   case 19:
+      return OFFB_X19;
+   case 20:
+      return OFFB_X20;
+   case 21:
+      return OFFB_X21;
+   case 22:
+      return OFFB_X22;
+   case 23:
+      return OFFB_X23;
+   case 24:
+      return OFFB_X24;
+   case 25:
+      return OFFB_X25;
+   case 26:
+      return OFFB_X26;
+   case 27:
+      return OFFB_X27;
+   case 28:
+      return OFFB_X28;
+   case 29:
+      return OFFB_X29;
+   case 30:
+      return OFFB_X30;
+   case 31:
+      return OFFB_X31;
+   default:
+      vassert(0);
+   }
+}
+
+/* Obtain ABI name of a register. */
+static const HChar* nameIReg(UInt iregNo)
+{
+   vassert(iregNo < 32);
+   static const HChar* names[32] = {
+      "zero", "ra", "sp", "gp", "tp",  "t0",  "t1", "t2", "s0", "s1", "a0",
+      "a1",   "a2", "a3", "a4", "a5",  "a6",  "a7", "s2", "s3", "s4", "s5",
+      "s6",   "s7", "s8", "s9", "s10", "s11", "t3", "t4", "t5", "t6"};
+   return names[iregNo];
+}
+
+/* Read a 64-bit value from a guest integer register. */
+static IRExpr* getIReg64(UInt iregNo)
+{
+   vassert(iregNo < 32);
+   return IRExpr_Get(offsetIReg64(iregNo), Ity_I64);
+}
+
+/* Write a 64-bit value into a guest integer register. */
+static void putIReg64(/*OUT*/ IRSB* irsb, UInt iregNo, /*IN*/ IRExpr* e)
+{
+   vassert(iregNo > 0 && iregNo < 32);
+   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_I64);
+   stmt(irsb, IRStmt_Put(offsetIReg64(iregNo), e));
+}
+
+/* Read a 32-bit value from a guest integer register. */
+static IRExpr* getIReg32(UInt iregNo)
+{
+   vassert(iregNo < 32);
+   return unop(Iop_64to32, IRExpr_Get(offsetIReg64(iregNo), Ity_I64));
+}
+
+/* Write a 32-bit value into a guest integer register. */
+static void putIReg32(/*OUT*/ IRSB* irsb, UInt iregNo, /*IN*/ IRExpr* e)
+{
+   vassert(iregNo > 0 && iregNo < 32);
+   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_I32);
+   stmt(irsb, IRStmt_Put(offsetIReg64(iregNo), unop(Iop_32Sto64, e)));
+}
+
+/* Write an address into the guest pc. */
+static void putPC(/*OUT*/ IRSB* irsb, /*IN*/ IRExpr* e)
+{
+   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_I64);
+   stmt(irsb, IRStmt_Put(OFFB_PC, e));
+}
+
+/*------------------------------------------------------------*/
+/*--- Floating-point registers                             ---*/
+/*------------------------------------------------------------*/
+
+static Int offsetFReg(UInt fregNo)
+{
+   switch (fregNo) {
+   case 0:
+      return OFFB_F0;
+   case 1:
+      return OFFB_F1;
+   case 2:
+      return OFFB_F2;
+   case 3:
+      return OFFB_F3;
+   case 4:
+      return OFFB_F4;
+   case 5:
+      return OFFB_F5;
+   case 6:
+      return OFFB_F6;
+   case 7:
+      return OFFB_F7;
+   case 8:
+      return OFFB_F8;
+   case 9:
+      return OFFB_F9;
+   case 10:
+      return OFFB_F10;
+   case 11:
+      return OFFB_F11;
+   case 12:
+      return OFFB_F12;
+   case 13:
+      return OFFB_F13;
+   case 14:
+      return OFFB_F14;
+   case 15:
+      return OFFB_F15;
+   case 16:
+      return OFFB_F16;
+   case 17:
+      return OFFB_F17;
+   case 18:
+      return OFFB_F18;
+   case 19:
+      return OFFB_F19;
+   case 20:
+      return OFFB_F20;
+   case 21:
+      return OFFB_F21;
+   case 22:
+      return OFFB_F22;
+   case 23:
+      return OFFB_F23;
+   case 24:
+      return OFFB_F24;
+   case 25:
+      return OFFB_F25;
+   case 26:
+      return OFFB_F26;
+   case 27:
+      return OFFB_F27;
+   case 28:
+      return OFFB_F28;
+   case 29:
+      return OFFB_F29;
+   case 30:
+      return OFFB_F30;
+   case 31:
+      return OFFB_F31;
+   default:
+      vassert(0);
+   }
+}
+
+/* Obtain ABI name of a register. */
+static const HChar* nameFReg(UInt fregNo)
+{
+   vassert(fregNo < 32);
+   static const HChar* names[32] = {
+      "ft0", "ft1", "ft2",  "ft3",  "ft4", "ft5", "ft6",  "ft7",
+      "fs0", "fs1", "fa0",  "fa1",  "fa2", "fa3", "fa4",  "fa5",
+      "fa6", "fa7", "fs2",  "fs3",  "fs4", "fs5", "fs6",  "fs7",
+      "fs8", "fs9", "fs10", "fs11", "ft8", "ft9", "ft10", "ft11"};
+   return names[fregNo];
+}
+
+/* Read a 64-bit value from a guest floating-point register. */
+static IRExpr* getFReg64(UInt fregNo)
+{
+   vassert(fregNo < 32);
+   return IRExpr_Get(offsetFReg(fregNo), Ity_F64);
+}
+
+/* Write a 64-bit value into a guest floating-point register. */
+static void putFReg64(/*OUT*/ IRSB* irsb, UInt fregNo, /*IN*/ IRExpr* e)
+{
+   vassert(fregNo < 32);
+   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_F64);
+   stmt(irsb, IRStmt_Put(offsetFReg(fregNo), e));
+}
+
+/* Read a 32-bit value from a guest floating-point register. */
+static IRExpr* getFReg32(UInt fregNo)
+{
+   vassert(fregNo < 32);
+   /* Note that the following access depends on the host being little-endian
+      which is checked in disInstr_RISCV64(). */
+   IRExpr* f64       = getFReg64(fregNo);
+   IRExpr* high_half = unop(Iop_64HIto32, unop(Iop_ReinterpF64asI64, f64));
+   IRExpr* cond      = binop(Iop_CmpEQ32, high_half, mkU32(0xffffffff));
+   IRExpr* res       = IRExpr_ITE(
+      cond, IRExpr_Get(offsetFReg(fregNo), Ity_F32),
+      /* canonical nan */ unop(Iop_ReinterpI32asF32, mkU32(0x7fc00000)));
+   return res;
+}
+
+/* Write a 32-bit value into a guest floating-point register. */
+static void putFReg32(/*OUT*/ IRSB* irsb, UInt fregNo, /*IN*/ IRExpr* e)
+{
+   vassert(fregNo < 32);
+   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_F32);
+   /* Note that the following access depends on the host being little-endian
+      which is checked in disInstr_RISCV64(). */
+   Int offset = offsetFReg(fregNo);
+   stmt(irsb, IRStmt_Put(offset, e));
+   /* Write 1's in the upper bits of the target 64-bit register to create
+      a NaN-boxed value, as mandated by the RISC-V ISA. */
+   stmt(irsb, IRStmt_Put(offset + 4, mkU32(0xffffffff)));
+   /* TODO Check that this works with Memcheck. */
+}
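+
+/* For instance, writing the single-precision value 1.0 (bit pattern
+   0x3f800000) through putFReg32() leaves 0xffffffff3f800000 in the 64-bit
+   state slot; getFReg32() above accepts exactly such NaN-boxed values and
+   yields the canonical NaN for anything else. */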
+
+/* Read a 32-bit value from the fcsr. */
+static IRExpr* getFCSR(void) { return IRExpr_Get(OFFB_FCSR, Ity_I32); }
+
+/* Write a 32-bit value into the fcsr. */
+static void putFCSR(/*OUT*/ IRSB* irsb, /*IN*/ IRExpr* e)
+{
+   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_I32);
+   stmt(irsb, IRStmt_Put(OFFB_FCSR, e));
+}
+
+/* Accumulate exception flags in fcsr. */
+static void accumulateFFLAGS(/*OUT*/ IRSB* irsb, /*IN*/ IRExpr* e)
+{
+   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_I32);
+   putFCSR(irsb, binop(Iop_Or32, getFCSR(), binop(Iop_And32, e, mkU32(0x1f))));
+}
+
+/* Generate IR to get hold of the rounding mode in both RISC-V and IR
+   formats. A floating-point operation can use either a static rounding mode
+   encoded in the instruction, or a dynamic rounding mode held in fcsr. Bind the
+   final result to the passed temporaries (which are allocated by the function).
+ */
+static void mk_get_rounding_mode(/*MOD*/ IRSB*   irsb,
+                                 /*OUT*/ IRTemp* rm_RISCV,
+                                 /*OUT*/ IRTemp* rm_IR,
+                                 UInt            inst_rm_RISCV)
+{
+   /*
+      rounding mode                | RISC-V |  IR
+      --------------------------------------------
+      to nearest, ties to even     |   000  | 0000
+      to zero                      |   001  | 0011
+      to +infinity                 |   010  | 0010
+      to -infinity                 |   011  | 0001
+      to nearest, ties away from 0 |   100  | 0100
+      invalid                      |   101  | 1000
+      invalid                      |   110  | 1000
+      dynamic                      |   111  | 1000
+
+      The 'dynamic' value selects the mode from fcsr. Its value is valid when
+      encoded in the instruction but naturally invalid when found in fcsr.
+
+      Static mode is known at the decode time and can be directly expressed by
+      a respective rounding mode IR constant.
+
+      Dynamic mode requires a runtime mapping from the RISC-V to the IR mode.
+      It can be implemented using the following transformation:
+         t0 = fcsr_rm_RISCV - 20
+         t1 = t0 >> 2
+         t2 = fcsr_rm_RISCV + 3
+         t3 = t2 ^ 3
+         rm_IR = t1 & t3
+   */
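+   /* As a worked example of the dynamic mapping: if fcsr holds rm = 001 (to
+      zero) then t0 = 1 - 20 = 0xffffffed, t1 = t0 >> 2 = 0x3ffffffb,
+      t2 = 1 + 3 = 4, t3 = 4 ^ 3 = 7, and rm_IR = t1 & t3 = 0b0011, i.e.
+      Irrm_ZERO, matching the table above. */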
+   *rm_RISCV = newTemp(irsb, Ity_I32);
+   *rm_IR    = newTemp(irsb, Ity_I32);
+   switch (inst_rm_RISCV) {
+   case 0b000:
+      assign(irsb, *rm_RISCV, mkU32(0));
+      assign(irsb, *rm_IR, mkU32(Irrm_NEAREST));
+      break;
+   case 0b001:
+      assign(irsb, *rm_RISCV, mkU32(1));
+      assign(irsb, *rm_IR, mkU32(Irrm_ZERO));
+      break;
+   case 0b010:
+      assign(irsb, *rm_RISCV, mkU32(2));
+      assign(irsb, *rm_IR, mkU32(Irrm_PosINF));
+      break;
+   case 0b011:
+      assign(irsb, *rm_RISCV, mkU32(3));
+      assign(irsb, *rm_IR, mkU32(Irrm_NegINF));
+      break;
+   case 0b100:
+      assign(irsb, *rm_RISCV, mkU32(4));
+      assign(irsb, *rm_IR, mkU32(Irrm_NEAREST_TIE_AWAY_0));
+      break;
+   case 0b101:
+      assign(irsb, *rm_RISCV, mkU32(5));
+      assign(irsb, *rm_IR, mkU32(Irrm_INVALID));
+      break;
+   case 0b110:
+      assign(irsb, *rm_RISCV, mkU32(6));
+      assign(irsb, *rm_IR, mkU32(Irrm_INVALID));
+      break;
+   case 0b111: {
+      assign(irsb, *rm_RISCV,
+             binop(Iop_And32, binop(Iop_Shr32, getFCSR(), mkU8(5)), mkU32(7)));
+      IRTemp t0 = newTemp(irsb, Ity_I32);
+      assign(irsb, t0, binop(Iop_Sub32, mkexpr(*rm_RISCV), mkU32(20)));
+      IRTemp t1 = newTemp(irsb, Ity_I32);
+      assign(irsb, t1, binop(Iop_Shr32, mkexpr(t0), mkU8(2)));
+      IRTemp t2 = newTemp(irsb, Ity_I32);
+      assign(irsb, t2, binop(Iop_Add32, mkexpr(*rm_RISCV), mkU32(3)));
+      IRTemp t3 = newTemp(irsb, Ity_I32);
+      assign(irsb, t3, binop(Iop_Xor32, mkexpr(t2), mkU32(3)));
+      assign(irsb, *rm_IR, binop(Iop_And32, mkexpr(t1), mkexpr(t3)));
+      break;
+   }
+   default:
+      vassert(0);
+   }
+}
+
+/*------------------------------------------------------------*/
+/*--- Name helpers                                         ---*/
+/*------------------------------------------------------------*/
+
+/* Obtain an acquire/release atomic-instruction suffix. */
+static const HChar* nameAqRlSuffix(UInt aqrl)
+{
+   switch (aqrl) {
+   case 0b00:
+      return "";
+   case 0b01:
+      return ".rl";
+   case 0b10:
+      return ".aq";
+   case 0b11:
+      return ".aqrl";
+   default:
+      vpanic("nameAqRlSuffix(riscv64)");
+   }
+}
+
+/* Obtain a control/status register name. */
+static const HChar* nameCSR(UInt csr)
+{
+   switch (csr) {
+   case 0x001:
+      return "fflags";
+   case 0x002:
+      return "frm";
+   case 0x003:
+      return "fcsr";
+   default:
+      vpanic("nameCSR(riscv64)");
+   }
+}
+
+/* Obtain a floating-point rounding-mode operand string. */
+static const HChar* nameRMOperand(UInt rm)
+{
+   switch (rm) {
+   case 0b000:
+      return ", rne";
+   case 0b001:
+      return ", rtz";
+   case 0b010:
+      return ", rdn";
+   case 0b011:
+      return ", rup";
+   case 0b100:
+      return ", rmm";
+   case 0b101:
+      return ", <invalid>";
+   case 0b110:
+      return ", <invalid>";
+   case 0b111:
+      return ""; /* dyn */
+   default:
+      vpanic("nameRMOperand(riscv64)");
+   }
+}
+
+/*------------------------------------------------------------*/
+/*--- Disassemble a single instruction                     ---*/
+/*------------------------------------------------------------*/
+
+/* A macro to fish bits out of 'insn' which is a local variable to all
+   disassembly functions. */
+#define INSN(_bMax, _bMin) SLICE_UInt(insn, (_bMax), (_bMin))
+
+static Bool dis_RV64C(/*MB_OUT*/ DisResult* dres,
+                      /*OUT*/ IRSB*         irsb,
+                      UInt                  insn,
+                      Addr                  guest_pc_curr_instr,
+                      Bool                  sigill_diag)
+{
+   vassert(INSN(1, 0) == 0b00 || INSN(1, 0) == 0b01 || INSN(1, 0) == 0b10);
+
+   /* ---- RV64C compressed instruction set, quadrant 0 ----- */
+
+   /* ------------- c.addi4spn rd, nzuimm[9:2] -------------- */
+   if (INSN(1, 0) == 0b00 && INSN(15, 13) == 0b000) {
+      UInt rd = INSN(4, 2) + 8;
+      UInt nzuimm9_2 =
+         INSN(10, 7) << 4 | INSN(12, 11) << 2 | INSN(5, 5) << 1 | INSN(6, 6);
+      if (nzuimm9_2 == 0) {
+         /* Invalid C.ADDI4SPN, fall through. */
+      } else {
+         ULong uimm = nzuimm9_2 << 2;
+         putIReg64(irsb, rd,
+                   binop(Iop_Add64, getIReg64(2 /*x2/sp*/), mkU64(uimm)));
+         DIP("c.addi4spn %s, %llu\n", nameIReg(rd), uimm);
+         return True;
+      }
+   }
+
+   /* -------------- c.fld rd, uimm[7:3](rs1) --------------- */
+   if (INSN(1, 0) == 0b00 && INSN(15, 13) == 0b001) {
+      UInt  rd      = INSN(4, 2) + 8;
+      UInt  rs1     = INSN(9, 7) + 8;
+      UInt  uimm7_3 = INSN(6, 5) << 3 | INSN(12, 10);
+      ULong uimm    = uimm7_3 << 3;
+      putFReg64(irsb, rd,
+                loadLE(Ity_F64, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm))));
+      DIP("c.fld %s, %llu(%s)\n", nameFReg(rd), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   /* --------------- c.lw rd, uimm[6:2](rs1) --------------- */
+   if (INSN(1, 0) == 0b00 && INSN(15, 13) == 0b010) {
+      UInt  rd      = INSN(4, 2) + 8;
+      UInt  rs1     = INSN(9, 7) + 8;
+      UInt  uimm6_2 = INSN(5, 5) << 4 | INSN(12, 10) << 1 | INSN(6, 6);
+      ULong uimm    = uimm6_2 << 2;
+      putIReg64(
+         irsb, rd,
+         unop(Iop_32Sto64,
+              loadLE(Ity_I32, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm)))));
+      DIP("c.lw %s, %llu(%s)\n", nameIReg(rd), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   /* --------------- c.ld rd, uimm[7:3](rs1) --------------- */
+   if (INSN(1, 0) == 0b00 && INSN(15, 13) == 0b011) {
+      UInt  rd      = INSN(4, 2) + 8;
+      UInt  rs1     = INSN(9, 7) + 8;
+      UInt  uimm7_3 = INSN(6, 5) << 3 | INSN(12, 10);
+      ULong uimm    = uimm7_3 << 3;
+      putIReg64(irsb, rd,
+                loadLE(Ity_I64, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm))));
+      DIP("c.ld %s, %llu(%s)\n", nameIReg(rd), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   /* -------------- c.fsd rs2, uimm[7:3](rs1) -------------- */
+   if (INSN(1, 0) == 0b00 && INSN(15, 13) == 0b101) {
+      UInt  rs1     = INSN(9, 7) + 8;
+      UInt  rs2     = INSN(4, 2) + 8;
+      UInt  uimm7_3 = INSN(6, 5) << 3 | INSN(12, 10);
+      ULong uimm    = uimm7_3 << 3;
+      storeLE(irsb, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm)),
+              getFReg64(rs2));
+      DIP("c.fsd %s, %llu(%s)\n", nameFReg(rs2), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   /* -------------- c.sw rs2, uimm[6:2](rs1) --------------- */
+   if (INSN(1, 0) == 0b00 && INSN(15, 13) == 0b110) {
+      UInt  rs1     = INSN(9, 7) + 8;
+      UInt  rs2     = INSN(4, 2) + 8;
+      UInt  uimm6_2 = INSN(5, 5) << 4 | INSN(12, 10) << 1 | INSN(6, 6);
+      ULong uimm    = uimm6_2 << 2;
+      storeLE(irsb, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm)),
+              unop(Iop_64to32, getIReg64(rs2)));
+      DIP("c.sw %s, %llu(%s)\n", nameIReg(rs2), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   /* -------------- c.sd rs2, uimm[7:3](rs1) --------------- */
+   if (INSN(1, 0) == 0b00 && INSN(15, 13) == 0b111) {
+      UInt  rs1     = INSN(9, 7) + 8;
+      UInt  rs2     = INSN(4, 2) + 8;
+      UInt  uimm7_3 = INSN(6, 5) << 3 | INSN(12, 10);
+      ULong uimm    = uimm7_3 << 3;
+      storeLE(irsb, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm)),
+              getIReg64(rs2));
+      DIP("c.sd %s, %llu(%s)\n", nameIReg(rs2), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   /* ---- RV64C compressed instruction set, quadrant 1 ----- */
+
+   /* ------------------------ c.nop ------------------------ */
+   if (INSN(15, 0) == 0b0000000000000001) {
+      DIP("c.nop\n");
+      return True;
+   }
+
+   /* -------------- c.addi rd_rs1, nzimm[5:0] -------------- */
+   if (INSN(1, 0) == 0b01 && INSN(15, 13) == 0b000) {
+      UInt rd_rs1   = INSN(11, 7);
+      UInt nzimm5_0 = INSN(12, 12) << 5 | INSN(6, 2);
+      if (rd_rs1 == 0 || nzimm5_0 == 0) {
+         /* Invalid C.ADDI, fall through. */
+      } else {
+         ULong simm = vex_sx_to_64(nzimm5_0, 6);
+         putIReg64(irsb, rd_rs1,
+                   binop(Iop_Add64, getIReg64(rd_rs1), mkU64(simm)));
+         DIP("c.addi %s, %lld\n", nameIReg(rd_rs1), (Long)simm);
+         return True;
+      }
+   }
+
+   /* -------------- c.addiw rd_rs1, imm[5:0] --------------- */
+   if (INSN(1, 0) == 0b01 && INSN(15, 13) == 0b001) {
+      UInt rd_rs1 = INSN(11, 7);
+      UInt imm5_0 = INSN(12, 12) << 5 | INSN(6, 2);
+      if (rd_rs1 == 0) {
+         /* Invalid C.ADDIW, fall through. */
+      } else {
+         UInt simm = (UInt)vex_sx_to_64(imm5_0, 6);
+         putIReg32(irsb, rd_rs1,
+                   binop(Iop_Add32, getIReg32(rd_rs1), mkU32(simm)));
+         DIP("c.addiw %s, %d\n", nameIReg(rd_rs1), (Int)simm);
+         return True;
+      }
+   }
+
+   /* ------------------ c.li rd, imm[5:0] ------------------ */
+   if (INSN(1, 0) == 0b01 && INSN(15, 13) == 0b010) {
+      UInt rd     = INSN(11, 7);
+      UInt imm5_0 = INSN(12, 12) << 5 | INSN(6, 2);
+      if (rd == 0) {
+         /* Invalid C.LI, fall through. */
+      } else {
+         ULong simm = vex_sx_to_64(imm5_0, 6);
+         putIReg64(irsb, rd, mkU64(simm));
+         DIP("c.li %s, %lld\n", nameIReg(rd), (Long)simm);
+         return True;
+      }
+   }
+
+   /* ---------------- c.addi16sp nzimm[9:4] ---------------- */
+   if (INSN(1, 0) == 0b01 && INSN(15, 13) == 0b011) {
+      UInt rd_rs1   = INSN(11, 7);
+      UInt nzimm9_4 = INSN(12, 12) << 5 | INSN(4, 3) << 3 | INSN(5, 5) << 2 |
+                      INSN(2, 2) << 1 | INSN(6, 6);
+      if (rd_rs1 != 2 || nzimm9_4 == 0) {
+         /* Invalid C.ADDI16SP, fall through. */
+      } else {
+         ULong simm = vex_sx_to_64(nzimm9_4 << 4, 10);
+         putIReg64(irsb, rd_rs1,
+                   binop(Iop_Add64, getIReg64(rd_rs1), mkU64(simm)));
+         DIP("c.addi16sp %lld\n", (Long)simm);
+         return True;
+      }
+   }
+
+   /* --------------- c.lui rd, nzimm[17:12] ---------------- */
+   if (INSN(1, 0) == 0b01 && INSN(15, 13) == 0b011) {
+      UInt rd         = INSN(11, 7);
+      UInt nzimm17_12 = INSN(12, 12) << 5 | INSN(6, 2);
+      if (rd == 0 || rd == 2 || nzimm17_12 == 0) {
+         /* Invalid C.LUI, fall through. */
+      } else {
+         putIReg64(irsb, rd, mkU64(vex_sx_to_64(nzimm17_12 << 12, 18)));
+         DIP("c.lui %s, 0x%x\n", nameIReg(rd), nzimm17_12);
+         return True;
+      }
+   }
+
+   /* ---------- c.{srli,srai} rd_rs1, nzuimm[5:0] ---------- */
+   if (INSN(1, 0) == 0b01 && INSN(11, 11) == 0b0 && INSN(15, 13) == 0b100) {
+      Bool is_log    = INSN(10, 10) == 0b0;
+      UInt rd_rs1    = INSN(9, 7) + 8;
+      UInt nzuimm5_0 = INSN(12, 12) << 5 | INSN(6, 2);
+      if (nzuimm5_0 == 0) {
+         /* Invalid C.{SRLI,SRAI}, fall through. */
+      } else {
+         putIReg64(irsb, rd_rs1,
+                   binop(is_log ? Iop_Shr64 : Iop_Sar64, getIReg64(rd_rs1),
+                         mkU8(nzuimm5_0)));
+         DIP("c.%s %s, %u\n", is_log ? "srli" : "srai", nameIReg(rd_rs1),
+             nzuimm5_0);
+         return True;
+      }
+   }
+
+   /* --------------- c.andi rd_rs1, imm[5:0] --------------- */
+   if (INSN(1, 0) == 0b01 && INSN(11, 10) == 0b10 && INSN(15, 13) == 0b100) {
+      UInt rd_rs1 = INSN(9, 7) + 8;
+      UInt imm5_0 = INSN(12, 12) << 5 | INSN(6, 2);
+      if (rd_rs1 == 0) {
+         /* Invalid C.ANDI, fall through. */
+      } else {
+         ULong simm = vex_sx_to_64(imm5_0, 6);
+         putIReg64(irsb, rd_rs1,
+                   binop(Iop_And64, getIReg64(rd_rs1), mkU64(simm)));
+         DIP("c.andi %s, 0x%llx\n", nameIReg(rd_rs1), simm);
+         return True;
+      }
+   }
+
+   /* ----------- c.{sub,xor,or,and} rd_rs1, rs2 ----------- */
+   if (INSN(1, 0) == 0b01 && INSN(15, 10) == 0b100011) {
+      UInt         funct2 = INSN(6, 5);
+      UInt         rd_rs1 = INSN(9, 7) + 8;
+      UInt         rs2    = INSN(4, 2) + 8;
+      const HChar* name;
+      IROp         op;
+      switch (funct2) {
+      case 0b00:
+         name = "sub";
+         op   = Iop_Sub64;
+         break;
+      case 0b01:
+         name = "xor";
+         op   = Iop_Xor64;
+         break;
+      case 0b10:
+         name = "or";
+         op   = Iop_Or64;
+         break;
+      case 0b11:
+         name = "and";
+         op   = Iop_And64;
+         break;
+      default:
+         vassert(0);
+      }
+      putIReg64(irsb, rd_rs1, binop(op, getIReg64(rd_rs1), getIReg64(rs2)));
+      DIP("c.%s %s, %s\n", name, nameIReg(rd_rs1), nameIReg(rs2));
+      return True;
+   }
+
+   /* -------------- c.{subw,addw} rd_rs1, rs2 -------------- */
+   if (INSN(1, 0) == 0b01 && INSN(6, 6) == 0b0 && INSN(15, 10) == 0b100111) {
+      Bool is_sub = INSN(5, 5) == 0b0;
+      UInt rd_rs1 = INSN(9, 7) + 8;
+      UInt rs2    = INSN(4, 2) + 8;
+      putIReg32(irsb, rd_rs1,
+                binop(is_sub ? Iop_Sub32 : Iop_Add32, getIReg32(rd_rs1),
+                      getIReg32(rs2)));
+      DIP("c.%s %s, %s\n", is_sub ? "subw" : "addw", nameIReg(rd_rs1),
+          nameIReg(rs2));
+      return True;
+   }
+
+   /* -------------------- c.j imm[11:1] -------------------- */
+   if (INSN(1, 0) == 0b01 && INSN(15, 13) == 0b101) {
+      UInt imm11_1 = INSN(12, 12) << 10 | INSN(8, 8) << 9 | INSN(10, 9) << 7 |
+                     INSN(6, 6) << 6 | INSN(7, 7) << 5 | INSN(2, 2) << 4 |
+                     INSN(11, 11) << 3 | INSN(5, 3);
+      ULong simm   = vex_sx_to_64(imm11_1 << 1, 12);
+      ULong dst_pc = guest_pc_curr_instr + simm;
+      putPC(irsb, mkU64(dst_pc));
+      dres->whatNext    = Dis_StopHere;
+      dres->jk_StopHere = Ijk_Boring;
+      DIP("c.j 0x%llx\n", dst_pc);
+      return True;
+   }
+
+   /* ------------- c.{beqz,bnez} rs1, imm[8:1] ------------- */
+   if (INSN(1, 0) == 0b01 && INSN(15, 14) == 0b11) {
+      Bool is_eq  = INSN(13, 13) == 0b0;
+      UInt rs1    = INSN(9, 7) + 8;
+      UInt imm8_1 = INSN(12, 12) << 7 | INSN(6, 5) << 5 | INSN(2, 2) << 4 |
+                    INSN(11, 10) << 2 | INSN(4, 3);
+      ULong simm   = vex_sx_to_64(imm8_1 << 1, 9);
+      ULong dst_pc = guest_pc_curr_instr + simm;
+      stmt(irsb, IRStmt_Exit(binop(is_eq ? Iop_CmpEQ64 : Iop_CmpNE64,
+                                   getIReg64(rs1), mkU64(0)),
+                             Ijk_Boring, IRConst_U64(dst_pc), OFFB_PC));
+      putPC(irsb, mkU64(guest_pc_curr_instr + 2));
+      dres->whatNext    = Dis_StopHere;
+      dres->jk_StopHere = Ijk_Boring;
+      DIP("c.%s %s, 0x%llx\n", is_eq ? "beqz" : "bnez", nameIReg(rs1), dst_pc);
+      return True;
+   }
+
+   /* ---- RV64C compressed instruction set, quadrant 2 ----- */
+
+   /* ------------- c.slli rd_rs1, nzuimm[5:0] -------------- */
+   if (INSN(1, 0) == 0b10 && INSN(15, 13) == 0b000) {
+      UInt rd_rs1    = INSN(11, 7);
+      UInt nzuimm5_0 = INSN(12, 12) << 5 | INSN(6, 2);
+      if (rd_rs1 == 0 || nzuimm5_0 == 0) {
+         /* Invalid C.SLLI, fall through. */
+      } else {
+         putIReg64(irsb, rd_rs1,
+                   binop(Iop_Shl64, getIReg64(rd_rs1), mkU8(nzuimm5_0)));
+         DIP("c.slli %s, %u\n", nameIReg(rd_rs1), nzuimm5_0);
+         return True;
+      }
+   }
+
+   /* -------------- c.fldsp rd, uimm[8:3](x2) -------------- */
+   if (INSN(1, 0) == 0b10 && INSN(15, 13) == 0b001) {
+      UInt  rd      = INSN(11, 7);
+      UInt  rs1     = 2; /* base=x2/sp */
+      UInt  uimm8_3 = INSN(4, 2) << 3 | INSN(12, 12) << 2 | INSN(6, 5);
+      ULong uimm    = uimm8_3 << 3;
+      putFReg64(irsb, rd,
+                loadLE(Ity_F64, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm))));
+      DIP("c.fldsp %s, %llu(%s)\n", nameFReg(rd), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   /* -------------- c.lwsp rd, uimm[7:2](x2) --------------- */
+   if (INSN(1, 0) == 0b10 && INSN(15, 13) == 0b010) {
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = 2; /* base=x2/sp */
+      UInt uimm7_2 = INSN(3, 2) << 4 | INSN(12, 12) << 3 | INSN(6, 4);
+      if (rd == 0) {
+         /* Invalid C.LWSP, fall through. */
+      } else {
+         ULong uimm = uimm7_2 << 2;
+         putIReg64(irsb, rd,
+                   unop(Iop_32Sto64,
+                        loadLE(Ity_I32,
+                               binop(Iop_Add64, getIReg64(rs1), mkU64(uimm)))));
+         DIP("c.lwsp %s, %llu(%s)\n", nameIReg(rd), uimm, nameIReg(rs1));
+         return True;
+      }
+   }
+
+   /* -------------- c.ldsp rd, uimm[8:3](x2) --------------- */
+   if (INSN(1, 0) == 0b10 && INSN(15, 13) == 0b011) {
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = 2; /* base=x2/sp */
+      UInt uimm8_3 = INSN(4, 2) << 3 | INSN(12, 12) << 2 | INSN(6, 5);
+      if (rd == 0) {
+         /* Invalid C.LDSP, fall through. */
+      } else {
+         ULong uimm = uimm8_3 << 3;
+         putIReg64(
+            irsb, rd,
+            loadLE(Ity_I64, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm))));
+         DIP("c.ldsp %s, %llu(%s)\n", nameIReg(rd), uimm, nameIReg(rs1));
+         return True;
+      }
+   }
+
+   /* ---------------------- c.jr rs1 ----------------------- */
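+   /* An indirect jump through x1/ra is reported as Ijk_Ret so that
+      Valgrind's call/return tracking stays balanced; any other base
+      register is treated as a plain boring jump. */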
+   if (INSN(1, 0) == 0b10 && INSN(15, 12) == 0b1000) {
+      UInt rs1 = INSN(11, 7);
+      UInt rs2 = INSN(6, 2);
+      if (rs1 == 0 || rs2 != 0) {
+         /* Invalid C.JR, fall through. */
+      } else {
+         putPC(irsb, getIReg64(rs1));
+         dres->whatNext = Dis_StopHere;
+         if (rs1 == 1 /*x1/ra*/) {
+            dres->jk_StopHere = Ijk_Ret;
+            DIP("c.ret\n");
+         } else {
+            dres->jk_StopHere = Ijk_Boring;
+            DIP("c.jr %s\n", nameIReg(rs1));
+         }
+         return True;
+      }
+   }
+
+   /* -------------------- c.mv rd, rs2 --------------------- */
+   if (INSN(1, 0) == 0b10 && INSN(15, 12) == 0b1000) {
+      UInt rd  = INSN(11, 7);
+      UInt rs2 = INSN(6, 2);
+      if (rd == 0 || rs2 == 0) {
+         /* Invalid C.MV, fall through. */
+      } else {
+         putIReg64(irsb, rd, getIReg64(rs2));
+         DIP("c.mv %s, %s\n", nameIReg(rd), nameIReg(rs2));
+         return True;
+      }
+   }
+
+   /* --------------------- c.ebreak ------------------------ */
+   if (INSN(15, 0) == 0b1001000000000010) {
+      putPC(irsb, mkU64(guest_pc_curr_instr + 2));
+      dres->whatNext    = Dis_StopHere;
+      dres->jk_StopHere = Ijk_SigTRAP;
+      DIP("c.ebreak\n");
+      return True;
+   }
+
+   /* --------------------- c.jalr rs1 ---------------------- */
+   if (INSN(1, 0) == 0b10 && INSN(15, 12) == 0b1001) {
+      UInt rs1 = INSN(11, 7);
+      UInt rs2 = INSN(6, 2);
+      if (rs1 == 0 || rs2 != 0) {
+         /* Invalid C.JALR, fall through. */
+      } else {
+         putIReg64(irsb, 1 /*x1/ra*/, mkU64(guest_pc_curr_instr + 2));
+         putPC(irsb, getIReg64(rs1));
+         dres->whatNext    = Dis_StopHere;
+         dres->jk_StopHere = Ijk_Call;
+         DIP("c.jalr %s\n", nameIReg(rs1));
+         return True;
+      }
+   }
+
+   /* ------------------ c.add rd_rs1, rs2 ------------------ */
+   if (INSN(1, 0) == 0b10 && INSN(15, 12) == 0b1001) {
+      UInt rd_rs1 = INSN(11, 7);
+      UInt rs2    = INSN(6, 2);
+      if (rd_rs1 == 0 || rs2 == 0) {
+         /* Invalid C.ADD, fall through. */
+      } else {
+         putIReg64(irsb, rd_rs1,
+                   binop(Iop_Add64, getIReg64(rd_rs1), getIReg64(rs2)));
+         DIP("c.add %s, %s\n", nameIReg(rd_rs1), nameIReg(rs2));
+         return True;
+      }
+   }
+
+   /* ------------- c.fsdsp rs2, uimm[8:3](x2) -------------- */
+   if (INSN(1, 0) == 0b10 && INSN(15, 13) == 0b101) {
+      UInt  rs1     = 2; /* base=x2/sp */
+      UInt  rs2     = INSN(6, 2);
+      UInt  uimm8_3 = INSN(9, 7) << 3 | INSN(12, 10);
+      ULong uimm    = uimm8_3 << 3;
+      storeLE(irsb, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm)),
+              getFReg64(rs2));
+      DIP("c.fsdsp %s, %llu(%s)\n", nameFReg(rs2), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   /* -------------- c.swsp rs2, uimm[7:2](x2) -------------- */
+   if (INSN(1, 0) == 0b10 && INSN(15, 13) == 0b110) {
+      UInt  rs1     = 2; /* base=x2/sp */
+      UInt  rs2     = INSN(6, 2);
+      UInt  uimm7_2 = INSN(8, 7) << 4 | INSN(12, 9);
+      ULong uimm    = uimm7_2 << 2;
+      storeLE(irsb, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm)),
+              unop(Iop_64to32, getIReg64(rs2)));
+      DIP("c.swsp %s, %llu(%s)\n", nameIReg(rs2), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   /* -------------- c.sdsp rs2, uimm[8:3](x2) -------------- */
+   if (INSN(1, 0) == 0b10 && INSN(15, 13) == 0b111) {
+      UInt  rs1     = 2; /* base=x2/sp */
+      UInt  rs2     = INSN(6, 2);
+      UInt  uimm8_3 = INSN(9, 7) << 3 | INSN(12, 10);
+      ULong uimm    = uimm8_3 << 3;
+      storeLE(irsb, binop(Iop_Add64, getIReg64(rs1), mkU64(uimm)),
+              getIReg64(rs2));
+      DIP("c.sdsp %s, %llu(%s)\n", nameIReg(rs2), uimm, nameIReg(rs1));
+      return True;
+   }
+
+   if (sigill_diag)
+      vex_printf("RISCV64 front end: compressed\n");
+   return False;
+}
+
+static Bool dis_RV64I(/*MB_OUT*/ DisResult* dres,
+                      /*OUT*/ IRSB*         irsb,
+                      UInt                  insn,
+                      Addr                  guest_pc_curr_instr)
+{
+   /* ------------- RV64I base instruction set -------------- */
+
+   /* ----------------- lui rd, imm[31:12] ------------------ */
+   if (INSN(6, 0) == 0b0110111) {
+      UInt rd       = INSN(11, 7);
+      UInt imm31_12 = INSN(31, 12);
+      if (rd != 0)
+         putIReg64(irsb, rd, mkU64(vex_sx_to_64(imm31_12 << 12, 32)));
+      DIP("lui %s, 0x%x\n", nameIReg(rd), imm31_12);
+      return True;
+   }
+
+   /* ---------------- auipc rd, imm[31:12] ----------------- */
+   if (INSN(6, 0) == 0b0010111) {
+      UInt rd       = INSN(11, 7);
+      UInt imm31_12 = INSN(31, 12);
+      if (rd != 0)
+         putIReg64(
+            irsb, rd,
+            mkU64(guest_pc_curr_instr + vex_sx_to_64(imm31_12 << 12, 32)));
+      DIP("auipc %s, 0x%x\n", nameIReg(rd), imm31_12);
+      return True;
+   }
+
+   /* ------------------ jal rd, imm[20:1] ------------------ */
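+   /* jal writes the link address (pc+4) before redirecting. With rd=x0 it
+      degenerates into a plain unconditional jump, which is also how it is
+      printed. */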
+   if (INSN(6, 0) == 0b1101111) {
+      UInt rd      = INSN(11, 7);
+      UInt imm20_1 = INSN(31, 31) << 19 | INSN(19, 12) << 11 |
+                     INSN(20, 20) << 10 | INSN(30, 21);
+      ULong simm   = vex_sx_to_64(imm20_1 << 1, 21);
+      ULong dst_pc = guest_pc_curr_instr + simm;
+      if (rd != 0)
+         putIReg64(irsb, rd, mkU64(guest_pc_curr_instr + 4));
+      putPC(irsb, mkU64(dst_pc));
+      dres->whatNext = Dis_StopHere;
+      if (rd != 0) {
+         dres->jk_StopHere = Ijk_Call;
+         DIP("jal %s, 0x%llx\n", nameIReg(rd), dst_pc);
+      } else {
+         dres->jk_StopHere = Ijk_Boring;
+         DIP("j 0x%llx\n", dst_pc);
+      }
+      return True;
+   }
+
+   /* --------------- jalr rd, imm[11:0](rs1) --------------- */
+   if (INSN(6, 0) == 0b1100111 && INSN(14, 12) == 0b000) {
+      UInt   rd      = INSN(11, 7);
+      UInt   rs1     = INSN(19, 15);
+      UInt   imm11_0 = INSN(31, 20);
+      ULong  simm    = vex_sx_to_64(imm11_0, 12);
+      IRTemp dst_pc  = newTemp(irsb, Ity_I64);
+      assign(irsb, dst_pc, binop(Iop_Add64, getIReg64(rs1), mkU64(simm)));
+      if (rd != 0)
+         putIReg64(irsb, rd, mkU64(guest_pc_curr_instr + 4));
+      putPC(irsb, mkexpr(dst_pc));
+      dres->whatNext = Dis_StopHere;
+      if (rd == 0) {
+         if (rs1 == 1 /*x1/ra*/ && simm == 0) {
+            dres->jk_StopHere = Ijk_Ret;
+            DIP("ret\n");
+         } else {
+            dres->jk_StopHere = Ijk_Boring;
+            DIP("jr %lld(%s)\n", (Long)simm, nameIReg(rs1));
+         }
+      } else {
+         dres->jk_StopHere = Ijk_Call;
+         DIP("jalr %s, %lld(%s)\n", nameIReg(rd), (Long)simm, nameIReg(rs1));
+      }
+      return True;
+   }
+
+   /* ------------ {beq,bne} rs1, rs2, imm[12:1] ------------ */
+   /* ------------ {blt,bge} rs1, rs2, imm[12:1] ------------ */
+   /* ----------- {bltu,bgeu} rs1, rs2, imm[12:1] ----------- */
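+   /* There is no "greater-or-equal" IROp, so bge/bgeu are expressed as
+      CmpLE64S/CmpLE64U with the operands swapped. */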
+   if (INSN(6, 0) == 0b1100011) {
+      UInt funct3  = INSN(14, 12);
+      UInt rs1     = INSN(19, 15);
+      UInt rs2     = INSN(24, 20);
+      UInt imm12_1 = INSN(31, 31) << 11 | INSN(7, 7) << 10 | INSN(30, 25) << 4 |
+                     INSN(11, 8);
+      if (funct3 == 0b010 || funct3 == 0b011) {
+         /* Invalid B<x>, fall through. */
+      } else {
+         ULong        simm   = vex_sx_to_64(imm12_1 << 1, 13);
+         ULong        dst_pc = guest_pc_curr_instr + simm;
+         const HChar* name;
+         IRExpr*      cond;
+         switch (funct3) {
+         case 0b000:
+            name = "beq";
+            cond = binop(Iop_CmpEQ64, getIReg64(rs1), getIReg64(rs2));
+            break;
+         case 0b001:
+            name = "bne";
+            cond = binop(Iop_CmpNE64, getIReg64(rs1), getIReg64(rs2));
+            break;
+         case 0b100:
+            name = "blt";
+            cond = binop(Iop_CmpLT64S, getIReg64(rs1), getIReg64(rs2));
+            break;
+         case 0b101:
+            name = "bge";
+            cond = binop(Iop_CmpLE64S, getIReg64(rs2), getIReg64(rs1));
+            break;
+         case 0b110:
+            name = "bltu";
+            cond = binop(Iop_CmpLT64U, getIReg64(rs1), getIReg64(rs2));
+            break;
+         case 0b111:
+            name = "bgeu";
+            cond = binop(Iop_CmpLE64U, getIReg64(rs2), getIReg64(rs1));
+            break;
+         default:
+            vassert(0);
+         }
+         stmt(irsb,
+              IRStmt_Exit(cond, Ijk_Boring, IRConst_U64(dst_pc), OFFB_PC));
+         putPC(irsb, mkU64(guest_pc_curr_instr + 4));
+         dres->whatNext    = Dis_StopHere;
+         dres->jk_StopHere = Ijk_Boring;
+         DIP("%s %s, %s, 0x%llx\n", name, nameIReg(rs1), nameIReg(rs2), dst_pc);
+         return True;
+      }
+   }
+
+   /* ---------- {lb,lh,lw,ld} rd, imm[11:0](rs1) ----------- */
+   /* ---------- {lbu,lhu,lwu} rd, imm[11:0](rs1) ----------- */
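+   /* Every load widens its result to the full 64-bit register: funct3
+      bit 2 selects zero- versus sign-extension and bits 1:0 the access
+      width. */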
+   if (INSN(6, 0) == 0b0000011) {
+      UInt funct3  = INSN(14, 12);
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = INSN(19, 15);
+      UInt imm11_0 = INSN(31, 20);
+      if (funct3 == 0b111) {
+         /* Invalid L<x>, fall through. */
+      } else {
+         ULong simm = vex_sx_to_64(imm11_0, 12);
+         if (rd != 0) {
+            IRExpr* ea = binop(Iop_Add64, getIReg64(rs1), mkU64(simm));
+            IRExpr* expr;
+            switch (funct3) {
+            case 0b000:
+               expr = unop(Iop_8Sto64, loadLE(Ity_I8, ea));
+               break;
+            case 0b001:
+               expr = unop(Iop_16Sto64, loadLE(Ity_I16, ea));
+               break;
+            case 0b010:
+               expr = unop(Iop_32Sto64, loadLE(Ity_I32, ea));
+               break;
+            case 0b011:
+               expr = loadLE(Ity_I64, ea);
+               break;
+            case 0b100:
+               expr = unop(Iop_8Uto64, loadLE(Ity_I8, ea));
+               break;
+            case 0b101:
+               expr = unop(Iop_16Uto64, loadLE(Ity_I16, ea));
+               break;
+            case 0b110:
+               expr = unop(Iop_32Uto64, loadLE(Ity_I32, ea));
+               break;
+            default:
+               vassert(0);
+            }
+            putIReg64(irsb, rd, expr);
+         }
+         const HChar* name;
+         switch (funct3) {
+         case 0b000:
+            name = "lb";
+            break;
+         case 0b001:
+            name = "lh";
+            break;
+         case 0b010:
+            name = "lw";
+            break;
+         case 0b011:
+            name = "ld";
+            break;
+         case 0b100:
+            name = "lbu";
+            break;
+         case 0b101:
+            name = "lhu";
+            break;
+         case 0b110:
+            name = "lwu";
+            break;
+         default:
+            vassert(0);
+         }
+         DIP("%s %s, %lld(%s)\n", name, nameIReg(rd), (Long)simm,
+             nameIReg(rs1));
+         return True;
+      }
+   }
+
+   /* ---------- {sb,sh,sw,sd} rs2, imm[11:0](rs1) ---------- */
+   if (INSN(6, 0) == 0b0100011) {
+      UInt funct3  = INSN(14, 12);
+      UInt rs1     = INSN(19, 15);
+      UInt rs2     = INSN(24, 20);
+      UInt imm11_0 = INSN(31, 25) << 5 | INSN(11, 7);
+      if (funct3 == 0b100 || funct3 == 0b101 || funct3 == 0b110 ||
+          funct3 == 0b111) {
+         /* Invalid S<x>, fall through. */
+      } else {
+         ULong        simm = vex_sx_to_64(imm11_0, 12);
+         IRExpr*      ea   = binop(Iop_Add64, getIReg64(rs1), mkU64(simm));
+         const HChar* name;
+         IRExpr*      expr;
+         switch (funct3) {
+         case 0b000:
+            name = "sb";
+            expr = unop(Iop_64to8, getIReg64(rs2));
+            break;
+         case 0b001:
+            name = "sh";
+            expr = unop(Iop_64to16, getIReg64(rs2));
+            break;
+         case 0b010:
+            name = "sw";
+            expr = unop(Iop_64to32, getIReg64(rs2));
+            break;
+         case 0b011:
+            name = "sd";
+            expr = getIReg64(rs2);
+            break;
+         default:
+            vassert(0);
+         }
+         storeLE(irsb, ea, expr);
+         DIP("%s %s, %lld(%s)\n", name, nameIReg(rs2), (Long)simm,
+             nameIReg(rs1));
+         return True;
+      }
+   }
+
+   /* -------- {addi,slti,sltiu} rd, rs1, imm[11:0] --------- */
+   /* --------- {xori,ori,andi} rd, rs1, imm[11:0] ---------- */
+   if (INSN(6, 0) == 0b0010011) {
+      UInt funct3  = INSN(14, 12);
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = INSN(19, 15);
+      UInt imm11_0 = INSN(31, 20);
+      if (funct3 == 0b001 || funct3 == 0b101) {
+         /* Invalid <x>I, fall through. */
+      } else {
+         ULong simm = vex_sx_to_64(imm11_0, 12);
+         if (rd != 0) {
+            IRExpr* expr;
+            switch (funct3) {
+            case 0b000:
+               expr = binop(Iop_Add64, getIReg64(rs1), mkU64(simm));
+               break;
+            case 0b010:
+               expr = unop(Iop_1Uto64,
+                           binop(Iop_CmpLT64S, getIReg64(rs1), mkU64(simm)));
+               break;
+            case 0b011:
+               /* Note that the comparison itself is unsigned but the immediate
+                  is sign-extended. */
+               expr = unop(Iop_1Uto64,
+                           binop(Iop_CmpLT64U, getIReg64(rs1), mkU64(simm)));
+               break;
+            case 0b100:
+               expr = binop(Iop_Xor64, getIReg64(rs1), mkU64(simm));
+               break;
+            case 0b110:
+               expr = binop(Iop_Or64, getIReg64(rs1), mkU64(simm));
+               break;
+            case 0b111:
+               expr = binop(Iop_And64, getIReg64(rs1), mkU64(simm));
+               break;
+            default:
+               vassert(0);
+            }
+            putIReg64(irsb, rd, expr);
+         }
+         const HChar* name;
+         switch (funct3) {
+         case 0b000:
+            name = "addi";
+            break;
+         case 0b010:
+            name = "slti";
+            break;
+         case 0b011:
+            name = "sltiu";
+            break;
+         case 0b100:
+            name = "xori";
+            break;
+         case 0b110:
+            name = "ori";
+            break;
+         case 0b111:
+            name = "andi";
+            break;
+         default:
+            vassert(0);
+         }
+         DIP("%s %s, %s, %lld\n", name, nameIReg(rd), nameIReg(rs1),
+             (Long)simm);
+         return True;
+      }
+   }
+
+   /* --------------- slli rd, rs1, uimm[5:0] --------------- */
+   if (INSN(6, 0) == 0b0010011 && INSN(14, 12) == 0b001 &&
+       INSN(31, 26) == 0b000000) {
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = INSN(19, 15);
+      UInt uimm5_0 = INSN(25, 20);
+      if (rd != 0)
+         putIReg64(irsb, rd, binop(Iop_Shl64, getIReg64(rs1), mkU8(uimm5_0)));
+      DIP("slli %s, %s, %u\n", nameIReg(rd), nameIReg(rs1), uimm5_0);
+      return True;
+   }
+
+   /* ----------- {srli,srai} rd, rs1, uimm[5:0] ------------ */
+   if (INSN(6, 0) == 0b0010011 && INSN(14, 12) == 0b101 &&
+       INSN(29, 26) == 0b0000 && INSN(31, 31) == 0b0) {
+      Bool is_log  = INSN(30, 30) == 0b0;
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = INSN(19, 15);
+      UInt uimm5_0 = INSN(25, 20);
+      if (rd != 0)
+         putIReg64(irsb, rd,
+                   binop(is_log ? Iop_Shr64 : Iop_Sar64, getIReg64(rs1),
+                         mkU8(uimm5_0)));
+      DIP("%s %s, %s, %u\n", is_log ? "srli" : "srai", nameIReg(rd),
+          nameIReg(rs1), uimm5_0);
+      return True;
+   }
+
+   /* --------------- {add,sub} rd, rs1, rs2 ---------------- */
+   /* ------------- {sll,srl,sra} rd, rs1, rs2 -------------- */
+   /* --------------- {slt,sltu} rd, rs1, rs2 --------------- */
+   /* -------------- {xor,or,and} rd, rs1, rs2 -------------- */
+   if (INSN(6, 0) == 0b0110011 && INSN(29, 25) == 0b00000 &&
+       INSN(31, 31) == 0b0) {
+      UInt funct3  = INSN(14, 12);
+      Bool is_base = INSN(30, 30) == 0b0;
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = INSN(19, 15);
+      UInt rs2     = INSN(24, 20);
+      if (!is_base && funct3 != 0b000 && funct3 != 0b101) {
+         /* Invalid <x>, fall through. */
+      } else {
+         if (rd != 0) {
+            IRExpr* expr;
+            switch (funct3) {
+            case 0b000: /* add/sub */
+               expr = binop(is_base ? Iop_Add64 : Iop_Sub64, getIReg64(rs1),
+                            getIReg64(rs2));
+               break;
+            case 0b001:
+               expr = binop(Iop_Shl64, getIReg64(rs1),
+                            unop(Iop_64to8, getIReg64(rs2)));
+               break;
+            case 0b010:
+               expr = unop(Iop_1Uto64,
+                           binop(Iop_CmpLT64S, getIReg64(rs1), getIReg64(rs2)));
+               break;
+            case 0b011:
+               expr = unop(Iop_1Uto64,
+                           binop(Iop_CmpLT64U, getIReg64(rs1), getIReg64(rs2)));
+               break;
+            case 0b100:
+               expr = binop(Iop_Xor64, getIReg64(rs1), getIReg64(rs2));
+               break;
+            case 0b101:
+               expr = binop(is_base ? Iop_Shr64 : Iop_Sar64, getIReg64(rs1),
+                            unop(Iop_64to8, getIReg64(rs2)));
+               break;
+            case 0b110:
+               expr = binop(Iop_Or64, getIReg64(rs1), getIReg64(rs2));
+               break;
+            case 0b111:
+               expr = binop(Iop_And64, getIReg64(rs1), getIReg64(rs2));
+               break;
+            default:
+               vassert(0);
+            }
+            putIReg64(irsb, rd, expr);
+         }
+         const HChar* name;
+         switch (funct3) {
+         case 0b000:
+            name = is_base ? "add" : "sub";
+            break;
+         case 0b001:
+            name = "sll";
+            break;
+         case 0b010:
+            name = "slt";
+            break;
+         case 0b011:
+            name = "sltu";
+            break;
+         case 0b100:
+            name = "xor";
+            break;
+         case 0b101:
+            name = is_base ? "srl" : "sra";
+            break;
+         case 0b110:
+            name = "or";
+            break;
+         case 0b111:
+            name = "and";
+            break;
+         default:
+            vassert(0);
+         }
+         DIP("%s %s, %s, %s\n", name, nameIReg(rd), nameIReg(rs1),
+             nameIReg(rs2));
+         return True;
+      }
+   }
+
+   /* ------------------------ fence ------------------------ */
+   if (INSN(19, 0) == 0b00000000000000001111) {
+      UInt   fm = INSN(31, 28);
+      UInt succ = INSN(23, 20);
+      UInt pred = INSN(27, 24);
+      if ((fm == 0b1000 && pred == 0b0011 && succ == 0b0011)
+          || fm == 0b0000)
+      {
+         if (fm == 0b1000)
+            DIP("fence.tso\n");
+         else if (pred == 0b1111 && succ == 0b1111)
+            DIP("fence\n");
+         else
+            DIP("fence %s%s%s%s,%s%s%s%s\n", (pred & 0x8) ? "i" : "",
+                (pred & 0x4) ? "o" : "", (pred & 0x2) ? "r" : "",
+                (pred & 0x1) ? "w" : "", (succ & 0x8) ? "i" : "",
+                (succ & 0x4) ? "o" : "", (succ & 0x2) ? "r" : "",
+                (succ & 0x1) ? "w" : "");
+         stmt(irsb, IRStmt_MBE(Imbe_Fence));
+         return True;
+      }
+   }
+
+   /* ------------------------ ecall ------------------------ */
+   if (INSN(31, 0) == 0b00000000000000000000000001110011) {
+      putPC(irsb, mkU64(guest_pc_curr_instr + 4));
+      dres->whatNext    = Dis_StopHere;
+      dres->jk_StopHere = Ijk_Sys_syscall;
+      DIP("ecall\n");
+      return True;
+   }
+
+   /* ------------------------ ebreak ------------------------ */
+   if (INSN(31, 0) == 0b00000000000100000000000001110011) {
+      putPC(irsb, mkU64(guest_pc_curr_instr + 4));
+      dres->whatNext    = Dis_StopHere;
+      dres->jk_StopHere = Ijk_SigTRAP;
+      DIP("ebreak\n");
+      return True;
+   }
+
+   /* -------------- addiw rd, rs1, imm[11:0] --------------- */
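+   /* The *W variants operate on the low 32 bits. putIReg32 (defined
+      earlier in this file) is expected to sign-extend the 32-bit result
+      into the full 64-bit guest register, as RV64I requires. */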
+   if (INSN(6, 0) == 0b0011011 && INSN(14, 12) == 0b000) {
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = INSN(19, 15);
+      UInt imm11_0 = INSN(31, 20);
+      UInt simm    = (UInt)vex_sx_to_64(imm11_0, 12);
+      if (rd != 0)
+         putIReg32(irsb, rd, binop(Iop_Add32, getIReg32(rs1), mkU32(simm)));
+      DIP("addiw %s, %s, %d\n", nameIReg(rd), nameIReg(rs1), (Int)simm);
+      return True;
+   }
+
+   /* -------------- slliw rd, rs1, uimm[4:0] --------------- */
+   if (INSN(6, 0) == 0b0011011 && INSN(14, 12) == 0b001 &&
+       INSN(31, 25) == 0b0000000) {
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = INSN(19, 15);
+      UInt uimm4_0 = INSN(24, 20);
+      if (rd != 0)
+         putIReg32(irsb, rd, binop(Iop_Shl32, getIReg32(rs1), mkU8(uimm4_0)));
+      DIP("slliw %s, %s, %u\n", nameIReg(rd), nameIReg(rs1), uimm4_0);
+      return True;
+   }
+
+   /* ---------- {srliw,sraiw} rd, rs1, uimm[4:0] ----------- */
+   if (INSN(6, 0) == 0b0011011 && INSN(14, 12) == 0b101 &&
+       INSN(29, 25) == 0b00000 && INSN(31, 31) == 0b0) {
+      Bool is_log  = INSN(30, 30) == 0b0;
+      UInt rd      = INSN(11, 7);
+      UInt rs1     = INSN(19, 15);
+      UInt uimm4_0 = INSN(24, 20);
+      if (rd != 0)
+         putIReg32(irsb, rd,
+                   binop(is_log ? Iop_Shr32 : Iop_Sar32, getIReg32(rs1),
+                         mkU8(uimm4_0)));
+      DIP("%s %s, %s, %u\n", is_log ? "srliw" : "sraiw", nameIReg(rd),
+          nameIReg(rs1), uimm4_0);
+      return True;
+   }
+
+   /* -------------- {addw,subw} rd, rs1, rs2 --------------- */
+   if (INSN(6, 0) == 0b0111011 && INSN(14, 12) == 0b000 &&
+       INSN(29, 25) == 0b00000 && INSN(31, 31) == 0b0) {
+      Bool is_add = INSN(30, 30) == 0b0;
+      UInt rd     = INSN(11, 7);
+      UInt rs1    = INSN(19, 15);
+      UInt rs2    = INSN(24, 20);
+      if (rd != 0)
+         putIReg32(irsb, rd,
+                   binop(is_add ? Iop_Add32 : Iop_Sub32, getIReg32(rs1),
+                         getIReg32(rs2)));
+      DIP("%s %s, %s, %s\n", is_add ? "addw" : "subw", nameIReg(rd),
+          nameIReg(rs1), nameIReg(rs2));
+      return True;
+   }
+
+   /* ------------------ sllw rd, rs1, rs2 ------------------ */
+   if (INSN(6, 0) == 0b0111011 && INSN(14, 12) == 0b001 &&
+       INSN(31, 25) == 0b0000000) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rd != 0)
+         putIReg32(
+            irsb, rd,
+            binop(Iop_Shl32, getIReg32(rs1), unop(Iop_64to8, getIReg64(rs2))));
+      DIP("sllw %s, %s, %s\n", nameIReg(rd), nameIReg(rs1), nameIReg(rs2));
+      return True;
+   }
+
+   /* -------------- {srlw,sraw} rd, rs1, rs2 --------------- */
+   if (INSN(6, 0) == 0b0111011 && INSN(14, 12) == 0b101 &&
+       INSN(29, 25) == 0b00000 && INSN(31, 31) == 0b0) {
+      Bool is_log = INSN(30, 30) == 0b0;
+      UInt rd     = INSN(11, 7);
+      UInt rs1    = INSN(19, 15);
+      UInt rs2    = INSN(24, 20);
+      if (rd != 0)
+         putIReg32(irsb, rd,
+                   binop(is_log ? Iop_Shr32 : Iop_Sar32, getIReg32(rs1),
+                         unop(Iop_64to8, getIReg64(rs2))));
+      DIP("%s %s, %s, %s\n", is_log ? "srlw" : "sraw", nameIReg(rd),
+          nameIReg(rs1), nameIReg(rs2));
+      return True;
+   }
+
+   return False;
+}
+
+static Bool dis_RV64M(/*MB_OUT*/ DisResult* dres,
+                      /*OUT*/ IRSB*         irsb,
+                      UInt                  insn)
+{
+   /* -------------- RV64M standard extension --------------- */
+
+   /* -------- {mul,mulh,mulhsu,mulhu} rd, rs1, rs2 --------- */
+   /* --------------- {div,divu} rd, rs1, rs2 --------------- */
+   /* --------------- {rem,remu} rd, rs1, rs2 --------------- */
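+   /* mulh/mulhu take the upper half of a widening 128-bit multiply;
+      rem/remu use Iop_DivModS64to64/U64to64, which returns the quotient in
+      the lower and the remainder in the upper 64 bits of an I128. */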
+   if (INSN(6, 0) == 0b0110011 && INSN(31, 25) == 0b0000001) {
+      UInt rd     = INSN(11, 7);
+      UInt funct3 = INSN(14, 12);
+      UInt rs1    = INSN(19, 15);
+      UInt rs2    = INSN(24, 20);
+      if (funct3 == 0b010) {
+         /* MULHSU, not currently handled, fall through. */
+      } else {
+         if (rd != 0) {
+            IRExpr* expr;
+            switch (funct3) {
+            case 0b000:
+               expr = binop(Iop_Mul64, getIReg64(rs1), getIReg64(rs2));
+               break;
+            case 0b001:
+               expr = unop(Iop_128HIto64,
+                           binop(Iop_MullS64, getIReg64(rs1), getIReg64(rs2)));
+               break;
+            case 0b011:
+               expr = unop(Iop_128HIto64,
+                           binop(Iop_MullU64, getIReg64(rs1), getIReg64(rs2)));
+               break;
+            case 0b100:
+               expr = binop(Iop_DivS64, getIReg64(rs1), getIReg64(rs2));
+               break;
+            case 0b101:
+               expr = binop(Iop_DivU64, getIReg64(rs1), getIReg64(rs2));
+               break;
+            case 0b110:
+               expr =
+                  unop(Iop_128HIto64, binop(Iop_DivModS64to64, getIReg64(rs1),
+                                            getIReg64(rs2)));
+               break;
+            case 0b111:
+               expr =
+                  unop(Iop_128HIto64, binop(Iop_DivModU64to64, getIReg64(rs1),
+                                            getIReg64(rs2)));
+               break;
+            default:
+               vassert(0);
+            }
+            putIReg64(irsb, rd, expr);
+         }
+         const HChar* name;
+         switch (funct3) {
+         case 0b000:
+            name = "mul";
+            break;
+         case 0b001:
+            name = "mulh";
+            break;
+         case 0b011:
+            name = "mulhu";
+            break;
+         case 0b100:
+            name = "div";
+            break;
+         case 0b101:
+            name = "divu";
+            break;
+         case 0b110:
+            name = "rem";
+            break;
+         case 0b111:
+            name = "remu";
+            break;
+         default:
+            vassert(0);
+         }
+         DIP("%s %s, %s, %s\n", name, nameIReg(rd), nameIReg(rs1),
+             nameIReg(rs2));
+         return True;
+      }
+   }
+
+   /* ------------------ mulw rd, rs1, rs2 ------------------ */
+   /* -------------- {divw,divuw} rd, rs1, rs2 -------------- */
+   /* -------------- {remw,remuw} rd, rs1, rs2 -------------- */
+   if (INSN(6, 0) == 0b0111011 && INSN(31, 25) == 0b0000001) {
+      UInt rd     = INSN(11, 7);
+      UInt funct3 = INSN(14, 12);
+      UInt rs1    = INSN(19, 15);
+      UInt rs2    = INSN(24, 20);
+      if (funct3 == 0b001 || funct3 == 0b010 || funct3 == 0b011) {
+         /* Invalid {MUL,DIV,REM}<x>W, fall through. */
+      } else {
+         if (rd != 0) {
+            IRExpr* expr;
+            switch (funct3) {
+            case 0b000:
+               expr = binop(Iop_Mul32, getIReg32(rs1), getIReg32(rs2));
+               break;
+            case 0b100:
+               expr = binop(Iop_DivS32, getIReg32(rs1), getIReg32(rs2));
+               break;
+            case 0b101:
+               expr = binop(Iop_DivU32, getIReg32(rs1), getIReg32(rs2));
+               break;
+            case 0b110:
+               expr = unop(Iop_64HIto32, binop(Iop_DivModS32to32,
+                                               getIReg32(rs1), getIReg32(rs2)));
+               break;
+            case 0b111:
+               expr = unop(Iop_64HIto32, binop(Iop_DivModU32to32,
+                                               getIReg32(rs1), getIReg32(rs2)));
+               break;
+            default:
+               vassert(0);
+            }
+            putIReg32(irsb, rd, expr);
+         }
+         const HChar* name;
+         switch (funct3) {
+         case 0b000:
+            name = "mulw";
+            break;
+         case 0b100:
+            name = "divw";
+            break;
+         case 0b101:
+            name = "divuw";
+            break;
+         case 0b110:
+            name = "remw";
+            break;
+         case 0b111:
+            name = "remuw";
+            break;
+         default:
+            vassert(0);
+         }
+         DIP("%s %s, %s, %s\n", name, nameIReg(rd), nameIReg(rs1),
+             nameIReg(rs2));
+         return True;
+      }
+   }
+
+   return False;
+}
+
+static Bool dis_RV64A(/*MB_OUT*/ DisResult* dres,
+                      /*OUT*/ IRSB*         irsb,
+                      UInt                  insn,
+                      Addr                  guest_pc_curr_instr,
+                      const VexAbiInfo*     abiinfo)
+{
+   /* -------------- RV64A standard extension --------------- */
+
+   /* ----------------- lr.{w,d} rd, (rs1) ------------------ */
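+   /* In fallback LL/SC mode the reservation is modelled in the guest
+      state: the load records its address, value and size, and the matching
+      SC re-checks them and commits via a CAS instead of relying on host
+      LL/SC support. */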
+   if (INSN(6, 0) == 0b0101111 && INSN(14, 13) == 0b01 &&
+       INSN(24, 20) == 0b00000 && INSN(31, 27) == 0b00010) {
+      UInt rd    = INSN(11, 7);
+      Bool is_32 = INSN(12, 12) == 0b0;
+      UInt rs1   = INSN(19, 15);
+      UInt aqrl  = INSN(26, 25);
+
+      if (aqrl & 0x1)
+         stmt(irsb, IRStmt_MBE(Imbe_Fence));
+
+      IRType ty = is_32 ? Ity_I32 : Ity_I64;
+      if (abiinfo->guest__use_fallback_LLSC) {
+         /* Get address of the load. */
+         IRTemp ea = newTemp(irsb, Ity_I64);
+         assign(irsb, ea, getIReg64(rs1));
+
+         /* Load the value. */
+         IRTemp res = newTemp(irsb, Ity_I64);
+         assign(irsb, res, widenSto64(ty, loadLE(ty, mkexpr(ea))));
+
+         /* Set up the LLSC fallback data. */
+         stmt(irsb, IRStmt_Put(OFFB_LLSC_DATA, mkexpr(res)));
+         stmt(irsb, IRStmt_Put(OFFB_LLSC_ADDR, mkexpr(ea)));
+         stmt(irsb, IRStmt_Put(OFFB_LLSC_SIZE, mkU64(is_32 ? 4 : 8)));
+
+         /* Write the result to the destination register. */
+         if (rd != 0)
+            putIReg64(irsb, rd, mkexpr(res));
+      } else {
+         /* TODO Rework the non-fallback mode by recognizing common LR+SC
+            sequences and simulating them as one. */
+         IRTemp res = newTemp(irsb, ty);
+         stmt(irsb, IRStmt_LLSC(Iend_LE, res, getIReg64(rs1), NULL /*LL*/));
+         if (rd != 0)
+            putIReg64(irsb, rd, widenSto64(ty, mkexpr(res)));
+      }
+
+      if (aqrl & 0x2)
+         stmt(irsb, IRStmt_MBE(Imbe_Fence));
+
+      DIP("lr.%s%s %s, (%s)%s\n", is_32 ? "w" : "d", nameAqRlSuffix(aqrl),
+          nameIReg(rd), nameIReg(rs1),
+          abiinfo->guest__use_fallback_LLSC ? " (fallback implementation)"
+                                            : "");
+      return True;
+   }
+
+   /* --------------- sc.{w,d} rd, rs2, (rs1) --------------- */
+   if (INSN(6, 0) == 0b0101111 && INSN(14, 13) == 0b01 &&
+       INSN(31, 27) == 0b00011) {
+      UInt rd    = INSN(11, 7);
+      Bool is_32 = INSN(12, 12) == 0b0;
+      UInt rs1   = INSN(19, 15);
+      UInt rs2   = INSN(24, 20);
+      UInt aqrl  = INSN(26, 25);
+
+      if (aqrl & 0x1)
+         stmt(irsb, IRStmt_MBE(Imbe_Fence));
+
+      IRType ty = is_32 ? Ity_I32 : Ity_I64;
+      if (abiinfo->guest__use_fallback_LLSC) {
+         /* Get address of the load. */
+         IRTemp ea = newTemp(irsb, Ity_I64);
+         assign(irsb, ea, getIReg64(rs1));
+
+         /* Get the continuation address. */
+         IRConst* nia = IRConst_U64(guest_pc_curr_instr + 4);
+
+         /* Mark the SC initially as failed. */
+         if (rd != 0)
+            putIReg64(irsb, rd, mkU64(1));
+
+         /* Set that no transaction is in progress. */
+         IRTemp size = newTemp(irsb, Ity_I64);
+         assign(irsb, size, IRExpr_Get(OFFB_LLSC_SIZE, Ity_I64));
+         stmt(irsb,
+              IRStmt_Put(OFFB_LLSC_SIZE, mkU64(0) /* "no transaction" */));
+
+         /* Fail if no or wrong-size transaction. */
+         stmt(irsb, IRStmt_Exit(binop(Iop_CmpNE64, mkexpr(size),
+                                      mkU64(is_32 ? 4 : 8)),
+                                Ijk_Boring, nia, OFFB_PC));
+
+         /* Fail if the address doesn't match the LL address. */
+         stmt(irsb, IRStmt_Exit(binop(Iop_CmpNE64, mkexpr(ea),
+                                      IRExpr_Get(OFFB_LLSC_ADDR, Ity_I64)),
+                                Ijk_Boring, nia, OFFB_PC));
+
+         /* Fail if the data doesn't match the LL data. */
+         IRTemp data = newTemp(irsb, Ity_I64);
+         assign(irsb, data, IRExpr_Get(OFFB_LLSC_DATA, Ity_I64));
+         stmt(irsb, IRStmt_Exit(binop(Iop_CmpNE64,
+                                      widenSto64(ty, loadLE(ty, mkexpr(ea))),
+                                      mkexpr(data)),
+                                Ijk_Boring, nia, OFFB_PC));
+
+         /* Try to CAS the new value in. */
+         IRTemp old  = newTemp(irsb, ty);
+         IRTemp expd = newTemp(irsb, ty);
+         assign(irsb, expd, narrowFrom64(ty, mkexpr(data)));
+         stmt(irsb, IRStmt_CAS(mkIRCAS(
+                       /*oldHi*/ IRTemp_INVALID, old, Iend_LE, mkexpr(ea),
+                       /*expdHi*/ NULL, mkexpr(expd),
+                       /*dataHi*/ NULL, narrowFrom64(ty, getIReg64(rs2)))));
+
+         /* Fail if the CAS failed (old != expd). */
+         stmt(irsb, IRStmt_Exit(binop(is_32 ? Iop_CmpNE32 : Iop_CmpNE64,
+                                      mkexpr(old), mkexpr(expd)),
+                                Ijk_Boring, nia, OFFB_PC));
+
+         /* Otherwise mark the operation as successful. */
+         if (rd != 0)
+            putIReg64(irsb, rd, mkU64(0));
+      } else {
+         IRTemp res = newTemp(irsb, Ity_I1);
+         stmt(irsb, IRStmt_LLSC(Iend_LE, res, getIReg64(rs1),
+                                narrowFrom64(ty, getIReg64(rs2))));
+         /* IR semantics: res is 1 if store succeeds, 0 if it fails. Need to set
+            rd to 1 on failure, 0 on success. */
+         if (rd != 0)
+            putIReg64(
+               irsb, rd,
+               binop(Iop_Xor64, unop(Iop_1Uto64, mkexpr(res)), mkU64(1)));
+      }
+
+      if (aqrl & 0x2)
+         stmt(irsb, IRStmt_MBE(Imbe_Fence));
+
+      DIP("sc.%s%s %s, %s, (%s)%s\n", is_32 ? "w" : "d", nameAqRlSuffix(aqrl),
+          nameIReg(rd), nameIReg(rs2), nameIReg(rs1),
+          abiinfo->guest__use_fallback_LLSC ? " (fallback implementation)"
+                                            : "");
+      return True;
+   }
+
+   /* --------- amo{swap,add}.{w,d} rd, rs2, (rs1) ---------- */
+   /* -------- amo{xor,and,or}.{w,d} rd, rs2, (rs1) --------- */
+   /* ---------- amo{min,max}.{w,d} rd, rs2, (rs1) ---------- */
+   /* --------- amo{minu,maxu}.{w,d} rd, rs2, (rs1) --------- */
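+   /* AMOs are emulated as load, compute, then a CAS that only commits if
+      memory still holds the originally loaded value; if the CAS fails, the
+      guest PC is rewound to this instruction so the sequence retries. */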
+   if (INSN(6, 0) == 0b0101111 && INSN(14, 13) == 0b01) {
+      UInt rd     = INSN(11, 7);
+      Bool is_32  = INSN(12, 12) == 0b0;
+      UInt rs1    = INSN(19, 15);
+      UInt rs2    = INSN(24, 20);
+      UInt aqrl   = INSN(26, 25);
+      UInt funct5 = INSN(31, 27);
+      if ((funct5 & 0b00010) || funct5 == 0b00101 || funct5 == 0b01001 ||
+          funct5 == 0b01101 || funct5 == 0b10001 || funct5 == 0b10101 ||
+          funct5 == 0b11001 || funct5 == 0b11101) {
+         /* Invalid AMO<x>, fall through. */
+      } else {
+         if (aqrl & 0x1)
+            stmt(irsb, IRStmt_MBE(Imbe_Fence));
+
+         IRTemp addr = newTemp(irsb, Ity_I64);
+         assign(irsb, addr, getIReg64(rs1));
+
+         IRType ty   = is_32 ? Ity_I32 : Ity_I64;
+         IRTemp orig = newTemp(irsb, ty);
+         assign(irsb, orig, loadLE(ty, mkexpr(addr)));
+         IRExpr* lhs = mkexpr(orig);
+         IRExpr* rhs = narrowFrom64(ty, getIReg64(rs2));
+
+         /* Perform the operation. */
+         const HChar* name;
+         IRExpr*      res;
+         switch (funct5) {
+         case 0b00001:
+            name = "amoswap";
+            res  = rhs;
+            break;
+         case 0b00000:
+            name = "amoadd";
+            res  = binop(is_32 ? Iop_Add32 : Iop_Add64, lhs, rhs);
+            break;
+         case 0b00100:
+            name = "amoxor";
+            res  = binop(is_32 ? Iop_Xor32 : Iop_Xor64, lhs, rhs);
+            break;
+         case 0b01100:
+            name = "amoand";
+            res  = binop(is_32 ? Iop_And32 : Iop_And64, lhs, rhs);
+            break;
+         case 0b01000:
+            name = "amoor";
+            res  = binop(is_32 ? Iop_Or32 : Iop_Or64, lhs, rhs);
+            break;
+         case 0b10000:
+            name = "amomin";
+            res  = IRExpr_ITE(
+                binop(is_32 ? Iop_CmpLT32S : Iop_CmpLT64S, lhs, rhs), lhs, rhs);
+            break;
+         case 0b10100:
+            name = "amomax";
+            res  = IRExpr_ITE(
+                binop(is_32 ? Iop_CmpLT32S : Iop_CmpLT64S, lhs, rhs), rhs, lhs);
+            break;
+         case 0b11000:
+            name = "amominu";
+            res  = IRExpr_ITE(
+                binop(is_32 ? Iop_CmpLT32U : Iop_CmpLT64U, lhs, rhs), lhs, rhs);
+            break;
+         case 0b11100:
+            name = "amomaxu";
+            res  = IRExpr_ITE(
+                binop(is_32 ? Iop_CmpLT32U : Iop_CmpLT64U, lhs, rhs), rhs, lhs);
+            break;
+         default:
+            vassert(0);
+         }
+
+         /* Store the result back if the original value remains unchanged in
+            memory. */
+         IRTemp old = newTemp(irsb, ty);
+         stmt(irsb, IRStmt_CAS(mkIRCAS(/*oldHi*/ IRTemp_INVALID, old, Iend_LE,
+                                       mkexpr(addr),
+                                       /*expdHi*/ NULL, mkexpr(orig),
+                                       /*dataHi*/ NULL, res)));
+
+         if (aqrl & 0x2)
+            stmt(irsb, IRStmt_MBE(Imbe_Fence));
+
+         /* Retry if the CAS failed (i.e. when old != orig). */
+         stmt(irsb, IRStmt_Exit(binop(is_32 ? Iop_CasCmpNE32 : Iop_CasCmpNE64,
+                                      mkexpr(old), mkexpr(orig)),
+                                Ijk_Boring, IRConst_U64(guest_pc_curr_instr),
+                                OFFB_PC));
+         /* Otherwise we succeeded. */
+         if (rd != 0)
+            putIReg64(irsb, rd, widenSto64(ty, mkexpr(old)));
+
+         DIP("%s.%s%s %s, %s, (%s)\n", name, is_32 ? "w" : "d",
+             nameAqRlSuffix(aqrl), nameIReg(rd), nameIReg(rs2), nameIReg(rs1));
+         return True;
+      }
+   }
+
+   return False;
+}
+
+static Bool dis_RV64F(/*MB_OUT*/ DisResult* dres,
+                      /*OUT*/ IRSB*         irsb,
+                      UInt                  insn)
+{
+   /* -------------- RV64F standard extension --------------- */
+
+   /* --------------- flw rd, imm[11:0](rs1) ---------------- */
+   if (INSN(6, 0) == 0b0000111 && INSN(14, 12) == 0b010) {
+      UInt  rd      = INSN(11, 7);
+      UInt  rs1     = INSN(19, 15);
+      UInt  imm11_0 = INSN(31, 20);
+      ULong simm    = vex_sx_to_64(imm11_0, 12);
+      putFReg32(irsb, rd,
+                loadLE(Ity_F32, binop(Iop_Add64, getIReg64(rs1), mkU64(simm))));
+      DIP("flw %s, %lld(%s)\n", nameFReg(rd), (Long)simm, nameIReg(rs1));
+      return True;
+   }
+
+   /* --------------- fsw rs2, imm[11:0](rs1) --------------- */
+   if (INSN(6, 0) == 0b0100111 && INSN(14, 12) == 0b010) {
+      UInt  rs1     = INSN(19, 15);
+      UInt  rs2     = INSN(24, 20);
+      UInt  imm11_0 = INSN(31, 25) << 5 | INSN(11, 7);
+      ULong simm    = vex_sx_to_64(imm11_0, 12);
+      /* Do not modify the bits being transferred. */
+      IRExpr* f64 = getFReg64(rs2);
+      IRExpr* i32 = unop(Iop_64to32, unop(Iop_ReinterpF64asI64, f64));
+      storeLE(irsb, binop(Iop_Add64, getIReg64(rs1), mkU64(simm)), i32);
+      DIP("fsw %s, %lld(%s)\n", nameFReg(rs2), (Long)simm, nameIReg(rs1));
+      return True;
+   }
+
+   /* -------- f{madd,msub}.s rd, rs1, rs2, rs3, rm --------- */
+   /* ------- f{nmsub,nmadd}.s rd, rs1, rs2, rs3, rm -------- */
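+   /* All four fused forms map onto a single Iop_MAddF32 by negating the
+      right inputs: fmsub negates the addend, fnmsub negates the product
+      (via rs1), and fnmadd negates both. */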
+   if (INSN(1, 0) == 0b11 && INSN(6, 4) == 0b100 && INSN(26, 25) == 0b00) {
+      UInt   opcode = INSN(6, 0);
+      UInt   rd     = INSN(11, 7);
+      UInt   rm     = INSN(14, 12);
+      UInt   rs1    = INSN(19, 15);
+      UInt   rs2    = INSN(24, 20);
+      UInt   rs3    = INSN(31, 27);
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      const HChar* name;
+      IRTemp       a1 = newTemp(irsb, Ity_F32);
+      IRTemp       a2 = newTemp(irsb, Ity_F32);
+      IRTemp       a3 = newTemp(irsb, Ity_F32);
+      switch (opcode) {
+      case 0b1000011:
+         name = "fmadd";
+         assign(irsb, a1, getFReg32(rs1));
+         assign(irsb, a2, getFReg32(rs2));
+         assign(irsb, a3, getFReg32(rs3));
+         break;
+      case 0b1000111:
+         name = "fmsub";
+         assign(irsb, a1, getFReg32(rs1));
+         assign(irsb, a2, getFReg32(rs2));
+         assign(irsb, a3, unop(Iop_NegF32, getFReg32(rs3)));
+         break;
+      case 0b1001011:
+         name = "fnmsub";
+         assign(irsb, a1, unop(Iop_NegF32, getFReg32(rs1)));
+         assign(irsb, a2, getFReg32(rs2));
+         assign(irsb, a3, getFReg32(rs3));
+         break;
+      case 0b1001111:
+         name = "fnmadd";
+         assign(irsb, a1, unop(Iop_NegF32, getFReg32(rs1)));
+         assign(irsb, a2, getFReg32(rs2));
+         assign(irsb, a3, unop(Iop_NegF32, getFReg32(rs3)));
+         break;
+      default:
+         vassert(0);
+      }
+      putFReg32(
+         irsb, rd,
+         qop(Iop_MAddF32, mkexpr(rm_IR), mkexpr(a1), mkexpr(a2), mkexpr(a3)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             "riscv64g_calculate_fflags_fmadd_s",
+                             riscv64g_calculate_fflags_fmadd_s,
+                             mkIRExprVec_4(mkexpr(a1), mkexpr(a2), mkexpr(a3),
+                                           mkexpr(rm_RISCV))));
+      DIP("%s.s %s, %s, %s, %s%s\n", name, nameFReg(rd), nameFReg(rs1),
+          nameFReg(rs2), nameFReg(rs3), nameRMOperand(rm));
+      return True;
+   }
+
+   /* ------------ f{add,sub}.s rd, rs1, rs2, rm ------------ */
+   /* ------------ f{mul,div}.s rd, rs1, rs2, rm ------------ */
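+   /* VEX floating-point IROps do not expose the IEEE exception flags, so a
+      clean helper recomputes the fflags for each operation;
+      accumulateFFLAGS (defined earlier in this file) is assumed to OR the
+      result into the guest fcsr. fsub is realised as an addition of the
+      negated rs2. */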
+   if (INSN(6, 0) == 0b1010011 && INSN(26, 25) == 0b00 &&
+       INSN(31, 29) == 0b000) {
+      UInt   rd     = INSN(11, 7);
+      UInt   rm     = INSN(14, 12);
+      UInt   rs1    = INSN(19, 15);
+      UInt   rs2    = INSN(24, 20);
+      UInt   funct7 = INSN(31, 25);
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      const HChar* name;
+      IROp         op;
+      IRTemp       a1 = newTemp(irsb, Ity_F32);
+      IRTemp       a2 = newTemp(irsb, Ity_F32);
+      const HChar* helper_name;
+      void*        helper_addr;
+      switch (funct7) {
+      case 0b0000000:
+         name = "fadd";
+         op   = Iop_AddF32;
+         assign(irsb, a1, getFReg32(rs1));
+         assign(irsb, a2, getFReg32(rs2));
+         helper_name = "riscv64g_calculate_fflags_fadd_s";
+         helper_addr = riscv64g_calculate_fflags_fadd_s;
+         break;
+      case 0b0000100:
+         name = "fsub";
+         op   = Iop_AddF32;
+         assign(irsb, a1, getFReg32(rs1));
+         assign(irsb, a2, unop(Iop_NegF32, getFReg32(rs2)));
+         helper_name = "riscv64g_calculate_fflags_fadd_s";
+         helper_addr = riscv64g_calculate_fflags_fadd_s;
+         break;
+      case 0b0001000:
+         name = "fmul";
+         op   = Iop_MulF32;
+         assign(irsb, a1, getFReg32(rs1));
+         assign(irsb, a2, getFReg32(rs2));
+         helper_name = "riscv64g_calculate_fflags_fmul_s";
+         helper_addr = riscv64g_calculate_fflags_fmul_s;
+         break;
+      case 0b0001100:
+         name = "fdiv";
+         op   = Iop_DivF32;
+         assign(irsb, a1, getFReg32(rs1));
+         assign(irsb, a2, getFReg32(rs2));
+         helper_name = "riscv64g_calculate_fflags_fdiv_s";
+         helper_addr = riscv64g_calculate_fflags_fdiv_s;
+         break;
+      default:
+         vassert(0);
+      }
+      putFReg32(irsb, rd, triop(op, mkexpr(rm_IR), mkexpr(a1), mkexpr(a2)));
+      accumulateFFLAGS(irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/, helper_name,
+                                           helper_addr,
+                                           mkIRExprVec_3(mkexpr(a1), mkexpr(a2),
+                                                         mkexpr(rm_RISCV))));
+      DIP("%s.s %s, %s, %s%s\n", name, nameFReg(rd), nameFReg(rs1),
+          nameFReg(rs2), nameRMOperand(rm));
+      return True;
+   }
+
+   /* ----------------- fsqrt.s rd, rs1, rm ----------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 20) == 0b00000 &&
+       INSN(31, 25) == 0b0101100) {
+      UInt   rd  = INSN(11, 7);
+      UInt   rm  = INSN(14, 12);
+      UInt   rs1 = INSN(19, 15);
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_F32);
+      assign(irsb, a1, getFReg32(rs1));
+      putFReg32(irsb, rd, binop(Iop_SqrtF32, mkexpr(rm_IR), mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             "riscv64g_calculate_fflags_fsqrt_s",
+                             riscv64g_calculate_fflags_fsqrt_s,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fsqrt.s %s, %s%s\n", nameFReg(rd), nameFReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   /* ---------------- fsgnj.s rd, rs1, rs2 ----------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b000 &&
+       INSN(31, 25) == 0b0010000) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rs1 == rs2) {
+         putFReg32(irsb, rd, getFReg32(rs1));
+         DIP("fmv.s %s, %s\n", nameFReg(rd), nameIReg(rs1));
+      } else {
+         putFReg32(
+            irsb, rd,
+            unop(Iop_ReinterpI32asF32,
+                 binop(
+                    Iop_Or32,
+                    binop(Iop_And32, unop(Iop_ReinterpF32asI32, getFReg32(rs1)),
+                          mkU32(0x7fffffff)),
+                    binop(Iop_And32, unop(Iop_ReinterpF32asI32, getFReg32(rs2)),
+                          mkU32(0x80000000)))));
+         DIP("fsgnj.s %s, %s, %s\n", nameFReg(rd), nameIReg(rs1),
+             nameIReg(rs2));
+      }
+      return True;
+   }
+
+   /* ---------------- fsgnjn.s rd, rs1, rs2 ---------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b001 &&
+       INSN(31, 25) == 0b0010000) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rs1 == rs2) {
+         putFReg32(irsb, rd, unop(Iop_NegF32, getFReg32(rs1)));
+         DIP("fneg.s %s, %s\n", nameFReg(rd), nameIReg(rs1));
+      } else {
+         putFReg32(irsb, rd,
+                   unop(Iop_ReinterpI32asF32,
+                        binop(Iop_Or32,
+                              binop(Iop_And32,
+                                    unop(Iop_ReinterpF32asI32, getFReg32(rs1)),
+                                    mkU32(0x7fffffff)),
+                              binop(Iop_And32,
+                                    unop(Iop_ReinterpF32asI32,
+                                         unop(Iop_NegF32, getFReg32(rs2))),
+                                    mkU32(0x80000000)))));
+         DIP("fsgnjn.s %s, %s, %s\n", nameFReg(rd), nameIReg(rs1),
+             nameIReg(rs2));
+      }
+      return True;
+   }
+
+   /* ---------------- fsgnjx.s rd, rs1, rs2 ---------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b010 &&
+       INSN(31, 25) == 0b0010000) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rs1 == rs2) {
+         putFReg32(irsb, rd, unop(Iop_AbsF32, getFReg32(rs1)));
+         DIP("fabs.s %s, %s\n", nameFReg(rd), nameIReg(rs1));
+      } else {
+         putFReg32(
+            irsb, rd,
+            unop(Iop_ReinterpI32asF32,
+                 binop(Iop_Xor32, unop(Iop_ReinterpF32asI32, getFReg32(rs1)),
+                       binop(Iop_And32,
+                             unop(Iop_ReinterpF32asI32, getFReg32(rs2)),
+                             mkU32(0x80000000)))));
+         DIP("fsgnjx.s %s, %s, %s\n", nameFReg(rd), nameIReg(rs1),
+             nameIReg(rs2));
+      }
+      return True;
+   }
+
+   /* -------------- f{min,max}.s rd, rs1, rs2 -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(31, 25) == 0b0010100) {
+      UInt rd  = INSN(11, 7);
+      UInt rm  = INSN(14, 12);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rm != 0b000 && rm != 0b001) {
+         /* Invalid F{MIN,MAX}.S, fall through. */
+      } else {
+         const HChar* name;
+         IROp         op;
+         const HChar* helper_name;
+         void*        helper_addr;
+         switch (rm) {
+         case 0b000:
+            name        = "fmin";
+            op          = Iop_MinNumF32;
+            helper_name = "riscv64g_calculate_fflags_fmin_s";
+            helper_addr = riscv64g_calculate_fflags_fmin_s;
+            break;
+         case 0b001:
+            name        = "fmax";
+            op          = Iop_MaxNumF32;
+            helper_name = "riscv64g_calculate_fflags_fmax_s";
+            helper_addr = riscv64g_calculate_fflags_fmax_s;
+            break;
+         default:
+            vassert(0);
+         }
+         IRTemp a1 = newTemp(irsb, Ity_F32);
+         IRTemp a2 = newTemp(irsb, Ity_F32);
+         assign(irsb, a1, getFReg32(rs1));
+         assign(irsb, a2, getFReg32(rs2));
+         putFReg32(irsb, rd, binop(op, mkexpr(a1), mkexpr(a2)));
+         accumulateFFLAGS(irsb,
+                          mkIRExprCCall(Ity_I32, 0 /*regparms*/, helper_name,
+                                        helper_addr,
+                                        mkIRExprVec_2(mkexpr(a1), mkexpr(a2))));
+         DIP("%s.s %s, %s, %s\n", name, nameFReg(rd), nameFReg(rs1),
+             nameFReg(rs2));
+         return True;
+      }
+   }
+
+   /* -------------- fcvt.{w,wu}.s rd, rs1, rm -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 21) == 0b0000 &&
+       INSN(31, 25) == 0b1100000) {
+      UInt   rd        = INSN(11, 7);
+      UInt   rm        = INSN(14, 12);
+      UInt   rs1       = INSN(19, 15);
+      Bool   is_signed = INSN(20, 20) == 0b0;
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_F32);
+      assign(irsb, a1, getFReg32(rs1));
+      if (rd != 0)
+         putIReg32(irsb, rd,
+                   binop(is_signed ? Iop_F32toI32S : Iop_F32toI32U,
+                         mkexpr(rm_IR), mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             is_signed ? "riscv64g_calculate_fflags_fcvt_w_s"
+                                       : "riscv64g_calculate_fflags_fcvt_wu_s",
+                             is_signed ? riscv64g_calculate_fflags_fcvt_w_s
+                                       : riscv64g_calculate_fflags_fcvt_wu_s,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fcvt.w%s.s %s, %s%s\n", is_signed ? "" : "u", nameIReg(rd),
+          nameFReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   /* ------------------- fmv.x.w rd, rs1 ------------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b000 &&
+       INSN(24, 20) == 0b00000 && INSN(31, 25) == 0b1110000) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      if (rd != 0) {
+         /* For RV64, the higher 32 bits of the destination register are
+            filled with copies of the floating-point number's sign bit. */
+         IRExpr* freg      = getFReg64(rs1);
+         IRExpr* low_half  = unop(Iop_64to32, unop(Iop_ReinterpF64asI64, freg));
+         IRExpr* sign      = binop(Iop_And32, low_half, mkU32(1u << 31));
+         IRExpr* cond      = binop(Iop_CmpEQ32, sign, mkU32(1u << 31));
+         IRExpr* high_part = IRExpr_ITE(cond, mkU32(0xffffffff), mkU32(0));
+         putIReg64(irsb, rd, binop(Iop_32HLto64, high_part, low_half));
+      }
+      DIP("fmv.x.w %s, %s\n", nameIReg(rd), nameFReg(rs1));
+      return True;
+   }
+
+   /* ------------- f{eq,lt,le}.s rd, rs1, rs2 -------------- */
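+   /* Iop_CmpF32 returns an Ircr_* result code (EQ/LT/GT/UN). fle is the
+      disjunction of the LT and EQ outcomes, so unordered operands compare
+      as false for all three instructions, as the F extension requires. */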
+   if (INSN(6, 0) == 0b1010011 && INSN(31, 25) == 0b1010000) {
+      UInt rd  = INSN(11, 7);
+      UInt rm  = INSN(14, 12);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rm != 0b010 && rm != 0b001 && rm != 0b000) {
+         /* Invalid F{EQ,LT,LE}.S, fall through. */
+      } else {
+         IRTemp a1 = newTemp(irsb, Ity_F32);
+         IRTemp a2 = newTemp(irsb, Ity_F32);
+         assign(irsb, a1, getFReg32(rs1));
+         assign(irsb, a2, getFReg32(rs2));
+         if (rd != 0) {
+            IRTemp cmp = newTemp(irsb, Ity_I32);
+            assign(irsb, cmp, binop(Iop_CmpF32, mkexpr(a1), mkexpr(a2)));
+            IRTemp res = newTemp(irsb, Ity_I1);
+            switch (rm) {
+            case 0b010:
+               assign(irsb, res,
+                      binop(Iop_CmpEQ32, mkexpr(cmp), mkU32(Ircr_EQ)));
+               break;
+            case 0b001:
+               assign(irsb, res,
+                      binop(Iop_CmpEQ32, mkexpr(cmp), mkU32(Ircr_LT)));
+               break;
+            case 0b000:
+               assign(irsb, res,
+                      binop(Iop_Or1,
+                            binop(Iop_CmpEQ32, mkexpr(cmp), mkU32(Ircr_LT)),
+                            binop(Iop_CmpEQ32, mkexpr(cmp), mkU32(Ircr_EQ))));
+               break;
+            default:
+               vassert(0);
+            }
+            putIReg64(irsb, rd, unop(Iop_1Uto64, mkexpr(res)));
+         }
+         const HChar* name;
+         const HChar* helper_name;
+         void*        helper_addr;
+         switch (rm) {
+         case 0b010:
+            name        = "feq";
+            helper_name = "riscv64g_calculate_fflags_feq_s";
+            helper_addr = riscv64g_calculate_fflags_feq_s;
+            break;
+         case 0b001:
+            name        = "flt";
+            helper_name = "riscv64g_calculate_fflags_flt_s";
+            helper_addr = riscv64g_calculate_fflags_flt_s;
+            break;
+         case 0b000:
+            name        = "fle";
+            helper_name = "riscv64g_calculate_fflags_fle_s";
+            helper_addr = riscv64g_calculate_fflags_fle_s;
+            break;
+         default:
+            vassert(0);
+         }
+         accumulateFFLAGS(irsb,
+                          mkIRExprCCall(Ity_I32, 0 /*regparms*/, helper_name,
+                                        helper_addr,
+                                        mkIRExprVec_2(mkexpr(a1), mkexpr(a2))));
+         DIP("%s.s %s, %s, %s\n", name, nameIReg(rd), nameFReg(rs1),
+             nameFReg(rs2));
+         return True;
+      }
+   }
+
+   /* ------------------ fclass.s rd, rs1 ------------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b001 &&
+       INSN(24, 20) == 0b00000 && INSN(31, 25) == 0b1110000) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      if (rd != 0)
+         putIReg64(irsb, rd,
+                   mkIRExprCCall(Ity_I64, 0 /*regparms*/,
+                                 "riscv64g_calculate_fclass_s",
+                                 riscv64g_calculate_fclass_s,
+                                 mkIRExprVec_1(getFReg32(rs1))));
+      DIP("fclass.s %s, %s\n", nameIReg(rd), nameFReg(rs1));
+      return True;
+   }
+
+   /* ------------------- fmv.w.x rd, rs1 ------------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b000 &&
+       INSN(24, 20) == 0b00000 && INSN(31, 25) == 0b1111000) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      putFReg32(irsb, rd, unop(Iop_ReinterpI32asF32, getIReg32(rs1)));
+      DIP("fmv.w.x %s, %s\n", nameFReg(rd), nameIReg(rs1));
+      return True;
+   }
+
+   /* -------------- fcvt.s.{w,wu} rd, rs1, rm -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 21) == 0b0000 &&
+       INSN(31, 25) == 0b1101000) {
+      UInt   rd        = INSN(11, 7);
+      UInt   rm        = INSN(14, 12);
+      UInt   rs1       = INSN(19, 15);
+      Bool   is_signed = INSN(20, 20) == 0b0;
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_I32);
+      assign(irsb, a1, getIReg32(rs1));
+      putFReg32(irsb, rd,
+                binop(is_signed ? Iop_I32StoF32 : Iop_I32UtoF32, mkexpr(rm_IR),
+                      mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             is_signed ? "riscv64g_calculate_fflags_fcvt_s_w"
+                                       : "riscv64g_calculate_fflags_fcvt_s_wu",
+                             is_signed ? riscv64g_calculate_fflags_fcvt_s_w
+                                       : riscv64g_calculate_fflags_fcvt_s_wu,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fcvt.s.w%s %s, %s%s\n", is_signed ? "" : "u", nameFReg(rd),
+          nameIReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   /* -------------- fcvt.{l,lu}.s rd, rs1, rm -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 21) == 0b0001 &&
+       INSN(31, 25) == 0b1100000) {
+      UInt   rd        = INSN(11, 7);
+      UInt   rm        = INSN(14, 12);
+      UInt   rs1       = INSN(19, 15);
+      Bool   is_signed = INSN(20, 20) == 0b0;
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_F32);
+      assign(irsb, a1, getFReg32(rs1));
+      if (rd != 0)
+         putIReg64(irsb, rd,
+                   binop(is_signed ? Iop_F32toI64S : Iop_F32toI64U,
+                         mkexpr(rm_IR), mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             is_signed ? "riscv64g_calculate_fflags_fcvt_l_s"
+                                       : "riscv64g_calculate_fflags_fcvt_lu_s",
+                             is_signed ? riscv64g_calculate_fflags_fcvt_l_s
+                                       : riscv64g_calculate_fflags_fcvt_lu_s,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fcvt.l%s.s %s, %s%s\n", is_signed ? "" : "u", nameIReg(rd),
+          nameFReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   /* -------------- fcvt.s.{l,lu} rd, rs1, rm -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 21) == 0b0001 &&
+       INSN(31, 25) == 0b1101000) {
+      UInt   rd        = INSN(11, 7);
+      UInt   rm        = INSN(14, 12);
+      UInt   rs1       = INSN(19, 15);
+      Bool   is_signed = INSN(20, 20) == 0b0;
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_I64);
+      assign(irsb, a1, getIReg64(rs1));
+      putFReg32(irsb, rd,
+                binop(is_signed ? Iop_I64StoF32 : Iop_I64UtoF32, mkexpr(rm_IR),
+                      mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             is_signed ? "riscv64g_calculate_fflags_fcvt_s_l"
+                                       : "riscv64g_calculate_fflags_fcvt_s_lu",
+                             is_signed ? riscv64g_calculate_fflags_fcvt_s_l
+                                       : riscv64g_calculate_fflags_fcvt_s_lu,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fcvt.s.l%s %s, %s%s\n", is_signed ? "" : "u", nameFReg(rd),
+          nameIReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   return False;
+}
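Throughout these decoders the field accesses go through INSN(hi, lo); its definition sits earlier in the file, but the effect is plain bit slicing of the 32-bit instruction word. A stand-alone sketch of that operation (plain C; the helper name is illustrative, not from the VEX sources):

   #include <stdint.h>

   /* Extract bits [hi:lo] of an instruction word, hi >= lo and the span
      narrower than 32 bits, as in every INSN(hi, lo) use above. */
   static inline uint32_t slice(uint32_t insn, unsigned hi, unsigned lo)
   {
      return (insn >> lo) & ((1u << (hi - lo + 1)) - 1u);
   }

   /* e.g. opcode = slice(insn, 6, 0), rd = slice(insn, 11, 7),
           funct3 = slice(insn, 14, 12), funct7 = slice(insn, 31, 25) */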
+
+static Bool dis_RV64D(/*MB_OUT*/ DisResult* dres,
+                      /*OUT*/ IRSB*         irsb,
+                      UInt                  insn)
+{
+   /* -------------- RV64D standard extension --------------- */
+
+   /* --------------- fld rd, imm[11:0](rs1) ---------------- */
+   if (INSN(6, 0) == 0b0000111 && INSN(14, 12) == 0b011) {
+      UInt  rd      = INSN(11, 7);
+      UInt  rs1     = INSN(19, 15);
+      UInt  imm11_0 = INSN(31, 20);
+      ULong simm    = vex_sx_to_64(imm11_0, 12);
+      putFReg64(irsb, rd,
+                loadLE(Ity_F64, binop(Iop_Add64, getIReg64(rs1), mkU64(simm))));
+      DIP("fld %s, %lld(%s)\n", nameFReg(rd), (Long)simm, nameIReg(rs1));
+      return True;
+   }
+
+   /* --------------- fsd rs2, imm[11:0](rs1) --------------- */
+   if (INSN(6, 0) == 0b0100111 && INSN(14, 12) == 0b011) {
+      UInt  rs1     = INSN(19, 15);
+      UInt  rs2     = INSN(24, 20);
+      UInt  imm11_0 = INSN(31, 25) << 5 | INSN(11, 7);
+      ULong simm    = vex_sx_to_64(imm11_0, 12);
+      storeLE(irsb, binop(Iop_Add64, getIReg64(rs1), mkU64(simm)),
+              getFReg64(rs2));
+      DIP("fsd %s, %lld(%s)\n", nameFReg(rs2), (Long)simm, nameIReg(rs1));
+      return True;
+   }
+
+   /* -------- f{madd,msub}.d rd, rs1, rs2, rs3, rm --------- */
+   /* ------- f{nmsub,nmadd}.d rd, rs1, rs2, rs3, rm -------- */
+   if (INSN(1, 0) == 0b11 && INSN(6, 4) == 0b100 && INSN(26, 25) == 0b01) {
+      UInt   opcode = INSN(6, 0);
+      UInt   rd     = INSN(11, 7);
+      UInt   rm     = INSN(14, 12);
+      UInt   rs1    = INSN(19, 15);
+      UInt   rs2    = INSN(24, 20);
+      UInt   rs3    = INSN(31, 27);
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      const HChar* name;
+      IRTemp       a1 = newTemp(irsb, Ity_F64);
+      IRTemp       a2 = newTemp(irsb, Ity_F64);
+      IRTemp       a3 = newTemp(irsb, Ity_F64);
+      switch (opcode) {
+      case 0b1000011:
+         name = "fmadd";
+         assign(irsb, a1, getFReg64(rs1));
+         assign(irsb, a2, getFReg64(rs2));
+         assign(irsb, a3, getFReg64(rs3));
+         break;
+      case 0b1000111:
+         name = "fmsub";
+         assign(irsb, a1, getFReg64(rs1));
+         assign(irsb, a2, getFReg64(rs2));
+         assign(irsb, a3, unop(Iop_NegF64, getFReg64(rs3)));
+         break;
+      case 0b1001011:
+         name = "fnmsub";
+         assign(irsb, a1, unop(Iop_NegF64, getFReg64(rs1)));
+         assign(irsb, a2, getFReg64(rs2));
+         assign(irsb, a3, getFReg64(rs3));
+         break;
+      case 0b1001111:
+         name = "fnmadd";
+         assign(irsb, a1, unop(Iop_NegF64, getFReg64(rs1)));
+         assign(irsb, a2, getFReg64(rs2));
+         assign(irsb, a3, unop(Iop_NegF64, getFReg64(rs3)));
+         break;
+      default:
+         vassert(0);
+      }
+      putFReg64(
+         irsb, rd,
+         qop(Iop_MAddF64, mkexpr(rm_IR), mkexpr(a1), mkexpr(a2), mkexpr(a3)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             "riscv64g_calculate_fflags_fmadd_d",
+                             riscv64g_calculate_fflags_fmadd_d,
+                             mkIRExprVec_4(mkexpr(a1), mkexpr(a2), mkexpr(a3),
+                                           mkexpr(rm_RISCV))));
+      DIP("%s.d %s, %s, %s, %s%s\n", name, nameFReg(rd), nameFReg(rs1),
+          nameFReg(rs2), nameFReg(rs3), nameRMOperand(rm));
+      return True;
+   }
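The single Iop_MAddF64 above covers all four fused forms by negating operands up front. The identity being used, spelled out in plain C (a sketch only; rounding-mode plumbing and NaN/signed-zero corners are ignored):

   #include <math.h>

   /* RISC-V fused forms expressed through one a*b + c:
        fmadd.d   (a * b) + c   ->  fma( a, b,  c)
        fmsub.d   (a * b) - c   ->  fma( a, b, -c)
        fnmsub.d -(a * b) + c   ->  fma(-a, b,  c)
        fnmadd.d -(a * b) - c   ->  fma(-a, b, -c)   */
   static double ref_fnmadd_d(double a, double b, double c)
   {
      return fma(-a, b, -c);
   }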
+
+   /* ------------ f{add,sub}.d rd, rs1, rs2, rm ------------ */
+   /* ------------ f{mul,div}.d rd, rs1, rs2, rm ------------ */
+   if (INSN(6, 0) == 0b1010011 && INSN(26, 25) == 0b01 &&
+       INSN(31, 29) == 0b000) {
+      UInt   rd     = INSN(11, 7);
+      UInt   rm     = INSN(14, 12);
+      UInt   rs1    = INSN(19, 15);
+      UInt   rs2    = INSN(24, 20);
+      UInt   funct7 = INSN(31, 25);
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      const HChar* name;
+      IROp         op;
+      IRTemp       a1 = newTemp(irsb, Ity_F64);
+      IRTemp       a2 = newTemp(irsb, Ity_F64);
+      const HChar* helper_name;
+      void*        helper_addr;
+      switch (funct7) {
+      case 0b0000001:
+         name = "fadd";
+         op   = Iop_AddF64;
+         assign(irsb, a1, getFReg64(rs1));
+         assign(irsb, a2, getFReg64(rs2));
+         helper_name = "riscv64g_calculate_fflags_fadd_d";
+         helper_addr = riscv64g_calculate_fflags_fadd_d;
+         break;
+      case 0b0000101:
+         name = "fsub";
+         op   = Iop_AddF64;
+         assign(irsb, a1, getFReg64(rs1));
+         assign(irsb, a2, unop(Iop_NegF64, getFReg64(rs2)));
+         helper_name = "riscv64g_calculate_fflags_fadd_d";
+         helper_addr = riscv64g_calculate_fflags_fadd_d;
+         break;
+      case 0b0001001:
+         name = "fmul";
+         op   = Iop_MulF64;
+         assign(irsb, a1, getFReg64(rs1));
+         assign(irsb, a2, getFReg64(rs2));
+         helper_name = "riscv64g_calculate_fflags_fmul_d";
+         helper_addr = riscv64g_calculate_fflags_fmul_d;
+         break;
+      case 0b0001101:
+         name = "fdiv";
+         op   = Iop_DivF64;
+         assign(irsb, a1, getFReg64(rs1));
+         assign(irsb, a2, getFReg64(rs2));
+         helper_name = "riscv64g_calculate_fflags_fdiv_d";
+         helper_addr = riscv64g_calculate_fflags_fdiv_d;
+         break;
+      default:
+         vassert(0);
+      }
+      putFReg64(irsb, rd, triop(op, mkexpr(rm_IR), mkexpr(a1), mkexpr(a2)));
+      accumulateFFLAGS(irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/, helper_name,
+                                           helper_addr,
+                                           mkIRExprVec_3(mkexpr(a1), mkexpr(a2),
+                                                         mkexpr(rm_RISCV))));
+      DIP("%s.d %s, %s, %s%s\n", name, nameFReg(rd), nameFReg(rs1),
+          nameFReg(rs2), nameRMOperand(rm));
+      return True;
+   }
+
+   /* ----------------- fsqrt.d rd, rs1, rm ----------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 20) == 0b00000 &&
+       INSN(31, 25) == 0b0101101) {
+      UInt   rd  = INSN(11, 7);
+      UInt   rm  = INSN(14, 12);
+      UInt   rs1 = INSN(19, 15);
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_F64);
+      assign(irsb, a1, getFReg64(rs1));
+      putFReg64(irsb, rd, binop(Iop_SqrtF64, mkexpr(rm_IR), mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             "riscv64g_calculate_fflags_fsqrt_d",
+                             riscv64g_calculate_fflags_fsqrt_d,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fsqrt.d %s, %s%s\n", nameFReg(rd), nameFReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   /* ---------------- fsgnj.d rd, rs1, rs2 ----------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b000 &&
+       INSN(31, 25) == 0b0010001) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rs1 == rs2) {
+         putFReg64(irsb, rd, getFReg64(rs1));
+         DIP("fmv.d %s, %s\n", nameFReg(rd), nameIReg(rs1));
+      } else {
+         putFReg64(
+            irsb, rd,
+            unop(Iop_ReinterpI64asF64,
+                 binop(
+                    Iop_Or64,
+                    binop(Iop_And64, unop(Iop_ReinterpF64asI64, getFReg64(rs1)),
+                          mkU64(0x7fffffffffffffff)),
+                    binop(Iop_And64, unop(Iop_ReinterpF64asI64, getFReg64(rs2)),
+                          mkU64(0x8000000000000000)))));
+         DIP("fsgnj.d %s, %s, %s\n", nameFReg(rd), nameIReg(rs1),
+             nameIReg(rs2));
+      }
+      return True;
+   }
+
+   /* ---------------- fsgnjn.d rd, rs1, rs2 ---------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b001 &&
+       INSN(31, 25) == 0b0010001) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rs1 == rs2) {
+         putFReg64(irsb, rd, unop(Iop_NegF64, getFReg64(rs1)));
+         DIP("fneg.d %s, %s\n", nameFReg(rd), nameIReg(rs1));
+      } else {
+         putFReg64(irsb, rd,
+                   unop(Iop_ReinterpI64asF64,
+                        binop(Iop_Or64,
+                              binop(Iop_And64,
+                                    unop(Iop_ReinterpF64asI64, getFReg64(rs1)),
+                                    mkU64(0x7fffffffffffffff)),
+                              binop(Iop_And64,
+                                    unop(Iop_ReinterpF64asI64,
+                                         unop(Iop_NegF64, getFReg64(rs2))),
+                                    mkU64(0x8000000000000000)))));
+         DIP("fsgnjn.d %s, %s, %s\n", nameFReg(rd), nameIReg(rs1),
+             nameIReg(rs2));
+      }
+      return True;
+   }
+
+   /* ---------------- fsgnjx.d rd, rs1, rs2 ---------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b010 &&
+       INSN(31, 25) == 0b0010001) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rs1 == rs2) {
+         putFReg64(irsb, rd, unop(Iop_AbsF64, getFReg64(rs1)));
+         DIP("fabs.d %s, %s\n", nameFReg(rd), nameIReg(rs1));
+      } else {
+         putFReg64(
+            irsb, rd,
+            unop(Iop_ReinterpI64asF64,
+                 binop(Iop_Xor64, unop(Iop_ReinterpF64asI64, getFReg64(rs1)),
+                       binop(Iop_And64,
+                             unop(Iop_ReinterpF64asI64, getFReg64(rs2)),
+                             mkU64(0x8000000000000000)))));
+         DIP("fsgnjx.d %s, %s, %s\n", nameFReg(rd), nameIReg(rs1),
+             nameIReg(rs2));
+      }
+      return True;
+   }
+
+   /* -------------- f{min,max}.d rd, rs1, rs2 -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(31, 25) == 0b0010101) {
+      UInt rd  = INSN(11, 7);
+      UInt rm  = INSN(14, 12);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rm != 0b000 && rm != 0b001) {
+         /* Invalid F{MIN,MAX}.D, fall through. */
+      } else {
+         const HChar* name;
+         IROp         op;
+         const HChar* helper_name;
+         void*        helper_addr;
+         switch (rm) {
+         case 0b000:
+            name        = "fmin";
+            op          = Iop_MinNumF64;
+            helper_name = "riscv64g_calculate_fflags_fmin_d";
+            helper_addr = riscv64g_calculate_fflags_fmin_d;
+            break;
+         case 0b001:
+            name        = "fmax";
+            op          = Iop_MaxNumF64;
+            helper_name = "riscv64g_calculate_fflags_fmax_d";
+            helper_addr = riscv64g_calculate_fflags_fmax_d;
+            break;
+         default:
+            vassert(0);
+         }
+         IRTemp a1 = newTemp(irsb, Ity_F64);
+         IRTemp a2 = newTemp(irsb, Ity_F64);
+         assign(irsb, a1, getFReg64(rs1));
+         assign(irsb, a2, getFReg64(rs2));
+         putFReg64(irsb, rd, binop(op, mkexpr(a1), mkexpr(a2)));
+         accumulateFFLAGS(irsb,
+                          mkIRExprCCall(Ity_I32, 0 /*regparms*/, helper_name,
+                                        helper_addr,
+                                        mkIRExprVec_2(mkexpr(a1), mkexpr(a2))));
+         DIP("%s.d %s, %s, %s\n", name, nameFReg(rd), nameFReg(rs1),
+             nameFReg(rs2));
+         return True;
+      }
+   }
+
+   /* ---------------- fcvt.s.d rd, rs1, rm ----------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 20) == 0b00001 &&
+       INSN(31, 25) == 0b0100000) {
+      UInt   rd  = INSN(11, 7);
+      UInt   rm  = INSN(14, 12);
+      UInt   rs1 = INSN(19, 15);
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_F64);
+      assign(irsb, a1, getFReg64(rs1));
+      putFReg32(irsb, rd, binop(Iop_F64toF32, mkexpr(rm_IR), mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             "riscv64g_calculate_fflags_fcvt_s_d",
+                             riscv64g_calculate_fflags_fcvt_s_d,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fcvt.s.d %s, %s%s\n", nameFReg(rd), nameFReg(rs1),
+          nameRMOperand(rm));
+      return True;
+   }
+
+   /* ---------------- fcvt.d.s rd, rs1, rm ----------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 20) == 0b00000 &&
+       INSN(31, 25) == 0b0100001) {
+      UInt rd  = INSN(11, 7);
+      UInt rm  = INSN(14, 12); /* Ignored as the result is always exact. */
+      UInt rs1 = INSN(19, 15);
+      putFReg64(irsb, rd, unop(Iop_F32toF64, getFReg32(rs1)));
+      DIP("fcvt.d.s %s, %s%s\n", nameFReg(rd), nameFReg(rs1),
+          nameRMOperand(rm));
+      return True;
+   }
+
+   /* ------------- f{eq,lt,le}.d rd, rs1, rs2 -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(31, 25) == 0b1010001) {
+      UInt rd  = INSN(11, 7);
+      UInt rm  = INSN(14, 12);
+      UInt rs1 = INSN(19, 15);
+      UInt rs2 = INSN(24, 20);
+      if (rm != 0b010 && rm != 0b001 && rm != 0b000) {
+         /* Invalid F{EQ,LT,LE}.D, fall through. */
+      } else {
+         IRTemp a1 = newTemp(irsb, Ity_F64);
+         IRTemp a2 = newTemp(irsb, Ity_F64);
+         assign(irsb, a1, getFReg64(rs1));
+         assign(irsb, a2, getFReg64(rs2));
+         if (rd != 0) {
+            IRTemp cmp = newTemp(irsb, Ity_I32);
+            assign(irsb, cmp, binop(Iop_CmpF64, mkexpr(a1), mkexpr(a2)));
+            IRTemp res = newTemp(irsb, Ity_I1);
+            switch (rm) {
+            case 0b010:
+               assign(irsb, res,
+                      binop(Iop_CmpEQ32, mkexpr(cmp), mkU32(Ircr_EQ)));
+               break;
+            case 0b001:
+               assign(irsb, res,
+                      binop(Iop_CmpEQ32, mkexpr(cmp), mkU32(Ircr_LT)));
+               break;
+            case 0b000:
+               assign(irsb, res,
+                      binop(Iop_Or1,
+                            binop(Iop_CmpEQ32, mkexpr(cmp), mkU32(Ircr_LT)),
+                            binop(Iop_CmpEQ32, mkexpr(cmp), mkU32(Ircr_EQ))));
+               break;
+            default:
+               vassert(0);
+            }
+            putIReg64(irsb, rd, unop(Iop_1Uto64, mkexpr(res)));
+         }
+         const HChar* name;
+         const HChar* helper_name;
+         void*        helper_addr;
+         switch (rm) {
+         case 0b010:
+            name        = "feq";
+            helper_name = "riscv64g_calculate_fflags_feq_d";
+            helper_addr = riscv64g_calculate_fflags_feq_d;
+            break;
+         case 0b001:
+            name        = "flt";
+            helper_name = "riscv64g_calculate_fflags_flt_d";
+            helper_addr = riscv64g_calculate_fflags_flt_d;
+            break;
+         case 0b000:
+            name        = "fle";
+            helper_name = "riscv64g_calculate_fflags_fle_d";
+            helper_addr = riscv64g_calculate_fflags_fle_d;
+            break;
+         default:
+            vassert(0);
+         }
+         accumulateFFLAGS(irsb,
+                          mkIRExprCCall(Ity_I32, 0 /*regparms*/, helper_name,
+                                        helper_addr,
+                                        mkIRExprVec_2(mkexpr(a1), mkexpr(a2))));
+         DIP("%s.d %s, %s, %s\n", name, nameIReg(rd), nameFReg(rs1),
+             nameFReg(rs2));
+         return True;
+      }
+   }
+
+   /* ------------------ fclass.d rd, rs1 ------------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b001 &&
+       INSN(24, 20) == 0b00000 && INSN(31, 25) == 0b1110001) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      if (rd != 0)
+         putIReg64(irsb, rd,
+                   mkIRExprCCall(Ity_I64, 0 /*regparms*/,
+                                 "riscv64g_calculate_fclass_d",
+                                 riscv64g_calculate_fclass_d,
+                                 mkIRExprVec_1(getFReg64(rs1))));
+      DIP("fclass.d %s, %s\n", nameIReg(rd), nameFReg(rs1));
+      return True;
+   }
+
+   /* -------------- fcvt.{w,wu}.d rd, rs1, rm -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 21) == 0b0000 &&
+       INSN(31, 25) == 0b1100001) {
+      UInt   rd        = INSN(11, 7);
+      UInt   rm        = INSN(14, 12);
+      UInt   rs1       = INSN(19, 15);
+      Bool   is_signed = INSN(20, 20) == 0b0;
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_F64);
+      assign(irsb, a1, getFReg64(rs1));
+      if (rd != 0)
+         putIReg32(irsb, rd,
+                   binop(is_signed ? Iop_F64toI32S : Iop_F64toI32U,
+                         mkexpr(rm_IR), mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             is_signed ? "riscv64g_calculate_fflags_fcvt_w_d"
+                                       : "riscv64g_calculate_fflags_fcvt_wu_d",
+                             is_signed ? riscv64g_calculate_fflags_fcvt_w_d
+                                       : riscv64g_calculate_fflags_fcvt_wu_d,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fcvt.w%s.d %s, %s%s\n", is_signed ? "" : "u", nameIReg(rd),
+          nameFReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   /* -------------- fcvt.d.{w,wu} rd, rs1, rm -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 21) == 0b0000 &&
+       INSN(31, 25) == 0b1101001) {
+      UInt rd  = INSN(11, 7);
+      UInt rm  = INSN(14, 12); /* Ignored as the result is always exact. */
+      UInt rs1 = INSN(19, 15);
+      Bool is_signed = INSN(20, 20) == 0b0;
+      putFReg64(
+         irsb, rd,
+         unop(is_signed ? Iop_I32StoF64 : Iop_I32UtoF64, getIReg32(rs1)));
+      DIP("fcvt.d.w%s %s, %s%s\n", is_signed ? "" : "u", nameFReg(rd),
+          nameIReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   /* -------------- fcvt.{l,lu}.d rd, rs1, rm -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 21) == 0b0001 &&
+       INSN(31, 25) == 0b1100001) {
+      UInt   rd        = INSN(11, 7);
+      UInt   rm        = INSN(14, 12);
+      UInt   rs1       = INSN(19, 15);
+      Bool   is_signed = INSN(20, 20) == 0b0;
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_F64);
+      assign(irsb, a1, getFReg64(rs1));
+      if (rd != 0)
+         putIReg64(irsb, rd,
+                   binop(is_signed ? Iop_F64toI64S : Iop_F64toI64U,
+                         mkexpr(rm_IR), mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             is_signed ? "riscv64g_calculate_fflags_fcvt_l_d"
+                                       : "riscv64g_calculate_fflags_fcvt_lu_d",
+                             is_signed ? riscv64g_calculate_fflags_fcvt_l_d
+                                       : riscv64g_calculate_fflags_fcvt_lu_d,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fcvt.l%s.d %s, %s%s\n", is_signed ? "" : "u", nameIReg(rd),
+          nameFReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   /* ------------------- fmv.x.d rd, rs1 ------------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b000 &&
+       INSN(24, 20) == 0b00000 && INSN(31, 25) == 0b1110001) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      if (rd != 0)
+         putIReg64(irsb, rd, unop(Iop_ReinterpF64asI64, getFReg64(rs1)));
+      DIP("fmv.x.d %s, %s\n", nameIReg(rd), nameFReg(rs1));
+      return True;
+   }
+
+   /* -------------- fcvt.d.{l,lu} rd, rs1, rm -------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(24, 21) == 0b0001 &&
+       INSN(31, 25) == 0b1101001) {
+      UInt   rd        = INSN(11, 7);
+      UInt   rm        = INSN(14, 12);
+      UInt   rs1       = INSN(19, 15);
+      Bool   is_signed = INSN(20, 20) == 0b0;
+      IRTemp rm_RISCV, rm_IR;
+      mk_get_rounding_mode(irsb, &rm_RISCV, &rm_IR, rm);
+      IRTemp a1 = newTemp(irsb, Ity_I64);
+      assign(irsb, a1, getIReg64(rs1));
+      putFReg64(irsb, rd,
+                binop(is_signed ? Iop_I64StoF64 : Iop_I64UtoF64, mkexpr(rm_IR),
+                      mkexpr(a1)));
+      accumulateFFLAGS(
+         irsb, mkIRExprCCall(Ity_I32, 0 /*regparms*/,
+                             is_signed ? "riscv64g_calculate_fflags_fcvt_d_l"
+                                       : "riscv64g_calculate_fflags_fcvt_d_lu",
+                             is_signed ? riscv64g_calculate_fflags_fcvt_d_l
+                                       : riscv64g_calculate_fflags_fcvt_d_lu,
+                             mkIRExprVec_2(mkexpr(a1), mkexpr(rm_RISCV))));
+      DIP("fcvt.d.l%s %s, %s%s\n", is_signed ? "" : "u", nameFReg(rd),
+          nameIReg(rs1), nameRMOperand(rm));
+      return True;
+   }
+
+   /* ------------------- fmv.d.x rd, rs1 ------------------- */
+   if (INSN(6, 0) == 0b1010011 && INSN(14, 12) == 0b000 &&
+       INSN(24, 20) == 0b00000 && INSN(31, 25) == 0b1111001) {
+      UInt rd  = INSN(11, 7);
+      UInt rs1 = INSN(19, 15);
+      putFReg64(irsb, rd, unop(Iop_ReinterpI64asF64, getIReg64(rs1)));
+      DIP("fmv.d.x %s, %s\n", nameFReg(rd), nameIReg(rs1));
+      return True;
+   }
+
+   return False;
+}
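The three sign-injection cases above work purely on the 64-bit image of the operands. The same operations in ordinary integer C, assuming the usual binary64 layout with the sign in bit 63 (a sketch):

   #include <stdint.h>

   #define F64_SIGN 0x8000000000000000ull
   #define F64_MAG  0x7fffffffffffffffull

   /* fsgnj.d:  rs1's magnitude, rs2's sign. */
   static uint64_t fsgnj_d(uint64_t rs1, uint64_t rs2)
   { return (rs1 & F64_MAG) | (rs2 & F64_SIGN); }

   /* fsgnjn.d: rs1's magnitude, the inverse of rs2's sign. */
   static uint64_t fsgnjn_d(uint64_t rs1, uint64_t rs2)
   { return (rs1 & F64_MAG) | (~rs2 & F64_SIGN); }

   /* fsgnjx.d: rs1 with its sign flipped when rs2 is negative. */
   static uint64_t fsgnjx_d(uint64_t rs1, uint64_t rs2)
   { return rs1 ^ (rs2 & F64_SIGN); }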
+
+static Bool dis_RV64Zicsr(/*MB_OUT*/ DisResult* dres,
+                          /*OUT*/ IRSB*         irsb,
+                          UInt                  insn)
+{
+   /* ------------ RV64Zicsr standard extension ------------- */
+
+   /* -------------- csrr{w,s,c} rd, csr, rs1 --------------- */
+   if (INSN(6, 0) == 0b1110011) {
+      UInt rd     = INSN(11, 7);
+      UInt funct3 = INSN(14, 12);
+      UInt rs1    = INSN(19, 15);
+      UInt csr    = INSN(31, 20);
+      if ((funct3 != 0b001 && funct3 != 0b010 && funct3 != 0b011) ||
+          (csr != 0x001 && csr != 0x002 && csr != 0x003)) {
+         /* Invalid CSRR{W,S,C}, fall through. */
+      } else {
+         switch (csr) {
+         case 0x001: {
+            /* fflags */
+            IRTemp fcsr = newTemp(irsb, Ity_I32);
+            assign(irsb, fcsr, getFCSR());
+            if (rd != 0)
+               putIReg64(irsb, rd,
+                         unop(Iop_32Uto64,
+                              binop(Iop_And32, mkexpr(fcsr), mkU32(0x1f))));
+
+            IRExpr* expr;
+            switch (funct3) {
+            case 0b001:
+               expr = binop(Iop_Or32,
+                            binop(Iop_And32, mkexpr(fcsr), mkU32(0xffffffe0)),
+                            binop(Iop_And32, getIReg32(rs1), mkU32(0x1f)));
+               break;
+            case 0b010:
+               expr = binop(Iop_Or32, mkexpr(fcsr),
+                            binop(Iop_And32, getIReg32(rs1), mkU32(0x1f)));
+               break;
+            case 0b011:
+               expr = binop(Iop_And32, mkexpr(fcsr),
+                            unop(Iop_Not32, binop(Iop_And32, getIReg32(rs1),
+                                                  mkU32(0x1f))));
+               break;
+            default:
+               vassert(0);
+            }
+            putFCSR(irsb, expr);
+            break;
+         }
+         case 0x002: {
+            /* frm */
+            IRTemp fcsr = newTemp(irsb, Ity_I32);
+            assign(irsb, fcsr, getFCSR());
+            if (rd != 0)
+               putIReg64(
+                  irsb, rd,
+                  unop(Iop_32Uto64,
+                       binop(Iop_And32, binop(Iop_Shr32, mkexpr(fcsr), mkU8(5)),
+                             mkU32(0x7))));
+
+            IRExpr* expr;
+            switch (funct3) {
+            case 0b001:
+               expr = binop(
+                  Iop_Or32, binop(Iop_And32, mkexpr(fcsr), mkU32(0xffffff1f)),
+                  binop(Iop_Shl32, binop(Iop_And32, getIReg32(rs1), mkU32(0x7)),
+                        mkU8(5)));
+               break;
+            case 0b010:
+               expr = binop(Iop_Or32, mkexpr(fcsr),
+                            binop(Iop_Shl32,
+                                  binop(Iop_And32, getIReg32(rs1), mkU32(0x7)),
+                                  mkU8(5)));
+               break;
+            case 0b011:
+               expr =
+                  binop(Iop_And32, mkexpr(fcsr),
+                        unop(Iop_Not32,
+                             binop(Iop_Shl32,
+                                   binop(Iop_And32, getIReg32(rs1), mkU32(0x7)),
+                                   mkU8(5))));
+               break;
+            default:
+               vassert(0);
+            }
+            putFCSR(irsb, expr);
+            break;
+         }
+         case 0x003: {
+            /* fcsr */
+            IRTemp fcsr = newTemp(irsb, Ity_I32);
+            assign(irsb, fcsr, getFCSR());
+            if (rd != 0)
+               putIReg64(irsb, rd, unop(Iop_32Uto64, mkexpr(fcsr)));
+
+            IRExpr* expr;
+            switch (funct3) {
+            case 0b001:
+               expr = binop(Iop_And32, getIReg32(rs1), mkU32(0xff));
+               break;
+            case 0b010:
+               expr = binop(Iop_Or32, mkexpr(fcsr),
+                            binop(Iop_And32, getIReg32(rs1), mkU32(0xff)));
+               break;
+            case 0b011:
+               expr = binop(Iop_And32, mkexpr(fcsr),
+                            unop(Iop_Not32, binop(Iop_And32, getIReg32(rs1),
+                                                  mkU32(0xff))));
+               break;
+            default:
+               vassert(0);
+            }
+            putFCSR(irsb, expr);
+            break;
+         }
+         default:
+            vassert(0);
+         }
+
+         const HChar* name;
+         switch (funct3) {
+         case 0b001:
+            name = "csrrw";
+            break;
+         case 0b010:
+            name = "csrrs";
+            break;
+         case 0b011:
+            name = "csrrc";
+            break;
+         default:
+            vassert(0);
+         }
+         DIP("%s %s, %s, %s\n", name, nameIReg(rd), nameCSR(csr),
+             nameIReg(rs1));
+         return True;
+      }
+   }
+
+   return False;
+}
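The masks used in the three CSR cases above (0x1f, 0x7 shifted left by 5, 0xff) correspond to the standard fcsr layout; a small reference of that layout in plain C (helper names are illustrative):

   #include <stdint.h>

   /* fcsr layout assumed by the masks above:
        bits [4:0] fflags (NV, DZ, OF, UF, NX, from bit 4 down to bit 0)
        bits [7:5] frm    (dynamic rounding mode)
      and the architectural fcsr is just these low 8 bits. */
   static uint32_t read_fflags(uint32_t fcsr) { return fcsr & 0x1f; }
   static uint32_t read_frm(uint32_t fcsr)    { return (fcsr >> 5) & 0x7; }
   static uint32_t write_frm(uint32_t fcsr, uint32_t rm)
   { return (fcsr & ~(0x7u << 5)) | ((rm & 0x7u) << 5); }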
+
+static Bool dis_RISCV64_standard(/*MB_OUT*/ DisResult* dres,
+                                 /*OUT*/ IRSB*         irsb,
+                                 UInt                  insn,
+                                 Addr                  guest_pc_curr_instr,
+                                 const VexAbiInfo*     abiinfo,
+                                 Bool                  sigill_diag)
+{
+   vassert(INSN(1, 0) == 0b11);
+
+   Bool ok = False;
+   if (!ok)
+      ok = dis_RV64I(dres, irsb, insn, guest_pc_curr_instr);
+   if (!ok)
+      ok = dis_RV64M(dres, irsb, insn);
+   if (!ok)
+      ok = dis_RV64A(dres, irsb, insn, guest_pc_curr_instr, abiinfo);
+   if (!ok)
+      ok = dis_RV64F(dres, irsb, insn);
+   if (!ok)
+      ok = dis_RV64D(dres, irsb, insn);
+   if (!ok)
+      ok = dis_RV64Zicsr(dres, irsb, insn);
+   if (ok)
+      return True;
+
+   if (sigill_diag)
+      vex_printf("RISCV64 front end: standard\n");
+   return False;
+}
+
+/* Disassemble a single riscv64 instruction into IR. Returns True iff the
+   instruction was decoded, in which case *dres will be set accordingly, or
+   False, in which case *dres should be ignored by the caller. */
+static Bool disInstr_RISCV64_WRK(/*MB_OUT*/ DisResult* dres,
+                                 /*OUT*/ IRSB*         irsb,
+                                 const UChar*          guest_instr,
+                                 Addr                  guest_pc_curr_instr,
+                                 const VexArchInfo*    archinfo,
+                                 const VexAbiInfo*     abiinfo,
+                                 Bool                  sigill_diag)
+{
+   /* Set result defaults. */
+   dres->whatNext    = Dis_Continue;
+   dres->len         = 0;
+   dres->jk_StopHere = Ijk_INVALID;
+   dres->hint        = Dis_HintNone;
+
+   /* Read the instruction word. */
+   UInt insn = getInsn(guest_instr);
+
+   if (0)
+      vex_printf("insn: 0x%x\n", insn);
+
+   DIP("\t(riscv64) 0x%llx:  ", (ULong)guest_pc_curr_instr);
+
+   vassert((guest_pc_curr_instr & 1) == 0);
+
+   /* Spot "Special" instructions (see comment at top of file). */
+   {
+      const UChar* code = guest_instr;
+      /* Spot the 16-byte preamble:
+            00305013   srli zero, zero, 3
+            00d05013   srli zero, zero, 13
+            03305013   srli zero, zero, 51
+            03d05013   srli zero, zero, 61
+      */
+      UInt word1 = 0x00305013;
+      UInt word2 = 0x00d05013;
+      UInt word3 = 0x03305013;
+      UInt word4 = 0x03d05013;
+      if (getUIntLittleEndianly(code + 0) == word1 &&
+          getUIntLittleEndianly(code + 4) == word2 &&
+          getUIntLittleEndianly(code + 8) == word3 &&
+          getUIntLittleEndianly(code + 12) == word4) {
+         /* Got a "Special" instruction preamble. Which one is it? */
+         dres->len  = 20;
+         UInt which = getUIntLittleEndianly(code + 16);
+         if (which == 0x00a56533 /* or a0, a0, a0 */) {
+            /* a3 = client_request ( a4 ) */
+            DIP("a3 = client_request ( a4 )\n");
+            putPC(irsb, mkU64(guest_pc_curr_instr + 20));
+            dres->jk_StopHere = Ijk_ClientReq;
+            dres->whatNext    = Dis_StopHere;
+            return True;
+         } else if (which == 0x00b5e5b3 /* or a1, a1, a1 */) {
+            /* a3 = guest_NRADDR */
+            DIP("a3 = guest_NRADDR\n");
+            putIReg64(irsb, 13 /*x13/a3*/, IRExpr_Get(OFFB_NRADDR, Ity_I64));
+            return True;
+         } else if (which == 0x00c66633 /* or a2, a2, a2 */) {
+            /* branch-and-link-to-noredir t0 */
+            DIP("branch-and-link-to-noredir t0\n");
+            putIReg64(irsb, 1 /*x1/ra*/, mkU64(guest_pc_curr_instr + 20));
+            putPC(irsb, getIReg64(5 /*x5/t0*/));
+            dres->jk_StopHere = Ijk_NoRedir;
+            dres->whatNext    = Dis_StopHere;
+            return True;
+         } else if (which == 0x00d6e6b3 /* or a3, a3, a3 */) {
+            /* IR injection */
+            DIP("IR injection\n");
+            vex_inject_ir(irsb, Iend_LE);
+            /* Invalidate the current insn. The reason is that the IRop we're
+               injecting here can change. In which case the translation has to
+               be redone. For ease of handling, we simply invalidate all the
+               time. */
+            stmt(irsb, IRStmt_Put(OFFB_CMSTART, mkU64(guest_pc_curr_instr)));
+            stmt(irsb, IRStmt_Put(OFFB_CMLEN, mkU64(20)));
+            putPC(irsb, mkU64(guest_pc_curr_instr + 20));
+            dres->whatNext    = Dis_StopHere;
+            dres->jk_StopHere = Ijk_InvalICache;
+            return True;
+         }
+         /* We don't know what it is. */
+         return False;
+      }
+   }
+
+   /* Main riscv64 instruction decoder starts here. */
+   Bool ok = False;
+   UInt inst_size;
+
+   /* Parse insn[1:0] to determine whether the instruction is 16-bit
+      (compressed) or 32-bit. */
+   switch (INSN(1, 0)) {
+   case 0b00:
+   case 0b01:
+   case 0b10:
+      dres->len = inst_size = 2;
+      ok = dis_RV64C(dres, irsb, insn, guest_pc_curr_instr, sigill_diag);
+      break;
+
+   case 0b11:
+      dres->len = inst_size = 4;
+      ok = dis_RISCV64_standard(dres, irsb, insn, guest_pc_curr_instr, abiinfo,
+                                sigill_diag);
+      break;
+
+   default:
+      vassert(0); /* Can't happen. */
+   }
+
+   /* If the next-level down decoders failed, make sure dres didn't get
+      changed. */
+   if (!ok) {
+      vassert(dres->whatNext == Dis_Continue);
+      vassert(dres->len == inst_size);
+      vassert(dres->jk_StopHere == Ijk_INVALID);
+   }
+
+   return ok;
+}
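The 2-byte/4-byte split in the worker above is the base ISA rule: low bits 11 mean a standard 32-bit encoding, anything else a compressed 16-bit one (longer encodings are not handled here). As a one-liner, under that assumption:

   #include <stdint.h>

   /* Instruction length implied by the low two bits, for RV64GC only. */
   static unsigned rv_insn_len(uint16_t first_halfword)
   {
      return (first_halfword & 0x3) == 0x3 ? 4 : 2;
   }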
+
+#undef INSN
+
+/*------------------------------------------------------------*/
+/*--- Top-level fn                                         ---*/
+/*------------------------------------------------------------*/
+
+/* Disassemble a single instruction into IR. The instruction is located in host
+   memory at &guest_code[delta]. */
+DisResult disInstr_RISCV64(IRSB*              irsb,
+                           const UChar*       guest_code,
+                           Long               delta,
+                           Addr               guest_IP,
+                           VexArch            guest_arch,
+                           const VexArchInfo* archinfo,
+                           const VexAbiInfo*  abiinfo,
+                           VexEndness         host_endness,
+                           Bool               sigill_diag)
+{
+   DisResult dres;
+   vex_bzero(&dres, sizeof(dres));
+
+   vassert(guest_arch == VexArchRISCV64);
+   /* Check that the host is little-endian as getFReg32() and putFReg32() depend
+      on this fact. */
+   vassert(host_endness == VexEndnessLE);
+
+   /* Try to decode. */
+   Bool ok = disInstr_RISCV64_WRK(&dres, irsb, &guest_code[delta], guest_IP,
+                                  archinfo, abiinfo, sigill_diag);
+   if (ok) {
+      /* All decode successes end up here. */
+      vassert(dres.len == 2 || dres.len == 4 || dres.len == 20);
+      switch (dres.whatNext) {
+      case Dis_Continue:
+         putPC(irsb, mkU64(guest_IP + dres.len));
+         break;
+      case Dis_StopHere:
+         break;
+      default:
+         vassert(0);
+      }
+      DIP("\n");
+   } else {
+      /* All decode failures end up here. */
+      if (sigill_diag) {
+         Int   i, j;
+         UChar buf[64];
+         UInt  insn = getInsn(&guest_code[delta]);
+         vex_bzero(buf, sizeof(buf));
+         for (i = j = 0; i < 32; i++) {
+            if (i > 0) {
+               if ((i & 7) == 0)
+                  buf[j++] = ' ';
+               else if ((i & 3) == 0)
+                  buf[j++] = '\'';
+            }
+            buf[j++] = (insn & (1 << (31 - i))) ? '1' : '0';
+         }
+         vex_printf("disInstr(riscv64): unhandled instruction 0x%08x\n", insn);
+         vex_printf("disInstr(riscv64): %s\n", buf);
+      }
+
+      /* Tell the dispatcher that this insn cannot be decoded, and so has not
+         been executed, and (is currently) the next to be executed. The pc
+         register should be up-to-date since it is made so at the start of each
+         insn, but nevertheless be paranoid and update it again right now. */
+      putPC(irsb, mkU64(guest_IP));
+      dres.len         = 0;
+      dres.whatNext    = Dis_StopHere;
+      dres.jk_StopHere = Ijk_NoDecode;
+   }
+   return dres;
+}
+
+/*--------------------------------------------------------------------*/
+/*--- end                                     guest_riscv64_toIR.c ---*/
+/*--------------------------------------------------------------------*/
diff --git a/VEX/priv/guest_s390_defs.h b/VEX/priv/guest_s390_defs.h
index a64d563..29efa01 100644
--- a/VEX/priv/guest_s390_defs.h
+++ b/VEX/priv/guest_s390_defs.h
@@ -34,6 +34,7 @@
 #include "libvex_basictypes.h"        // offsetof
 #include "guest_generic_bb_to_IR.h"   // DisResult
 #include "libvex_guest_s390x.h"       // VexGuestS390XState
+#include "main_util.h"                // STATIC_ASSERT
 
 
 /* Convert one s390 insn to IR.  See the type DisOneInstrFn in
@@ -89,9 +90,7 @@ UInt  s390_do_cvb(ULong decimal);
 ULong s390_do_cvd(ULong binary);
 ULong s390_do_ecag(ULong op2addr);
 UInt  s390_do_pfpo(UInt gpr0);
-void  s390x_dirtyhelper_PPNO_query(VexGuestS390XState *guest_state, ULong r1, ULong r2);
-ULong  s390x_dirtyhelper_PPNO_sha512(VexGuestS390XState *guest_state, ULong r1, ULong r2);
-void  s390x_dirtyhelper_PPNO_sha512_load_param_block( void );
+
 /* The various ways to compute the condition code. */
 enum {
    S390_CC_OP_BITWISE = 0,
@@ -156,7 +155,8 @@ enum {
    S390_CC_OP_PFPO_64 = 59,
    S390_CC_OP_PFPO_128 = 60,
    S390_CC_OP_MUL_32 = 61,
-   S390_CC_OP_MUL_64 = 62
+   S390_CC_OP_MUL_64 = 62,
+   S390_CC_OP_BITWISE2 = 63
 };
 
 /*------------------------------------------------------------*/
@@ -268,7 +268,6 @@ typedef enum {
    S390_VEC_OP_VPKS,
    S390_VEC_OP_VPKLS,
    S390_VEC_OP_VCEQ,
-   S390_VEC_OP_VTM,
    S390_VEC_OP_VGFM,
    S390_VEC_OP_VGFMA,
    S390_VEC_OP_VMAH,
diff --git a/VEX/priv/guest_s390_helpers.c b/VEX/priv/guest_s390_helpers.c
index 94d0a24..6e0321f 100644
--- a/VEX/priv/guest_s390_helpers.c
+++ b/VEX/priv/guest_s390_helpers.c
@@ -1454,7 +1454,10 @@ s390_calculate_cc(ULong cc_op, ULong cc_dep1, ULong cc_dep2, ULong cc_ndep)
    switch (cc_op) {
 
    case S390_CC_OP_BITWISE:
-      return S390_CC_FOR_BINARY("ogr", cc_dep1, (ULong)0);
+      return cc_dep1 == 0 ? 0 : 1;
+
+   case S390_CC_OP_BITWISE2:
+      return cc_dep1 == 0 ? 0 : 3;
 
    case S390_CC_OP_SIGNED_COMPARE:
       return S390_CC_FOR_BINARY("cgr", cc_dep1, cc_dep2);
@@ -2045,6 +2048,21 @@ guest_s390x_spechelper(const HChar *function_name, IRExpr **args,
          return mkU32(0);
       }
 
+      /* S390_CC_OP_BITWISE2
+         Like S390_CC_OP_BITWISE, but yielding cc = 3 for nonzero result */
+      if (cc_op == S390_CC_OP_BITWISE2) {
+         if ((cond & (8 + 1)) == 8 + 1) {
+            return mkU32(1);
+         }
+         if (cond & 8) {
+            return unop(Iop_1Uto32, binop(Iop_CmpEQ64, cc_dep1, mkU64(0)));
+         }
+         if (cond & 1) {
+            return unop(Iop_1Uto32, binop(Iop_CmpNE64, cc_dep1, mkU64(0)));
+         }
+         return mkU32(0);
+      }
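For context on the three returns just added: an s390 branch mask has bit 8 standing for cc 0 down to bit 1 for cc 3, and BITWISE2 only ever yields cc 0 (zero result) or cc 3 (nonzero). A plain reference of that evaluation (a sketch; names are illustrative):

   /* cc produced by S390_CC_OP_BITWISE2, and whether a 4-bit branch
      mask accepts a given cc (bit 8 <-> cc 0, ..., bit 1 <-> cc 3). */
   static unsigned bitwise2_cc(unsigned long long dep1)
   { return dep1 == 0 ? 0 : 3; }

   static int mask_accepts(unsigned cond, unsigned cc)
   { return (cond >> (3 - cc)) & 1; }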
+
       /* S390_CC_OP_INSERT_CHAR_MASK_32
          Since the mask comes from an immediate field in the opcode, we
          expect the mask to be a constant here. That simplifies matters. */
@@ -2122,8 +2140,8 @@ guest_s390x_spechelper(const HChar *function_name, IRExpr **args,
       }
 
       /* S390_CC_OP_TEST_UNDER_MASK_8
-         Since the mask comes from an immediate field in the opcode, we
-         expect the mask to be a constant here. That simplifies matters. */
+         cc_dep1 = the value to be tested, ANDed with the mask
+         cc_dep2 = an 8-bit mask; expected to be a constant here */
       if (cc_op == S390_CC_OP_TEST_UNDER_MASK_8) {
          ULong mask16;
 
@@ -2131,142 +2149,116 @@ guest_s390x_spechelper(const HChar *function_name, IRExpr **args,
 
          mask16 = cc_dep2->Iex.Const.con->Ico.U64;
 
-         /* Get rid of the mask16 == 0 case first. Some of the simplifications
-            below (e.g. for OVFL) only hold if mask16 == 0.  */
          if (mask16 == 0) {   /* cc == 0 */
             if (cond & 0x8) return mkU32(1);
             return mkU32(0);
          }
 
          /* cc == 2 is a don't care */
-         if (cond == 8 || cond == 8 + 2) {
-            return unop(Iop_1Uto32, binop(Iop_CmpEQ64,
-                                          binop(Iop_And64, cc_dep1, cc_dep2),
-                                          mkU64(0)));
+         if (cond == 8 || cond == 8 + 2) { /* all bits zero */
+            return unop(Iop_1Uto32, binop(Iop_CmpEQ64, cc_dep1, mkU64(0)));
          }
-         if (cond == 7 || cond == 7 - 2) {
-            return unop(Iop_1Uto32, binop(Iop_CmpNE64,
-                                          binop(Iop_And64, cc_dep1, cc_dep2),
-                                          mkU64(0)));
+         if (cond == 7 || cond == 7 - 2) { /* not all bits zero */
+            return unop(Iop_1Uto32, binop(Iop_CmpNE64, cc_dep1, mkU64(0)));
          }
-         if (cond == 1 || cond == 1 + 2) {
-            return unop(Iop_1Uto32, binop(Iop_CmpEQ64,
-                                          binop(Iop_And64, cc_dep1, cc_dep2),
-                                          cc_dep2));
+         if (cond == 1 || cond == 1 + 2) { /* all bits set */
+            return unop(Iop_1Uto32, binop(Iop_CmpEQ64, cc_dep1, cc_dep2));
          }
-         if (cond == 14 || cond == 14 - 2) {  /* ! OVFL */
-            return unop(Iop_1Uto32, binop(Iop_CmpNE64,
-                                          binop(Iop_And64, cc_dep1, cc_dep2),
-                                          cc_dep2));
+         if (cond == 14 || cond == 14 - 2) { /* not all bits set */
+            return unop(Iop_1Uto32, binop(Iop_CmpNE64, cc_dep1, cc_dep2));
          }
          goto missed;
       }
 
       /* S390_CC_OP_TEST_UNDER_MASK_16
-         Since the mask comes from an immediate field in the opcode, we
-         expect the mask to be a constant here. That simplifies matters. */
+         cc_dep1 = the value to be tested, ANDed with the mask
+         cc_dep2 = a 16-bit mask; expected to be a constant here */
       if (cc_op == S390_CC_OP_TEST_UNDER_MASK_16) {
-         ULong mask16;
-         UInt msb;
+         IRExpr* val = cc_dep1;
+         ULong mask;
+         ULong msb;
 
          if (! isC64(cc_dep2)) goto missed;
 
-         mask16 = cc_dep2->Iex.Const.con->Ico.U64;
+         mask = cc_dep2->Iex.Const.con->Ico.U64;
 
-         /* Get rid of the mask16 == 0 case first. Some of the simplifications
-            below (e.g. for OVFL) only hold if mask16 == 0.  */
-         if (mask16 == 0) {   /* cc == 0 */
+         if (mask == 0) {   /* cc == 0 */
             if (cond & 0x8) return mkU32(1);
             return mkU32(0);
          }
 
+         /* Find MSB in mask */
+         msb = 0x8000;
+         while (msb > mask)
+            msb >>= 1;
+
+         /* If cc_dep1 results from a shift, avoid the shift operation */
+         if (val->tag == Iex_Binop && val->Iex.Binop.op == Iop_Shr64 &&
+             val->Iex.Binop.arg2->tag == Iex_Const &&
+             val->Iex.Binop.arg2->Iex.Const.con->tag == Ico_U8) {
+            UInt n_bits = val->Iex.Binop.arg2->Iex.Const.con->Ico.U8;
+            mask <<= n_bits;
+            msb <<= n_bits;
+            val = val->Iex.Binop.arg1;
+         }
+
          if (cond == 15) return mkU32(1);
 
-         if (cond == 8) {
-            return unop(Iop_1Uto32, binop(Iop_CmpEQ64,
-                                          binop(Iop_And64, cc_dep1, cc_dep2),
-                                          mkU64(0)));
+         if (cond == 8) { /* all bits zero */
+            return unop(Iop_1Uto32, binop(Iop_CmpEQ64, val, mkU64(0)));
          }
-         if (cond == 7) {
-            return unop(Iop_1Uto32, binop(Iop_CmpNE64,
-                                          binop(Iop_And64, cc_dep1, cc_dep2),
-                                          mkU64(0)));
+         if (cond == 7) { /* not all bits zero */
+            return unop(Iop_1Uto32, binop(Iop_CmpNE64, val, mkU64(0)));
          }
-         if (cond == 1) {
-            return unop(Iop_1Uto32, binop(Iop_CmpEQ64,
-                                          binop(Iop_And64, cc_dep1, cc_dep2),
-                                          mkU64(mask16)));
+         if (cond == 1) { /* all bits set */
+            return unop(Iop_1Uto32, binop(Iop_CmpEQ64, val, mkU64(mask)));
          }
-         if (cond == 14) {  /* ! OVFL */
-            return unop(Iop_1Uto32, binop(Iop_CmpNE64,
-                                          binop(Iop_And64, cc_dep1, cc_dep2),
-                                          mkU64(mask16)));
+         if (cond == 14) { /* not all bits set */
+            return unop(Iop_1Uto32, binop(Iop_CmpNE64, val, mkU64(mask)));
          }
 
-         /* Find MSB in mask */
-         msb = 0x8000;
-         while (msb > mask16)
-            msb >>= 1;
+         IRExpr *masked_msb = binop(Iop_And64, val, mkU64(msb));
 
          if (cond == 2) {  /* cc == 2 */
-            IRExpr *c1, *c2;
-
-            /* (cc_dep & msb) != 0 && (cc_dep & mask16) != mask16 */
-            c1 = binop(Iop_CmpNE64,
-                       binop(Iop_And64, cc_dep1, mkU64(msb)), mkU64(0));
-            c2 = binop(Iop_CmpNE64,
-                       binop(Iop_And64, cc_dep1, cc_dep2),
-                       mkU64(mask16));
-            return binop(Iop_And32, unop(Iop_1Uto32, c1),
-                         unop(Iop_1Uto32, c2));
+            /* mixed, and leftmost bit set */
+            return unop(Iop_1Uto32,
+                        binop(Iop_And1,
+                              binop(Iop_CmpNE64, masked_msb, mkU64(0)),
+                              binop(Iop_CmpNE64, val, mkU64(mask))));
          }
 
          if (cond == 4) {  /* cc == 1 */
-            IRExpr *c1, *c2;
-
-            /* (cc_dep & msb) == 0 && (cc_dep & mask16) != 0 */
-            c1 = binop(Iop_CmpEQ64,
-                       binop(Iop_And64, cc_dep1, mkU64(msb)), mkU64(0));
-            c2 = binop(Iop_CmpNE64,
-                       binop(Iop_And64, cc_dep1, cc_dep2),
-                       mkU64(0));
-            return binop(Iop_And32, unop(Iop_1Uto32, c1),
-                         unop(Iop_1Uto32, c2));
+            /* mixed, and leftmost bit zero */
+            return unop(Iop_1Uto32,
+                        binop(Iop_And1,
+                              binop(Iop_CmpEQ64, masked_msb, mkU64(0)),
+                              binop(Iop_CmpNE64, val, mkU64(0))));
          }
 
          if (cond == 11) {  /* cc == 0,2,3 */
-            IRExpr *c1, *c2;
-
-            c1 = binop(Iop_CmpNE64,
-                       binop(Iop_And64, cc_dep1, mkU64(msb)), mkU64(0));
-            c2 = binop(Iop_CmpEQ64,
-                       binop(Iop_And64, cc_dep1, cc_dep2),
-                       mkU64(0));
-            return binop(Iop_Or32, unop(Iop_1Uto32, c1),
-                         unop(Iop_1Uto32, c2));
+            /* leftmost bit set, or all bits zero */
+            return unop(Iop_1Uto32,
+                        binop(Iop_Or1,
+                              binop(Iop_CmpNE64, masked_msb, mkU64(0)),
+                              binop(Iop_CmpEQ64, val, mkU64(0))));
          }
 
          if (cond == 3) {  /* cc == 2 || cc == 3 */
+            /* leftmost bit set, rest don't care */
             return unop(Iop_1Uto32,
-                        binop(Iop_CmpNE64,
-                              binop(Iop_And64, cc_dep1, mkU64(msb)),
-                              mkU64(0)));
+                        binop(Iop_CmpNE64, masked_msb, mkU64(0)));
          }
          if (cond == 12) { /* cc == 0 || cc == 1 */
+            /* leftmost bit zero, rest don't care */
             return unop(Iop_1Uto32,
-                        binop(Iop_CmpEQ64,
-                              binop(Iop_And64, cc_dep1, mkU64(msb)),
-                              mkU64(0)));
+                        binop(Iop_CmpEQ64, masked_msb, mkU64(0)));
          }
          if (cond == 13) { /* cc == 0 || cc == 1 || cc == 3 */
-            IRExpr *c01, *c3;
-
-            c01 = binop(Iop_CmpEQ64, binop(Iop_And64, cc_dep1, mkU64(msb)),
-                        mkU64(0));
-            c3 = binop(Iop_CmpEQ64, binop(Iop_And64, cc_dep1, cc_dep2),
-                       mkU64(mask16));
-            return binop(Iop_Or32, unop(Iop_1Uto32, c01),
-                         unop(Iop_1Uto32, c3));
+            /* leftmost bit zero, or all bits set */
+            return unop(Iop_1Uto32,
+                        binop(Iop_Or1,
+                              binop(Iop_CmpEQ64, masked_msb, mkU64(0)),
+                              binop(Iop_CmpEQ64, val, mkU64(mask))));
          }
          // fixs390: handle cond = 5,6,9,10 (the missing cases)
          // vex_printf("TUM mask = 0x%llx\n", mask16);
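The shift folding added in this hunk leans on a simple identity: testing (x >> n) under a constant mask m is the same as testing x under m << n, provided no mask bits are shifted out of the word. Spelled out (a sketch):

   #include <stdint.h>

   /* ((x >> n) & m) == 0  <=>  (x & (m << n)) == 0
      ((x >> n) & m) == m  <=>  (x & (m << n)) == (m << n)
      so the mask (and its MSB) can be shifted instead of the value,
      and the Iop_Shr64 dropped from the generated expression. */
   static int masked_all_zero(uint64_t x, uint64_t m, unsigned n)
   { return (x & (m << n)) == 0; }

   static int masked_all_set(uint64_t x, uint64_t m, unsigned n)
   { return (x & (m << n)) == (m << n); }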
@@ -2461,7 +2453,6 @@ s390x_dirtyhelper_vec_op(VexGuestS390XState *guest_state,
       [S390_VEC_OP_VPKS]  = {0xe7, 0x97},
       [S390_VEC_OP_VPKLS] = {0xe7, 0x95},
       [S390_VEC_OP_VCEQ]  = {0xe7, 0xf8},
-      [S390_VEC_OP_VTM]   = {0xe7, 0xd8},
       [S390_VEC_OP_VGFM]  = {0xe7, 0xb4},
       [S390_VEC_OP_VGFMA] = {0xe7, 0xbc},
       [S390_VEC_OP_VMAH]  = {0xe7, 0xab},
@@ -2546,12 +2537,6 @@ s390x_dirtyhelper_vec_op(VexGuestS390XState *guest_state,
    the_insn.VRR.op2 = opcodes[d->op][1];
 
    switch(d->op) {
-   case S390_VEC_OP_VTM:
-      the_insn.VRR.v1 = 2;
-      the_insn.VRR.v2 = 3;
-      the_insn.VRR.rxb = 0b1100;
-      break;
-
    case S390_VEC_OP_VPKS:
    case S390_VEC_OP_VPKLS:
    case S390_VEC_OP_VCEQ:
@@ -2659,75 +2644,6 @@ s390x_dirtyhelper_vec_op(VexGuestS390XState *guest_state,
 
 #endif
 
-/*-----------------------------------------------------------------*/
-/*--- Dirty helper for Perform Pseudorandom number instruction  ---*/
-/*-----------------------------------------------------------------*/
-
-/* Dummy helper that is needed to indicate load of parameter block.
-   We have to use it because dirty helper cannot have two memory side
-   effects.
- */
-void s390x_dirtyhelper_PPNO_sha512_load_param_block( void )
-{
-}
-
-#if defined(VGA_s390x)
-
-/* IMPORTANT!
-   We return here bit mask where only supported functions are set to one.
-   If you implement new functions don't forget the supported array.
- */
-void
-s390x_dirtyhelper_PPNO_query(VexGuestS390XState *guest_state, ULong r1, ULong r2)
-{
-   ULong supported[2] = {0x9000000000000000ULL, 0x0000000000000000ULL};
-   ULong *result = (ULong*) guest_state->guest_r1;
-
-   result[0] = supported[0];
-   result[1] = supported[1];
-}
-
-ULong
-s390x_dirtyhelper_PPNO_sha512(VexGuestS390XState *guest_state, ULong r1, ULong r2)
-{
-   ULong* op1 = (ULong*) (((ULong)(&guest_state->guest_r0)) + r1 * sizeof(ULong));
-   ULong* op2 = (ULong*) (((ULong)(&guest_state->guest_r0)) + r2 * sizeof(ULong));
-
-   register ULong reg0 asm("0") = guest_state->guest_r0;
-   register ULong reg1 asm("1") = guest_state->guest_r1;
-   register ULong reg2 asm("2") = op1[0];
-   register ULong reg3 asm("3") = op1[1];
-   register ULong reg4 asm("4") = op2[0];
-   register ULong reg5 asm("5") = op2[1];
-
-   ULong cc = 0;
-   asm volatile(".insn rre, 0xb93c0000, %%r2, %%r4\n"
-                "ipm %[cc]\n"
-                "srl %[cc], 28\n"
-                : "+d"(reg0), "+d"(reg1),
-                  "+d"(reg2), "+d"(reg3),
-                  "+d"(reg4), "+d"(reg5),
-                  [cc] "=d"(cc)
-                :
-                : "cc", "memory");
-
-   return cc;
-}
-
-#else
-
-void
-s390x_dirtyhelper_PPNO_query(VexGuestS390XState *guest_state, ULong r1, ULong r2)
-{
-}
-
-ULong
-s390x_dirtyhelper_PPNO_sha512(VexGuestS390XState *guest_state, ULong r1, ULong r2)
-{
-   return 0;
-}
-
-#endif /* VGA_s390x */
 /*---------------------------------------------------------------*/
 /*--- end                                guest_s390_helpers.c ---*/
 /*---------------------------------------------------------------*/
diff --git a/VEX/priv/guest_s390_toIR.c b/VEX/priv/guest_s390_toIR.c
index 2e0f6bb..102c6a9 100644
--- a/VEX/priv/guest_s390_toIR.c
+++ b/VEX/priv/guest_s390_toIR.c
@@ -49,7 +49,17 @@
 static UInt s390_decode_and_irgen(const UChar *, UInt, DisResult *);
 static void s390_irgen_xonc(IROp, IRTemp, IRTemp, IRTemp);
 static void s390_irgen_CLC_EX(IRTemp, IRTemp, IRTemp);
-static const HChar *s390_irgen_BIC(UChar r1, IRTemp op2addr);
+static const HChar *s390_irgen_BIC(UChar, IRTemp);
+static const HChar *s390_irgen_VPDI(UChar, UChar, UChar, UChar);
+static const HChar *s390_irgen_VFLR(UChar, UChar, UChar, UChar, UChar);
+static const HChar *s390_irgen_VFI(UChar, UChar, UChar, UChar, UChar);
+static const HChar *s390_irgen_VFPSO(UChar, UChar, UChar, UChar, UChar);
+static const HChar *s390_irgen_VCGD(UChar, UChar, UChar, UChar, UChar);
+static const HChar *s390_irgen_VCDG(UChar, UChar, UChar, UChar, UChar);
+static const HChar *s390_irgen_VCDLG(UChar, UChar, UChar, UChar, UChar);
+static const HChar *s390_irgen_VCLGD(UChar, UChar, UChar, UChar, UChar);
+static const HChar *s390_irgen_KMA(UChar, UChar, UChar);
+static const HChar *s390_irgen_KMCTR(UChar, UChar, UChar);
 
 /*------------------------------------------------------------*/
 /*--- Globals                                              ---*/
@@ -94,6 +104,11 @@ typedef enum {
 /*------------------------------------------------------------*/
 
 #define I_i(insn) ((insn) & 0xff)
+#define IE_i1(insn) (((insn) >> 4) & 0xf)
+#define IE_i2(insn) ((insn) & 0xf)
+#define MII_m1(insn) (((insn) >> 52) & 0xf)
+#define MII_i2(insn) (((insn) >> 40) & 0xfff)
+#define MII_i3(insn) (((insn) >> 16) & 0xffffff)
 #define RR_r1(insn) (((insn) >> 4) & 0xf)
 #define RR_r2(insn) ((insn) & 0xf)
 #define RI_r1(insn) (((insn) >> 20) & 0xf)
@@ -133,6 +148,10 @@ typedef enum {
 #define SI_i2(insn) (((insn) >> 16) & 0xff)
 #define SI_b1(insn) (((insn) >> 12) & 0xf)
 #define SI_d1(insn) ((insn) & 0xfff)
+#define SMI_m1(insn) (((insn) >> 52) & 0xf)
+#define SMI_b3(insn) (((insn) >> 44) & 0xf)
+#define SMI_d3(insn) (((insn) >> 32) & 0xfff)
+#define SMI_i2(insn) (((insn) >> 16) & 0xffff)
 #define RIE_r1(insn) (((insn) >> 52) & 0xf)
 #define RIE_r3(insn) (((insn) >> 48) & 0xf)
 #define RIE_i2(insn) (((insn) >> 32) & 0xffff)
@@ -141,9 +160,9 @@ typedef enum {
 #define RIE_RRUUU_i3(insn) (((insn) >> 40) & 0xff)
 #define RIE_RRUUU_i4(insn) (((insn) >> 32) & 0xff)
 #define RIE_RRUUU_i5(insn) (((insn) >> 24) & 0xff)
-#define RIEv1_r1(insn) (((insn) >> 52) & 0xf)
-#define RIEv1_i2(insn) (((insn) >> 32) & 0xffff)
-#define RIEv1_m3(insn) (((insn) >> 28) & 0xf)
+#define RIE_R0xU_r1(insn) (((insn) >> 52) & 0xf)
+#define RIE_R0xU_i2(insn) (((insn) >> 32) & 0xffff)
+#define RIE_R0xU_m3(insn) (((insn) >> 28) & 0xf)
 #define RIE_RRPU_r1(insn) (((insn) >> 52) & 0xf)
 #define RIE_RRPU_r2(insn) (((insn) >> 48) & 0xf)
 #define RIE_RRPU_i4(insn) (((insn) >> 32) & 0xffff)
@@ -204,9 +223,12 @@ typedef enum {
 #define VRX_rxb(insn) (((insn) >> 24) & 0xf)
 #define VRR_v1(insn) (((insn) >> 52) & 0xf)
 #define VRR_v2(insn) (((insn) >> 48) & 0xf)
+#define VRR_r2(insn) (((insn) >> 48) & 0xf)
 #define VRR_r3(insn) (((insn) >> 44) & 0xf)
+#define VRR_v3(insn) (((insn) >> 44) & 0xf)
 #define VRR_m5(insn) (((insn) >> 36) & 0xf)
 #define VRR_m4(insn) (((insn) >> 28) & 0xf)
+#define VRR_v4(insn) (((insn) >> 28) & 0xf)
 #define VRR_rxb(insn) (((insn) >> 24) & 0xf)
 #define VRRa_v1(insn) (((insn) >> 52) & 0xf)
 #define VRRa_v2(insn) (((insn) >> 48) & 0xf)
@@ -234,6 +256,16 @@ typedef enum {
 #define VRI_i2(insn) (((insn) >> 32) & 0xffff)
 #define VRI_m3(insn) (((insn) >> 28) & 0xf)
 #define VRI_rxb(insn) (((insn) >> 24) & 0xf)
+#define VRIb_v1(insn) (((insn) >> 52) & 0xf)
+#define VRIb_i2(insn) (((insn) >> 40) & 0xff)
+#define VRIb_i3(insn) (((insn) >> 32) & 0xff)
+#define VRIb_m4(insn) (((insn) >> 28) & 0xf)
+#define VRIb_rxb(insn) (((insn) >> 24) & 0xf)
+#define VRIc_v1(insn) (((insn) >> 52) & 0xf)
+#define VRIc_v3(insn) (((insn) >> 48) & 0xf)
+#define VRIc_i2(insn) (((insn) >> 32) & 0xffff)
+#define VRIc_m4(insn) (((insn) >> 28) & 0xf)
+#define VRIc_rxb(insn) (((insn) >> 24) & 0xf)
 #define VRId_v1(insn) (((insn) >> 52) & 0xf)
 #define VRId_v2(insn) (((insn) >> 48) & 0xf)
 #define VRId_v3(insn) (((insn) >> 44) & 0xf)
@@ -248,10 +280,17 @@ typedef enum {
 #define VRIe_rxb(insn) (((insn) >> 24) & 0xf)
 #define VRS_v1(insn) (((insn) >> 52) & 0xf)
 #define VRS_v3(insn) (((insn) >> 48) & 0xf)
+#define VRS_r3(insn) (((insn) >> 48) & 0xf)
 #define VRS_b2(insn) (((insn) >> 44) & 0xf)
 #define VRS_d2(insn) (((insn) >> 32) & 0xfff)
 #define VRS_m4(insn) (((insn) >> 28) & 0xf)
 #define VRS_rxb(insn) (((insn) >> 24) & 0xf)
+#define VRSc_r1(insn) (((insn) >> 52) & 0xf)
+#define VRSc_v3(insn) (((insn) >> 48) & 0xf)
+#define VRSc_b2(insn) (((insn) >> 44) & 0xf)
+#define VRSc_d2(insn) (((insn) >> 32) & 0xfff)
+#define VRSc_m4(insn) (((insn) >> 28) & 0xf)
+#define VRSc_rxb(insn) (((insn) >> 24) & 0xf)
 #define VRSd_v1(insn) (((insn) >> 28) & 0xf)
 #define VRSd_r3(insn) (((insn) >> 48) & 0xf)
 #define VSI_i3(insn) (((insn) >> 48) & 0xff)
@@ -259,12 +298,27 @@ typedef enum {
 #define VSI_d2(insn) (((insn) >> 32) & 0xfff)
 #define VSI_v1(insn) (((insn) >> 28) & 0xf)
 #define VSI_rxb(insn) (((insn) >> 24) & 0xf)
+#define VRV_v1(insn) (((insn) >> 52) & 0xf)
+#define VRV_x2(insn) (((insn) >> 48) & 0xf)
+#define VRV_b2(insn) (((insn) >> 44) & 0xf)
+#define VRV_d2(insn) (((insn) >> 32) & 0xfff)
+#define VRV_m3(insn) (((insn) >> 28) & 0xf)
+#define VRV_rxb(insn) (((insn) >> 24) & 0xf)
 
 
 /*------------------------------------------------------------*/
 /*--- Helpers for constructing IR.                         ---*/
 /*------------------------------------------------------------*/
 
+/* Whether or not REGNO designates a valid FPR pair. */
+#define is_valid_fpr_pair(regno)   (((regno) & 0x2) == 0)
+
+/* Whether or not REGNO designates an even-odd GPR pair. */
+#define is_valid_gpr_pair(regno)   (((regno) & 0x1) == 0)
+
+#define is_valid_rounding_mode(rm) ((rm) < 8 && (rm) != 2)
+
+
 /* Add a statement to the current irsb. */
 static __inline__ void
 stmt(IRStmt *st)
@@ -1983,11 +2037,9 @@ get_vr_b15(UInt archreg)
 static IRType
 s390_vr_get_type(const UChar m)
 {
+   vassert(m <= 4);
+
    static const IRType results[] = {Ity_I8, Ity_I16, Ity_I32, Ity_I64, Ity_V128};
-   if (m > 4) {
-      vex_printf("s390_vr_get_type: m=%x\n", m);
-      vpanic("s390_vr_get_type: reserved m value");
-   }
 
    return results[m];
 }
@@ -1996,20 +2048,10 @@ s390_vr_get_type(const UChar m)
 static IRType
 s390_vr_get_ftype(const UChar m)
 {
-   static const IRType results[] = {Ity_F32, Ity_F64, Ity_F128};
-   if (m >= 2 && m <= 4)
-      return results[m - 2];
-   return Ity_INVALID;
-}
+   vassert(m >= 2 && m <= 4);
 
-/* Determine number of elements from instruction's floating-point format
-   field */
-static UChar
-s390_vr_get_n_elem(const UChar m)
-{
-   if (m >= 2 && m <= 4)
-      return 1 << (4 - m);
-   return 0;
+   static const IRType results[] = {Ity_F32, Ity_F64, Ity_F128};
+   return results[m - 2];
 }
 
 /* Determine if Condition Code Set (CS) flag is set in m field */
@@ -2714,7 +2756,17 @@ s390_format_I(const HChar *(*irgen)(UChar i),
    const HChar *mnm = irgen(i);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(MNM, UINT), mnm, i);
+      S390_DISASM(MNM(mnm), UINT(i));
+}
+
+static void
+s390_format_IE(const HChar *(*irgen)(UChar i1, UChar i2),
+               UChar i1, UChar i2)
+{
+   const HChar *mnm = irgen(i1, i2);
+
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
+      S390_DISASM(MNM(mnm), UINT(i1), UINT(i2));
 }
 
 static void
@@ -2723,7 +2775,20 @@ s390_format_E(const HChar *(*irgen)(void))
    const HChar *mnm = irgen();
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC1(MNM), mnm);
+      S390_DISASM(MNM(mnm));
+}
+
+static void
+s390_format_MII_UPP(const HChar *(*irgen)(UChar m1, UShort i2, UShort i3),
+                    UChar m1, UShort i2, UShort i3)
+{
+   const HChar *mnm;
+
+   mnm = irgen(m1, i2, i3);
+
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
+      S390_DISASM(MNM(mnm), UINT(m1), PCREL((Int)((Short)(i2 << 4) >> 4)),
+                  PCREL((Int)(Short)i3));
 }
 
 static void
@@ -2740,7 +2805,7 @@ s390_format_RI_RU(const HChar *(*irgen)(UChar r1, UShort i2),
    const HChar *mnm = irgen(r1, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, UINT), mnm, r1, i2);
+      S390_DISASM(MNM(mnm), GPR(r1), UINT(i2));
 }
 
 static void
@@ -2750,7 +2815,7 @@ s390_format_RI_RI(const HChar *(*irgen)(UChar r1, UShort i2),
    const HChar *mnm = irgen(r1, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, INT), mnm, r1, (Int)(Short)i2);
+      S390_DISASM(MNM(mnm), GPR(r1), INT((Int)(Short)i2));
 }
 
 static void
@@ -2760,7 +2825,7 @@ s390_format_RI_RP(const HChar *(*irgen)(UChar r1, UShort i2),
    const HChar *mnm = irgen(r1, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, PCREL), mnm, r1, (Int)(Short)i2);
+      S390_DISASM(MNM(mnm), GPR(r1), PCREL((Int)(Short)i2));
 }
 
 static void
@@ -2770,7 +2835,7 @@ s390_format_RIE_RRP(const HChar *(*irgen)(UChar r1, UChar r3, UShort i2),
    const HChar *mnm = irgen(r1, r3, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, GPR, PCREL), mnm, r1, r3, (Int)(Short)i2);
+      S390_DISASM(MNM(mnm), GPR(r1), GPR(r3), PCREL((Int)(Short)i2));
 }
 
 static void
@@ -2780,7 +2845,7 @@ s390_format_RIE_RRI0(const HChar *(*irgen)(UChar r1, UChar r3, UShort i2),
    const HChar *mnm = irgen(r1, r3, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, GPR, INT), mnm, r1, r3, (Int)(Short)i2);
+      S390_DISASM(MNM(mnm), GPR(r1), GPR(r3), INT((Int)(Short)i2));
 }
 
 static void
@@ -2791,18 +2856,27 @@ s390_format_RIE_RRUUU(const HChar *(*irgen)(UChar r1, UChar r2, UChar i3,
    const HChar *mnm = irgen(r1, r2, i3, i4, i5);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC6(MNM, GPR, GPR, UINT, UINT, UINT), mnm, r1, r2, i3, i4,
-                  i5);
+      S390_DISASM(XMNM(mnm, rotate_disasm), GPR(r1), GPR(r2), MASK(i3), MASK(i4), MASK(i5));
+}
+
+static void
+s390_format_R0UU(const HChar *(*irgen)(UChar r1, UShort i2, UChar m3),
+                 UChar r1, UShort i2, UChar m3)
+{
+   const HChar *mnm = irgen(r1, i2, m3);
+
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
+      S390_DISASM(XMNM(mnm, cabt_disasm), GPR(r1), INT(i2), MASK(m3));
 }
 
 static void
-s390_format_RIEv1(const HChar *(*irgen)(UChar r1, UShort i2, UChar m3),
-                  UChar r1, UShort i2, UChar m3)
+s390_format_R0IU(const HChar *(*irgen)(UChar r1, UShort i2, UChar m3),
+                 UChar r1, UShort i2, UChar m3)
 {
    const HChar *mnm = irgen(r1, i2, m3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, UINT, UINT), mnm, r1, i2, m3);
+      S390_DISASM(XMNM(mnm, cabt_disasm), GPR(r1), INT((Int)(Short)i2), MASK(m3));
 }
 
 static void
@@ -2813,8 +2887,7 @@ s390_format_RIE_RRPU(const HChar *(*irgen)(UChar r1, UChar r2, UShort i4,
    const HChar *mnm = irgen(r1, r2, i4, m3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(XMNM, GPR, GPR, CABM, PCREL), S390_XMNM_CAB, mnm, m3, r1,
-                  r2, m3, (Int)(Short)i4);
+      S390_DISASM(XMNM(mnm, cabt_disasm), GPR(r1), GPR(r2), MASK(m3), PCREL((Int)(Short)i4));
 }
 
 static void
@@ -2825,8 +2898,7 @@ s390_format_RIE_RUPU(const HChar *(*irgen)(UChar r1, UChar m3, UShort i4,
    const HChar *mnm = irgen(r1, m3, i4, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(XMNM, GPR, UINT, CABM, PCREL), S390_XMNM_CAB, mnm, m3,
-                  r1, i2, m3, (Int)(Short)i4);
+      S390_DISASM(XMNM(mnm, cabt_disasm), GPR(r1), UINT(i2), MASK(m3), PCREL((Int)(Short)i4));
 }
 
 static void
@@ -2837,18 +2909,17 @@ s390_format_RIE_RUPI(const HChar *(*irgen)(UChar r1, UChar m3, UShort i4,
    const HChar *mnm = irgen(r1, m3, i4, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(XMNM, GPR, INT, CABM, PCREL), S390_XMNM_CAB, mnm, m3, r1,
-                  (Int)(Char)i2, m3, (Int)(Short)i4);
+      S390_DISASM(XMNM(mnm, cabt_disasm), GPR(r1), INT((Int)(Char)i2), MASK(m3), PCREL((Int)(Short)i4));
 }
 
 static void
 s390_format_RIE_RUPIX(const HChar *(*irgen)(UChar r1, UChar m3, UShort i2),
-                      UChar r1, UChar m3, UShort i2, Int xmnm_kind)
+                      UChar r1, UChar m3, UShort i2)
 {
-   irgen(r1, m3, i2);
+   const HChar *mnm = irgen(r1, m3, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(XMNM, GPR, INT), xmnm_kind, m3, r1, (Int)(Short)i2);
+      S390_DISASM(XMNM(mnm, cls_disasm), GPR(r1), INT((Int)(Short)i2), MASK(m3));
 }
 
 static void
@@ -2865,7 +2936,7 @@ s390_format_RIL_RU(const HChar *(*irgen)(UChar r1, UInt i2),
    const HChar *mnm = irgen(r1, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, UINT), mnm, r1, i2);
+      S390_DISASM(MNM(mnm), GPR(r1), UINT(i2));
 }
 
 static void
@@ -2875,7 +2946,7 @@ s390_format_RIL_RI(const HChar *(*irgen)(UChar r1, UInt i2),
    const HChar *mnm = irgen(r1, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, INT), mnm, r1, i2);
+      S390_DISASM(MNM(mnm), GPR(r1), INT(i2));
 }
 
 static void
@@ -2885,7 +2956,7 @@ s390_format_RIL_RP(const HChar *(*irgen)(UChar r1, UInt i2),
    const HChar *mnm = irgen(r1, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, PCREL), mnm, r1, i2);
+      S390_DISASM(MNM(mnm), GPR(r1), PCREL(i2));
 }
 
 static void
@@ -2895,7 +2966,7 @@ s390_format_RIL_UP(const HChar *(*irgen)(void),
    const HChar *mnm = irgen();
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, UINT, PCREL), mnm, r1, i2);
+      S390_DISASM(MNM(mnm), UINT(r1), PCREL(i2));
 }
 
 static void
@@ -2912,8 +2983,7 @@ s390_format_RIS_RURDI(const HChar *(*irgen)(UChar r1, UChar m3, UChar i2,
    mnm = irgen(r1, m3, i2, op4addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(XMNM, GPR, INT, CABM, UDXB), S390_XMNM_CAB, mnm, m3, r1,
-                  (Int)(Char)i2, m3, d4, 0, b4);
+      S390_DISASM(XMNM(mnm, cabt_disasm), GPR(r1), INT((Int)(Char)i2), MASK(m3), UDXB(d4, 0, b4));
 }
 
 static void
@@ -2930,8 +3000,7 @@ s390_format_RIS_RURDU(const HChar *(*irgen)(UChar r1, UChar m3, UChar i2,
    mnm = irgen(r1, m3, i2, op4addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(XMNM, GPR, UINT, CABM, UDXB), S390_XMNM_CAB, mnm, m3, r1,
-                  i2, m3, d4, 0, b4);
+      S390_DISASM(XMNM(mnm, cabt_disasm), GPR(r1), UINT(i2), MASK(m3), UDXB(d4, 0, b4));
 }
 
 static void
@@ -2948,7 +3017,7 @@ s390_format_RR_RR(const HChar *(*irgen)(UChar r1, UChar r2),
    const HChar *mnm = irgen(r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, GPR), mnm, r1, r2);
+      S390_DISASM(MNM(mnm), GPR(r1), GPR(r2));
 }
 
 static void
@@ -2958,7 +3027,7 @@ s390_format_RR_FF(const HChar *(*irgen)(UChar r1, UChar r2),
    const HChar *mnm = irgen(r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, FPR, FPR), mnm, r1, r2);
+      S390_DISASM(MNM(mnm), FPR(r1), FPR(r2));
 }
 
 static void
@@ -2975,7 +3044,7 @@ s390_format_RRE_RR(const HChar *(*irgen)(UChar r1, UChar r2),
    const HChar *mnm = irgen(r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, GPR), mnm, r1, r2);
+      S390_DISASM(MNM(mnm), GPR(r1), GPR(r2));
 }
 
 static void
@@ -2985,7 +3054,7 @@ s390_format_RRE_FF(const HChar *(*irgen)(UChar r1, UChar r2),
    const HChar *mnm = irgen(r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, FPR, FPR), mnm, r1, r2);
+      S390_DISASM(MNM(mnm), FPR(r1), FPR(r2));
 }
 
 static void
@@ -2995,7 +3064,7 @@ s390_format_RRE_RF(const HChar *(*irgen)(UChar, UChar),
    const HChar *mnm = irgen(r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, FPR), mnm, r1, r2);
+      S390_DISASM(MNM(mnm), GPR(r1), FPR(r2));
 }
 
 static void
@@ -3005,7 +3074,7 @@ s390_format_RRE_FR(const HChar *(*irgen)(UChar r1, UChar r2),
    const HChar *mnm = irgen(r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, FPR, GPR), mnm, r1, r2);
+      S390_DISASM(MNM(mnm), FPR(r1), GPR(r2));
 }
 
 static void
@@ -3015,7 +3084,7 @@ s390_format_RRE_R0(const HChar *(*irgen)(UChar r1),
    const HChar *mnm = irgen(r1);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(MNM, GPR), mnm, r1);
+      S390_DISASM(MNM(mnm), GPR(r1));
 }
 
 static void
@@ -3025,7 +3094,7 @@ s390_format_RRE_F0(const HChar *(*irgen)(UChar r1),
    const HChar *mnm = irgen(r1);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(MNM, FPR), mnm, r1);
+      S390_DISASM(MNM(mnm), FPR(r1));
 }
 
 static void
@@ -3035,7 +3104,7 @@ s390_format_RRF_M0RERE(const HChar *(*irgen)(UChar m3, UChar r1, UChar r2),
    const HChar *mnm = irgen(m3, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, GPR, UINT), mnm, r1, r2, m3);
+      S390_DISASM(XMNM(mnm, mask0_disasm), GPR(r1), GPR(r2), MASK(m3));
 }
 
 static void
@@ -3045,7 +3114,7 @@ s390_format_RRF_F0FF(const HChar *(*irgen)(UChar, UChar, UChar),
    const HChar *mnm = irgen(r1, r3, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, FPR, FPR, FPR), mnm, r1, r3, r2);
+      S390_DISASM(MNM(mnm), FPR(r1), FPR(r3), FPR(r2));
 }
 
 static void
@@ -3055,7 +3124,7 @@ s390_format_RRF_F0FR(const HChar *(*irgen)(UChar, UChar, UChar),
    const HChar *mnm = irgen(r3, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, FPR, FPR, GPR), mnm, r1, r3, r2);
+      S390_DISASM(MNM(mnm), FPR(r1), FPR(r3), GPR(r2));
 }
 
 static void
@@ -3066,7 +3135,18 @@ s390_format_RRF_UUFF(const HChar *(*irgen)(UChar m3, UChar m4, UChar r1,
    const HChar *mnm = irgen(m3, m4, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, FPR, UINT, FPR, UINT), mnm, r1, m3, r2, m4);
+      S390_DISASM(XMNM(mnm, fp_convf_disasm), FPR(r1), MASK(m3), FPR(r2), MASK(m4));
+}
+
+static void
+s390_format_RRF_UUFF2(const HChar *(*irgen)(UChar m3, UChar m4, UChar r1,
+                                           UChar r2),
+                     UChar m3, UChar m4, UChar r1, UChar r2)
+{
+   const HChar *mnm = irgen(m3, m4, r1, r2);
+
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
+      S390_DISASM(XMNM(mnm, fp_convt_disasm), FPR(r1), MASK(m3), FPR(r2), MASK(m4));
 }
 
 static void
@@ -3076,7 +3156,7 @@ s390_format_RRF_0UFF(const HChar *(*irgen)(UChar m4, UChar r1, UChar r2),
    const HChar *mnm = irgen(m4, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, FPR, FPR, UINT), mnm, r1, r2, m4);
+      S390_DISASM(MNM(mnm), FPR(r1), FPR(r2), UINT(m4));
 }
 
 static void
@@ -3087,7 +3167,7 @@ s390_format_RRF_UUFR(const HChar *(*irgen)(UChar m3, UChar m4, UChar r1,
    const HChar *mnm = irgen(m3, m4, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, FPR, UINT, GPR, UINT), mnm, r1, m3, r2, m4);
+      S390_DISASM(XMNM(mnm, fp_convf_disasm), FPR(r1), MASK(m3), GPR(r2), MASK(m4));
 }
 
 static void
@@ -3098,18 +3178,18 @@ s390_format_RRF_UURF(const HChar *(*irgen)(UChar m3, UChar m4, UChar r1,
    const HChar *mnm = irgen(m3, m4, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, GPR, UINT, FPR, UINT), mnm, r1, m3, r2, m4);
+      S390_DISASM(XMNM(mnm, fp_convt_disasm), GPR(r1), MASK(m3), FPR(r2), MASK(m4));
 }
 
 
 static void
 s390_format_RRF_U0RR(const HChar *(*irgen)(UChar m3, UChar r1, UChar r2),
-                     UChar m3, UChar r1, UChar r2, Int xmnm_kind)
+                     UChar m3, UChar r1, UChar r2, HChar *(*handler)(const s390_opnd *, HChar *))
 {
-   irgen(m3, r1, r2);
+   const HChar *mnm = irgen(m3, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(XMNM, GPR, GPR), xmnm_kind, m3, r1, r2);
+      S390_DISASM(XMNM(mnm, handler), GPR(r1), GPR(r2), MASK(m3));
 }
 
 static void
@@ -3120,9 +3200,9 @@ s390_format_RRFa_U0RR(const HChar *(*irgen)(UChar m3, UChar r1, UChar r2),
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE)) {
       if (m3 != 0)
-         s390_disasm(ENC4(MNM, GPR, GPR, UINT), mnm, r1, r2, m3);
+         S390_DISASM(MNM(mnm), GPR(r1), GPR(r2), UINT(m3));
       else
-         s390_disasm(ENC3(MNM, GPR, GPR), mnm, r1, r2);
+         S390_DISASM(MNM(mnm), GPR(r1), GPR(r2));
    }
 }
 
@@ -3133,7 +3213,7 @@ s390_format_RRF_F0FF2(const HChar *(*irgen)(UChar, UChar, UChar),
    const HChar *mnm = irgen(r3, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, FPR, FPR, FPR), mnm, r1, r3, r2);
+      S390_DISASM(MNM(mnm), FPR(r1), FPR(r3), FPR(r2));
 }
 
 static void
@@ -3143,7 +3223,7 @@ s390_format_RRF_FFRU(const HChar *(*irgen)(UChar, UChar, UChar, UChar),
    const HChar *mnm = irgen(r3, m4, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, FPR, FPR, GPR, UINT), mnm, r1, r3, r2, m4);
+      S390_DISASM(MNM(mnm), FPR(r1), FPR(r3), GPR(r2), UINT(m4));
 }
 
 static void
@@ -3153,7 +3233,7 @@ s390_format_RRF_FUFF(const HChar *(*irgen)(UChar, UChar, UChar, UChar),
    const HChar *mnm = irgen(r3, m4, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, FPR, FPR, FPR, UINT), mnm, r1, r3, r2, m4);
+      S390_DISASM(MNM(mnm), FPR(r1), FPR(r3), FPR(r2), UINT(m4));
 }
 
 static void
@@ -3163,7 +3243,7 @@ s390_format_RRF_FUFF2(const HChar *(*irgen)(UChar, UChar, UChar, UChar),
    const HChar *mnm = irgen(r3, m4, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, FPR, FPR, FPR, UINT), mnm, r1, r2, r3, m4);
+      S390_DISASM(XMNM(mnm, adtra_like_disasm), FPR(r1), FPR(r2), FPR(r3), MASK(m4));
 }
 
 static void
@@ -3173,7 +3253,7 @@ s390_format_RRF_RURR(const HChar *(*irgen)(UChar, UChar, UChar, UChar),
    const HChar *mnm = irgen(r3, m4, r1, r2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, GPR, GPR, GPR, UINT), mnm, r1, r3, r2, m4);
+      S390_DISASM(XMNM(mnm, cls_disasm), GPR(r1), GPR(r2), GPR(r3), MASK(m4));
 }
 
 static void
@@ -3182,8 +3262,12 @@ s390_format_RRF_R0RR2(const HChar *(*irgen)(UChar r3, UChar r1, UChar r2),
 {
    const HChar *mnm = irgen(r3, r1, r2);
 
-   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, GPR, GPR), mnm, r1, r2, r3);
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE)) {
+      if (irgen == s390_irgen_KMA || irgen == s390_irgen_KMCTR)
+         S390_DISASM(MNM(mnm), GPR(r1), GPR(r3), GPR(r2));
+      else
+         S390_DISASM(MNM(mnm), GPR(r1), GPR(r2), GPR(r3));
+   }
 }
 
 static void
@@ -3200,8 +3284,7 @@ s390_format_RRS(const HChar *(*irgen)(UChar r1, UChar r2, UChar m3,
    mnm = irgen(r1, r2, m3, op4addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(XMNM, GPR, GPR, CABM, UDXB), S390_XMNM_CAB, mnm, m3, r1,
-                  r2, m3, d4, 0, b4);
+      S390_DISASM(XMNM(mnm, cabt_disasm), GPR(r1), GPR(r2), MASK(m3), UDXB(d4, 0, b4));
 }
 
 static void
@@ -3217,7 +3300,7 @@ s390_format_RS_R0RD(const HChar *(*irgen)(UChar r1, IRTemp op2addr),
    mnm = irgen(r1, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, UDXB), mnm, r1, d2, 0, b2);
+      S390_DISASM(MNM(mnm), GPR(r1), UDXB(d2, 0, b2));
 }
 
 static void
@@ -3233,7 +3316,7 @@ s390_format_RS_RRRD(const HChar *(*irgen)(UChar r1, UChar r3, IRTemp op2addr),
    mnm = irgen(r1, r3, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, GPR, UDXB), mnm, r1, r3, d2, 0, b2);
+      S390_DISASM(MNM(mnm), GPR(r1), GPR(r3), UDXB(d2, 0, b2));
 }
 
 static void
@@ -3249,7 +3332,7 @@ s390_format_RS_RURD(const HChar *(*irgen)(UChar r1, UChar r3, IRTemp op2addr),
    mnm = irgen(r1, r3, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, UINT, UDXB), mnm, r1, r3, d2, 0, b2);
+      S390_DISASM(MNM(mnm), GPR(r1), UINT(r3), UDXB(d2, 0, b2));
 }
 
 static void
@@ -3265,7 +3348,7 @@ s390_format_RS_AARD(const HChar *(*irgen)(UChar, UChar, IRTemp),
    mnm = irgen(r1, r3, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, AR, AR, UDXB), mnm, r1, r3, d2, 0, b2);
+      S390_DISASM(MNM(mnm), AR(r1), AR(r3), UDXB(d2, 0, b2));
 }
 
 static void
@@ -3275,7 +3358,7 @@ s390_format_RSI_RRP(const HChar *(*irgen)(UChar r1, UChar r3, UShort i2),
    const HChar *mnm = irgen(r1, r3, i2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, GPR, PCREL), mnm, r1, r3, (Int)(Short)i2);
+      S390_DISASM(MNM(mnm), GPR(r1), GPR(r3), PCREL((Int)(Short)i2));
 }
 
 static void
@@ -3293,7 +3376,7 @@ s390_format_RSY_RRRD(const HChar *(*irgen)(UChar r1, UChar r3, IRTemp op2addr),
    mnm = irgen(r1, r3, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, GPR, SDXB), mnm, r1, r3, dh2, dl2, 0, b2);
+      S390_DISASM(MNM(mnm), GPR(r1), GPR(r3), SDXB(dh2, dl2, 0, b2));
 }
 
 static void
@@ -3311,12 +3394,12 @@ s390_format_RSY_AARD(const HChar *(*irgen)(UChar, UChar, IRTemp),
    mnm = irgen(r1, r3, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, AR, AR, SDXB), mnm, r1, r3, dh2, dl2, 0, b2);
+      S390_DISASM(MNM(mnm), AR(r1), AR(r3), SDXB(dh2, dl2, 0, b2));
 }
 
 static void
-s390_format_RSY_RURD(const HChar *(*irgen)(UChar r1, UChar r3, IRTemp op2addr),
-                     UChar r1, UChar r3, UChar b2, UShort dl2, UChar dh2)
+s390_format_RSY_RURD(const HChar *(*irgen)(UChar r1, UChar m3, IRTemp op2addr),
+                     UChar r1, UChar m3, UChar b2, UShort dl2, UChar dh2)
 {
    const HChar *mnm;
    IRTemp op2addr = newTemp(Ity_I64);
@@ -3326,16 +3409,33 @@ s390_format_RSY_RURD(const HChar *(*irgen)(UChar r1, UChar r3, IRTemp op2addr),
    assign(op2addr, binop(Iop_Add64, mkexpr(d2), b2 != 0 ? get_gpr_dw0(b2) :
           mkU64(0)));
 
-   mnm = irgen(r1, r3, op2addr);
+   mnm = irgen(r1, m3, op2addr);
+
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
+      S390_DISASM(MNM(mnm), GPR(r1), UINT(m3), SDXB(dh2, dl2, 0, b2));
+}
+
+static void
+s390_format_RSY_R0RD(const HChar *(*irgen)(UChar r1, UChar m3, IRTemp op2addr),
+                     UChar r1, UChar m3, UChar b2, UShort dl2, UChar dh2)
+{
+   const HChar *mnm;
+   IRTemp op2addr = newTemp(Ity_I64);
+   IRTemp d2 = newTemp(Ity_I64);
+
+   assign(d2, mkU64(((ULong)(Long)(Char)dh2 << 12) | ((ULong)dl2)));
+   assign(op2addr, binop(Iop_Add64, mkexpr(d2), b2 != 0 ? get_gpr_dw0(b2) :
+          mkU64(0)));
+
+   mnm = irgen(r1, m3, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, GPR, UINT, SDXB), mnm, r1, r3, dh2, dl2, 0, b2);
+      S390_DISASM(XMNM(mnm, cabt_disasm), GPR(r1), MASK(m3), SDXB(dh2, dl2, 0, b2));
 }
 
 static void
 s390_format_RSY_RDRM(const HChar *(*irgen)(UChar r1, IRTemp op2addr),
-                     UChar r1, UChar m3, UChar b2, UShort dl2, UChar dh2,
-                     Int xmnm_kind)
+                     UChar r1, UChar m3, UChar b2, UShort dl2, UChar dh2)
 {
    IRTemp op2addr = newTemp(Ity_I64);
    IRTemp d2 = newTemp(Ity_I64);
@@ -3346,12 +3446,12 @@ s390_format_RSY_RDRM(const HChar *(*irgen)(UChar r1, IRTemp op2addr),
    assign(op2addr, binop(Iop_Add64, mkexpr(d2), b2 != 0 ? get_gpr_dw0(b2) :
           mkU64(0)));
 
-   irgen(r1, op2addr);
+   const HChar *mnm = irgen(r1, op2addr);
 
    vassert(dis_res->whatNext == Dis_Continue);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(XMNM, GPR, SDXB), xmnm_kind, m3, r1, dh2, dl2, 0, b2);
+      S390_DISASM(XMNM(mnm, cls_disasm), GPR(r1), SDXB(dh2, dl2, 0, b2), MASK(m3));
 }
 
 static void
@@ -3382,7 +3482,7 @@ s390_format_RX_RRRD(const HChar *(*irgen)(UChar r1, IRTemp op2addr),
    mnm = irgen(r1, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, UDXB), mnm, r1, d2, x2, b2);
+      S390_DISASM(MNM(mnm), GPR(r1), UDXB(d2, x2, b2));
 }
 
 static void
@@ -3399,7 +3499,7 @@ s390_format_RX_FRRD(const HChar *(*irgen)(UChar r1, IRTemp op2addr),
    mnm = irgen(r1, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, FPR, UDXB), mnm, r1, d2, x2, b2);
+      S390_DISASM(MNM(mnm), FPR(r1), UDXB(d2, x2, b2));
 }
 
 static void
@@ -3416,7 +3516,7 @@ s390_format_RXE_FRRD(const HChar *(*irgen)(UChar r1, IRTemp op2addr),
    mnm = irgen(r1, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, FPR, UDXB), mnm, r1, d2, x2, b2);
+      S390_DISASM(MNM(mnm), FPR(r1), UDXB(d2, x2, b2));
 }
 
 static void
@@ -3433,7 +3533,7 @@ s390_format_RXE_RRRDR(const HChar *(*irgen)(UChar r1, IRTemp op2addr, UChar m3),
    mnm = irgen(r1, op2addr, m3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, UDXB), mnm, r1, d2, x2, b2);
+      S390_DISASM(MNM(mnm), GPR(r1), UDXB(d2, x2, b2), UINT(m3));
 }
 
 static void
@@ -3450,7 +3550,7 @@ s390_format_RXF_FRRDF(const HChar *(*irgen)(UChar, IRTemp, UChar),
    mnm = irgen(r3, op2addr, r1);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, FPR, FPR, UDXB), mnm, r1, r3, d2, x2, b2);
+      S390_DISASM(MNM(mnm), FPR(r1), FPR(r3), UDXB(d2, x2, b2));
 }
 
 static void
@@ -3470,9 +3570,9 @@ s390_format_RXY_RRRD(const HChar *(*irgen)(UChar r1, IRTemp op2addr),
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE)) {
       if (irgen == s390_irgen_BIC)
-         s390_disasm(ENC2(XMNM, SDXB), S390_XMNM_BIC, r1, dh2, dl2, x2, b2);
+         S390_DISASM(XMNM(mnm, bic_disasm), MASK(r1), SDXB(dh2, dl2, x2, b2));
       else
-         s390_disasm(ENC3(MNM, GPR, SDXB), mnm, r1, dh2, dl2, x2, b2);
+         S390_DISASM(MNM(mnm), GPR(r1), SDXB(dh2, dl2, x2, b2));
    }
 }
 
@@ -3492,7 +3592,7 @@ s390_format_RXY_FRRD(const HChar *(*irgen)(UChar r1, IRTemp op2addr),
    mnm = irgen(r1, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, FPR, SDXB), mnm, r1, dh2, dl2, x2, b2);
+      S390_DISASM(MNM(mnm), FPR(r1), SDXB(dh2, dl2, x2, b2));
 }
 
 static void
@@ -3511,7 +3611,7 @@ s390_format_RXY_URRD(const HChar *(*irgen)(void),
    mnm = irgen();
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, UINT, SDXB), mnm, r1, dh2, dl2, x2, b2);
+      S390_DISASM(MNM(mnm), UINT(r1), SDXB(dh2, dl2, x2, b2));
 }
 
 static void
@@ -3527,7 +3627,7 @@ s390_format_S_RD(const HChar *(*irgen)(IRTemp op2addr),
    mnm = irgen(op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(MNM, UDXB), mnm, d2, 0, b2);
+      S390_DISASM(MNM(mnm), UDXB(d2, 0, b2));
 }
 
 static void
@@ -3539,7 +3639,7 @@ s390_format_S_RD_raw(const HChar *(*irgen)(UChar b2, UShort d2),
    mnm = irgen(b2, d2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(MNM, UDXB), mnm, d2, 0, b2);
+      S390_DISASM(MNM(mnm), UDXB(d2, 0, b2));
 }
 
 static void
@@ -3555,7 +3655,7 @@ s390_format_SI_URD(const HChar *(*irgen)(UChar i2, IRTemp op1addr),
    mnm = irgen(i2, op1addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, UDXB, UINT), mnm, d1, 0, b1, i2);
+      S390_DISASM(MNM(mnm), UDXB(d1, 0, b1), UINT(i2));
 }
 
 static void
@@ -3573,7 +3673,7 @@ s390_format_SIY_URD(const HChar *(*irgen)(UChar i2, IRTemp op1addr),
    mnm = irgen(i2, op1addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, SDXB, UINT), mnm, dh1, dl1, 0, b1, i2);
+      S390_DISASM(MNM(mnm), SDXB(dh1, dl1, 0, b1), UINT(i2));
 }
 
 static void
@@ -3591,7 +3691,23 @@ s390_format_SIY_IRD(const HChar *(*irgen)(UChar i2, IRTemp op1addr),
    mnm = irgen(i2, op1addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, SDXB, INT), mnm, dh1, dl1, 0, b1, (Int)(Char)i2);
+      S390_DISASM(MNM(mnm), SDXB(dh1, dl1, 0, b1), INT((Int)(Char)i2));
+}
+
+static void
+s390_format_SMI_U0RDP(const HChar *(*irgen)(UChar m1, UShort i2, IRTemp op3addr),
+                      UChar m1, UShort i2, UChar b3, UShort d3)
+{
+   const HChar *mnm;
+   IRTemp op3addr = newTemp(Ity_I64);
+
+   assign(op3addr,
+          binop(Iop_Add64, mkU64(d3), b3 != 0 ? get_gpr_dw0(b3) : mkU64(0)));
+
+   mnm = irgen(m1, i2, op3addr);
+
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
+      S390_DISASM(MNM(mnm), UINT(m1), PCREL((Int)(Short)i2), UDXB(d3, 0, b3));
 }
 
 static void
@@ -3610,7 +3726,7 @@ s390_format_SS_L0RDRD(const HChar *(*irgen)(UChar, IRTemp, IRTemp),
    mnm = irgen(l, op1addr, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, UDLB, UDXB), mnm, d1, l, b1, d2, 0, b2);
+      S390_DISASM(MNM(mnm), UDLB(d1, l, b1), UDXB(d2, 0, b2));
 }
 
 static void
@@ -3629,7 +3745,7 @@ s390_format_SSE_RDRD(const HChar *(*irgen)(IRTemp, IRTemp),
    mnm = irgen(op1addr, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(UDXB, UDXB), mnm, d1, 0, b1, d2, 0, b2);
+      S390_DISASM(MNM(mnm), UDXB(d1, 0, b1), UDXB(d2, 0, b2));
 }
 
 static void
@@ -3645,7 +3761,7 @@ s390_format_SIL_RDI(const HChar *(*irgen)(UShort i2, IRTemp op1addr),
    mnm = irgen(i2, op1addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, UDXB, INT), mnm, d1, 0, b1, (Int)(Short)i2);
+      S390_DISASM(MNM(mnm), UDXB(d1, 0, b1), INT((Int)(Short)i2));
 }
 
 static void
@@ -3661,12 +3777,12 @@ s390_format_SIL_RDU(const HChar *(*irgen)(UShort i2, IRTemp op1addr),
    mnm = irgen(i2, op1addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, UDXB, UINT), mnm, d1, 0, b1, i2);
+      S390_DISASM(MNM(mnm), UDXB(d1, 0, b1), UINT(i2));
 }
 
 static void
 s390_format_VRX_VRRD(const HChar *(*irgen)(UChar v1, IRTemp op2addr),
-                    UChar v1, UChar x2, UChar b2, UShort d2, UChar rxb)
+                     UChar v1, UChar x2, UChar b2, UShort d2, UChar m3, UChar rxb)
 {
    const HChar *mnm;
    IRTemp op2addr = newTemp(Ity_I64);
@@ -3684,13 +3800,14 @@ s390_format_VRX_VRRD(const HChar *(*irgen)(UChar v1, IRTemp op2addr),
    mnm = irgen(v1, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, VR, UDXB), mnm, v1, d2, x2, b2);
+      S390_DISASM(XMNM(mnm, mask0_disasm), VR(v1), UDXB(d2, x2, b2), MASK(m3));
 }
 
 
 static void
 s390_format_VRX_VRRDM(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar m3),
-                    UChar v1, UChar x2, UChar b2, UShort d2, UChar m3, UChar rxb)
+                      UChar v1, UChar x2, UChar b2, UShort d2, UChar m3, UChar rxb,
+                      HChar *(*handler)(const s390_opnd *, HChar *))
 {
    const HChar *mnm;
    IRTemp op2addr = newTemp(Ity_I64);
@@ -3707,8 +3824,12 @@ s390_format_VRX_VRRDM(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar m3),
    v1  = s390_vr_getVRindex(v1, 1, rxb);
    mnm = irgen(v1, op2addr, m3);
 
-   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, VR, UDXB), mnm, v1, d2, x2, b2);
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE)) {
+      if (handler)
+         S390_DISASM(XMNM(mnm, handler), VR(v1), UDXB(d2, x2, b2), MASK(m3));
+      else
+         S390_DISASM(MNM(mnm), VR(v1), UDXB(d2, x2, b2), UINT(m3));
+   }
 }
 
 
@@ -3728,7 +3849,7 @@ s390_format_VRR_VV(const HChar *(*irgen)(UChar v1, UChar v2),
    mnm = irgen(v1, v2);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, VR, VR), mnm, v1, v2);
+      S390_DISASM(MNM(mnm), VR(v1), VR(v2));
 }
 
 
@@ -3749,7 +3870,7 @@ s390_format_VRR_VVV(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3),
    mnm = irgen(v1, v2, v3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, VR, VR, VR), mnm, v1, v2, v3);
+      S390_DISASM(MNM(mnm), VR(v1), VR(v2), VR(v3));
 }
 
 
@@ -3769,14 +3890,19 @@ s390_format_VRR_VVVM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3, UChar m
    v3  = s390_vr_getVRindex(v3, 3, rxb);
    mnm = irgen(v1, v2, v3, m4);
 
-   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, VR, VR, VR, UINT), mnm, v1, v2, v3, m4);
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE)) {
+      if (irgen == s390_irgen_VPDI)
+         S390_DISASM(MNM(mnm), VR(v1), VR(v2), VR(v3), UINT(m4));
+      else
+         S390_DISASM(XMNM(mnm, va_like_disasm), VR(v1), VR(v2), VR(v3), MASK(m4));
+   }
 }
 
 
 static void
 s390_format_VRR_VVVMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5),
-                    UChar v1, UChar v2, UChar v3, UChar m4, UChar m5, UChar rxb)
+                      UChar v1, UChar v2, UChar v3, UChar m4, UChar m5, UChar rxb,
+                      HChar *(*handler)(const s390_opnd *, HChar *))
 {
    const HChar *mnm;
 
@@ -3791,7 +3917,7 @@ s390_format_VRR_VVVMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3, UChar 
    mnm = irgen(v1, v2, v3, m4, m5);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC6(MNM, VR, VR, VR, UINT, UINT), mnm, v1, v2, v3, m4, m5);
+      S390_DISASM(XMNM(mnm, handler), VR(v1), VR(v2), VR(v3), MASK(m4), MASK(m5));
 }
 
 
@@ -3813,7 +3939,7 @@ s390_format_VRR_VVVV(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3, UChar v
    mnm = irgen(v1, v2, v3, v4);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, VR, VR, VR, VR), mnm, v1, v2, v3, v4);
+      S390_DISASM(MNM(mnm), VR(v1), VR(v2), VR(v3), VR(v4));
 }
 
 
@@ -3832,7 +3958,7 @@ s390_format_VRR_VRR(const HChar *(*irgen)(UChar v1, UChar r2, UChar r3),
    mnm = irgen(v1, r2, r3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, VR, GPR, GPR), mnm, v1, r2, r3);
+      S390_DISASM(MNM(mnm), VR(v1), GPR(r2), GPR(r3));
 }
 
 
@@ -3852,7 +3978,66 @@ s390_format_VRR_VVM(const HChar *(*irgen)(UChar v1, UChar v2, UChar m3),
    mnm = irgen(v1, v2, m3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, VR, VR, UINT), mnm, v1, v2, m3);
+      S390_DISASM(XMNM(mnm, va_like_disasm), VR(v1), VR(v2), MASK(m3));
+}
+
+
+static void
+s390_format_VRI_V0U(const HChar *(*irgen)(UChar v1, UShort i2),
+                    UChar v1, UShort i2, UChar rxb,
+                    HChar *(*handler)(const s390_opnd *, HChar *))
+{
+   const HChar *mnm;
+
+   if (! s390_host_has_vx) {
+      emulation_failure(EmFail_S390X_vx);
+      return;
+   }
+
+   v1  = s390_vr_getVRindex(v1, 1, rxb);
+   mnm = irgen(v1, i2);
+
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
+      S390_DISASM(XMNM(mnm, handler), VR(v1), UINT(i2));
+}
+
+
+static void
+s390_format_VRI_V0UUU(const HChar *(*irgen)(UChar v1, UChar i2, UChar i3,
+                                            UChar m4),
+                      UChar v1, UChar i2, UChar i3, UChar m4, UChar rxb)
+{
+   const HChar *mnm;
+
+   if (! s390_host_has_vx) {
+      emulation_failure(EmFail_S390X_vx);
+      return;
+   }
+
+   v1  = s390_vr_getVRindex(v1, 1, rxb);
+   mnm = irgen(v1, i2, i3, m4);
+
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
+      S390_DISASM(XMNM(mnm, va_like_disasm), VR(v1), UINT(i2), UINT(i3), MASK(m4));
+}
+
+
+static void
+s390_format_VRI_V0IU(const HChar *(*irgen)(UChar v1, UShort i2, UChar m3),
+                     UChar v1, UShort i2, UChar m3, UChar rxb)
+{
+   const HChar *mnm;
+
+   if (! s390_host_has_vx) {
+      emulation_failure(EmFail_S390X_vx);
+      return;
+   }
+
+   v1  = s390_vr_getVRindex(v1, 1, rxb);
+   mnm = irgen(v1, i2, m3);
+
+   if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
+      S390_DISASM(XMNM(mnm, va_like_disasm), VR(v1), INT((Short)i2), MASK(m3));
 }
 
 
@@ -3871,7 +4056,7 @@ s390_format_VRI_VIM(const HChar *(*irgen)(UChar v1, UShort i2, UChar m3),
    mnm = irgen(v1, i2, m3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, VR, UINT, UINT), mnm, v1, i2, m3);
+      S390_DISASM(MNM(mnm), VR(v1), INT((Short)i2), UINT(m3));
 }
 
 
@@ -3891,7 +4076,7 @@ s390_format_VRI_VVIM(const HChar *(*irgen)(UChar v1, UChar v3, UShort i2, UChar 
    mnm = irgen(v1, v3, i2, m4);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, VR, VR, UINT, UINT), mnm, v1, v3, i2, m4);
+      S390_DISASM(XMNM(mnm, va_like_disasm), VR(v1), VR(v3), UINT(i2), MASK(m4));
 }
 
 static void
@@ -3912,7 +4097,7 @@ s390_format_VRI_VVIMM(const HChar *(*irgen)(UChar v1, UChar v2, UShort i3,
    mnm = irgen(v1, v2, i3, m4, m5);
 
    if (vex_traceflags & VEX_TRACE_FE)
-      s390_disasm(ENC6(MNM, VR, VR, UINT, UINT, UINT), mnm, v1, v2, i3, m4, m5);
+      S390_DISASM(XMNM(mnm, vfmix_like_disasm), VR(v1), VR(v2), UINT(i3), MASK(m4), MASK(m5));
 }
 
 static void
@@ -3935,7 +4120,7 @@ s390_format_VRS_RRDVM(const HChar *(*irgen)(UChar r1, IRTemp op2addr, UChar v3,
    mnm = irgen(r1, op2addr, v3, m4);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, GPR, UDXB, VR, UINT), mnm, r1, d2, 0, b2, v3, m4);
+      S390_DISASM(XMNM(mnm, va_like_disasm), GPR(r1), VR(v3), UDXB(d2, 0, b2), MASK(m4));
 }
 
 static void
@@ -3957,7 +4142,7 @@ s390_format_VRS_RRDV(const HChar *(*irgen)(UChar v1, UChar r3, IRTemp op2addr),
    mnm = irgen(v1, r3, op2addr);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, VR, GPR, UDXB), mnm, v1, r3, d2, 0, b2);
+      S390_DISASM(MNM(mnm), VR(v1), GPR(r3), UDXB(d2, 0, b2));
 }
 
 
@@ -3982,13 +4167,13 @@ s390_format_VRS_VRDVM(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar v3,
    mnm = irgen(v1, op2addr, v3, m4);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, VR, UDXB, VR, UINT), mnm, v1, d2, 0, b2, v3, m4);
+      S390_DISASM(XMNM(mnm, va_like_disasm), VR(v1), VR(v3), UDXB(d2, 0, b2), MASK(m4));
 }
 
 
 static void
 s390_format_VRS_VRDV(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar v3),
-                     UChar v1, UChar b2, UShort d2, UChar v3, UChar rxb)
+                     UChar v1, UChar b2, UShort d2, UChar v3, UChar m4, UChar rxb)
 {
    const HChar *mnm;
    IRTemp op2addr = newTemp(Ity_I64);
@@ -4006,7 +4191,7 @@ s390_format_VRS_VRDV(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar v3),
    mnm = irgen(v1, op2addr, v3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, VR, UDXB, VR), mnm, v1, d2, 0, b2, v3);
+      S390_DISASM(XMNM(mnm, mask0_disasm), VR(v1), VR(v3), UDXB(d2, 0, b2), MASK(m4));
 }
 
 
@@ -4030,7 +4215,7 @@ s390_format_VRS_VRRDM(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar r3,
    mnm = irgen(v1, op2addr, r3, m4);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, VR, GPR, UDXB, UINT), mnm, v1, r3, d2, 0, b2, m4);
+      S390_DISASM(XMNM(mnm, va_like_disasm), VR(v1), GPR(r3), UDXB(d2, 0, b2), MASK(m4));
 }
 
 
@@ -4053,7 +4238,7 @@ s390_format_VRS_VRRD(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar r3),
    mnm = irgen(v1, op2addr, r3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, VR, GPR, UDXB), mnm, v1, r3, d2, 0, b2);
+      S390_DISASM(MNM(mnm), VR(v1), GPR(r3), UDXB(d2, 0, b2));
 }
 
 
@@ -4087,15 +4272,16 @@ s390_format_VRV_VVRDMT(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar m3)
    mnm = irgen(v1, op2addr, m3);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC4(MNM, VR, UDVB, UINT), mnm, v1, d2, v2, b2, m3);
+      S390_DISASM(MNM(mnm), VR(v1), UDVB(d2, v2, b2), UINT(m3));
 }
 
 
 static void
 s390_format_VRR_VVVVMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3,
-                                              UChar v4, UChar m5, UChar m6),
-                        UChar v1, UChar v2, UChar v3, UChar v4, UChar m5,
-                        UChar m6, UChar rxb)
+                                             UChar v4, UChar m5, UChar m6),
+                       UChar v1, UChar v2, UChar v3, UChar v4, UChar m5,
+                       UChar m6, UChar rxb,
+                       HChar *(*handler)(const s390_opnd *, HChar *))
 {
    const HChar *mnm;
 
@@ -4111,8 +4297,7 @@ s390_format_VRR_VVVVMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3,
    mnm = irgen(v1, v2, v3, v4, m5, m6);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC7(MNM, VR, VR, VR, VR, UINT, UINT),
-                  mnm, v1, v2, v3, v4, m5, m6);
+      S390_DISASM(XMNM(mnm, handler), VR(v1), VR(v2), VR(v3), VR(v4), MASK(m5), MASK(m6));
 }
 
 
@@ -4133,7 +4318,7 @@ s390_format_VRR_VVMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar m3,
    mnm = irgen(v1, v2, m3, m5);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, VR, VR, UINT, UINT), mnm, v1, v2, m3, m5);
+      S390_DISASM(XMNM(mnm, vch_like_disasm), VR(v1), VR(v2), MASK(m3), MASK(m5));
 }
 
 
@@ -4156,7 +4341,7 @@ s390_format_VRId_VVVIM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3,
    mnm = irgen(v1, v2, v3, i4, m5);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC6(MNM, VR, VR, VR, UINT, UINT), mnm, v1, v2, v3, i4, m5);
+      S390_DISASM(XMNM(mnm, va_like_disasm), VR(v1), VR(v2), VR(v3), UINT(i4), MASK(m5));
 }
 
 
@@ -4178,7 +4363,7 @@ s390_format_VRId_VVVI(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3,
    mnm = irgen(v1, v2, v3, i4);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC5(MNM, VR, VR, VR, UINT), mnm, v1, v2, v3, i4);
+      S390_DISASM(MNM(mnm), VR(v1), VR(v2), VR(v3), UINT(i4));
 }
 
 
@@ -4202,7 +4387,7 @@ s390_format_VRRd_VVVVM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3,
    mnm = irgen(v1, v2, v3, v4, m5);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC6(MNM, VR, VR, VR, VR, UINT), mnm, v1, v2, v3, v4, m5);
+      S390_DISASM(XMNM(mnm, va_like_disasm), VR(v1), VR(v2), VR(v3), VR(v4), MASK(m5));
 }
 
 
@@ -4223,15 +4408,31 @@ s390_format_VRRa_VVMMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar m3,
    v2 = s390_vr_getVRindex(v2, 2, rxb);
    mnm = irgen(v1, v2, m3, m4, m5);
 
-   if (vex_traceflags & VEX_TRACE_FE)
-      s390_disasm(ENC6(MNM, VR, VR, UINT, UINT, UINT), mnm, v1, v2, m3, m4, m5);
+   if (vex_traceflags & VEX_TRACE_FE) {
+      if (irgen == s390_irgen_VFLR)
+         S390_DISASM(XMNM(mnm, vflr_disasm), VR(v1), VR(v2), MASK(m3), MASK(m4), UINT(m5));
+      else if (irgen == s390_irgen_VFI)
+         S390_DISASM(XMNM(mnm, vfi_disasm), VR(v1), VR(v2), MASK(m3), MASK(m4), UINT(m5));
+      else if (irgen == s390_irgen_VFPSO)
+         S390_DISASM(XMNM(mnm, vfpso_disasm), VR(v1), VR(v2), MASK(m3), MASK(m4), MASK(m5));
+      else if (irgen == s390_irgen_VCGD)
+         S390_DISASM(XMNM(mnm, vcgd_disasm), VR(v1), VR(v2), MASK(m3), MASK(m4), MASK(m5));
+      else if (irgen == s390_irgen_VCDG)
+         S390_DISASM(XMNM(mnm, vcdg_disasm), VR(v1), VR(v2), MASK(m3), MASK(m4), MASK(m5));
+      else if (irgen == s390_irgen_VCLGD)
+         S390_DISASM(XMNM(mnm, vclgd_disasm), VR(v1), VR(v2), MASK(m3), MASK(m4), MASK(m5));
+      else if (irgen == s390_irgen_VCDLG)
+         S390_DISASM(XMNM(mnm, vcgld_disasm), VR(v1), VR(v2), MASK(m3), MASK(m4), MASK(m5));
+      else
+         S390_DISASM(MNM(mnm), VR(v1), VR(v2), UINT(m3), UINT(m4), UINT(m5));
+   }
 }
 
 static void
 s390_format_VRRa_VVVMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3,
                                              UChar m4, UChar m5),
                        UChar v1, UChar v2, UChar v3, UChar m4, UChar m5,
-                       UChar rxb)
+                       UChar rxb, HChar *(*handler)(const s390_opnd *, HChar *))
 {
    const HChar *mnm;
 
@@ -4245,14 +4446,19 @@ s390_format_VRRa_VVVMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3,
    v3 = s390_vr_getVRindex(v3, 3, rxb);
    mnm = irgen(v1, v2, v3, m4, m5);
 
-   if (vex_traceflags & VEX_TRACE_FE)
-      s390_disasm(ENC6(MNM, VR, VR, VR, UINT, UINT), mnm, v1, v2, v3, m4, m5);
+   if (vex_traceflags & VEX_TRACE_FE) {
+      if (handler)
+         S390_DISASM(XMNM(mnm, handler), VR(v1), VR(v2), VR(v3), MASK(m4), MASK(m5));
+      else
+         S390_DISASM(MNM(mnm), VR(v1), VR(v2), VR(v3), UINT(m4), UINT(m5));
+   }
 }
 
 static void
 s390_format_VRRa_VVMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar m3,
                                             UChar m4),
-                       UChar v1, UChar v2, UChar m3, UChar m4, UChar rxb)
+                      UChar v1, UChar v2, UChar m3, UChar m4, UChar rxb,
+                      HChar *(*handler)(const s390_opnd *, HChar *))
 {
    const HChar *mnm;
 
@@ -4265,8 +4471,12 @@ s390_format_VRRa_VVMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar m3,
    v2 = s390_vr_getVRindex(v2, 2, rxb);
    mnm = irgen(v1, v2, m3, m4);
 
-   if (vex_traceflags & VEX_TRACE_FE)
-      s390_disasm(ENC5(MNM, VR, VR, UINT, UINT), mnm, v1, v2, m3, m4);
+   if (vex_traceflags & VEX_TRACE_FE) {
+      if (handler)
+         S390_DISASM(XMNM(mnm, handler), VR(v1), VR(v2), MASK(m3), MASK(m4));
+      else
+         S390_DISASM(MNM(mnm), VR(v1), VR(v2), UINT(m3), UINT(m4));
+   }
 }
 
 static void
@@ -4288,13 +4498,35 @@ s390_format_VRRa_VVVMMM(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3,
    mnm = irgen(v1, v2, v3, m4, m5, m6);
 
    if (vex_traceflags & VEX_TRACE_FE)
-      s390_disasm(ENC6(MNM, VR, VR, VR, UINT, UINT),
-                  mnm, v1, v2, v3, m4, m5, m6);
+      S390_DISASM(XMNM(mnm, vfce_like_disasm), VR(v1), VR(v2), VR(v3), MASK(m4), MASK(m5), MASK(m6));
+}
+
+
+static void
+s390_format_VRRa_VVVMMM2(const HChar *(*irgen)(UChar v1, UChar v2, UChar v3,
+                                               UChar m4, UChar m5, UChar m6),
+                         UChar v1, UChar v2, UChar v3, UChar m4, UChar m5,
+                         UChar m6, UChar rxb)
+{
+   const HChar *mnm;
+
+   if (!s390_host_has_vx) {
+      emulation_failure(EmFail_S390X_vx);
+      return;
+   }
+
+   v1 = s390_vr_getVRindex(v1, 1, rxb);
+   v2 = s390_vr_getVRindex(v2, 2, rxb);
+   v3 = s390_vr_getVRindex(v3, 3, rxb);
+   mnm = irgen(v1, v2, v3, m4, m5, m6);
+
+   if (vex_traceflags & VEX_TRACE_FE)
+      S390_DISASM(XMNM(mnm, vfmix_like_disasm), VR(v1), VR(v2), VR(v3), MASK(m4), MASK(m5), UINT(m6));
 }
 
 static void
 s390_format_VSI_URDV(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar i3),
-                     UChar v1, UChar b2, UChar d2, UChar i3, UChar rxb)
+                     UChar v1, UChar b2, UShort d2, UChar i3, UChar rxb)
 {
    const HChar *mnm;
    IRTemp op2addr = newTemp(Ity_I64);
@@ -4312,7 +4544,7 @@ s390_format_VSI_URDV(const HChar *(*irgen)(UChar v1, IRTemp op2addr, UChar i3),
    mnm = irgen(v1, op2addr, i3);
 
    if (vex_traceflags & VEX_TRACE_FE)
-      s390_disasm(ENC4(MNM, VR, UDXB, UINT), mnm, v1, d2, 0, b2, i3);
+      S390_DISASM(MNM(mnm), VR(v1), UDXB(d2, 0, b2), UINT(i3));
 }
 
 /*------------------------------------------------------------*/
@@ -5404,47 +5636,47 @@ s390_irgen_BAS(UChar r1, IRTemp op2addr)
 }
 
 static const HChar *
-s390_irgen_BCR(UChar r1, UChar r2)
+s390_irgen_BCR(UChar m1, UChar r2)
 {
    IRTemp cond = newTemp(Ity_I32);
 
-   if (r2 == 0 && (r1 >= 14)) {    /* serialization */
+   if (r2 == 0 && (m1 >= 14)) {    /* serialization */
       stmt(IRStmt_MBE(Imbe_Fence));
    }
 
-   if ((r2 == 0) || (r1 == 0)) {
+   if ((r2 == 0) || (m1 == 0)) {
    } else {
-      if (r1 == 15) {
+      if (m1 == 15) {
          return_from_function(get_gpr_dw0(r2));
       } else {
-         assign(cond, s390_call_calculate_cond(r1));
+         assign(cond, s390_call_calculate_cond(m1));
          if_condition_goto_computed(binop(Iop_CmpNE32, mkexpr(cond), mkU32(0)),
                                     get_gpr_dw0(r2));
       }
    }
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(XMNM, GPR), S390_XMNM_BCR, r1, r2);
+      S390_DISASM(XMNM("bcr", bcr_disasm), MASK(m1), GPR(r2));
 
    return "bcr";
 }
 
 static const HChar *
-s390_irgen_BC(UChar r1, UChar x2, UChar b2, UShort d2, IRTemp op2addr)
+s390_irgen_BC(UChar m1, UChar x2, UChar b2, UShort d2, IRTemp op2addr)
 {
    IRTemp cond = newTemp(Ity_I32);
 
-   if (r1 == 0) {
+   if (m1 == 0) {
    } else {
-      if (r1 == 15) {
+      if (m1 == 15) {
          always_goto(mkexpr(op2addr));
       } else {
-         assign(cond, s390_call_calculate_cond(r1));
+         assign(cond, s390_call_calculate_cond(m1));
          if_condition_goto_computed(binop(Iop_CmpNE32, mkexpr(cond), mkU32(0)),
                                     mkexpr(op2addr));
       }
    }
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(XMNM, UDXB), S390_XMNM_BC, r1, d2, x2, b2);
+      S390_DISASM(XMNM("bc", bc_disasm), MASK(m1), UDXB(d2, x2, b2));
 
    return "bc";
 }
@@ -5582,44 +5814,44 @@ s390_irgen_BRASL(UChar r1, UInt i2)
 }
 
 static const HChar *
-s390_irgen_BRC(UChar r1, UShort i2)
+s390_irgen_BRC(UChar m1, UShort i2)
 {
    IRTemp cond = newTemp(Ity_I32);
 
-   if (r1 == 0) {
+   if (m1 == 0) {
    } else {
-      if (r1 == 15) {
+      if (m1 == 15) {
          always_goto_and_chase(addr_relative(i2));
       } else {
-         assign(cond, s390_call_calculate_cond(r1));
+         assign(cond, s390_call_calculate_cond(m1));
          if_condition_goto(binop(Iop_CmpNE32, mkexpr(cond), mkU32(0)),
                            addr_relative(i2));
 
       }
    }
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(XMNM, PCREL), S390_XMNM_BRC, r1, (Int)(Short)i2);
+      S390_DISASM(XMNM("brc", brc_disasm), MASK(m1), PCREL((Int)(Short)i2));
 
    return "brc";
 }
 
 static const HChar *
-s390_irgen_BRCL(UChar r1, UInt i2)
+s390_irgen_BRCL(UChar m1, UInt i2)
 {
    IRTemp cond = newTemp(Ity_I32);
 
-   if (r1 == 0) {
+   if (m1 == 0) {
    } else {
-      if (r1 == 15) {
+      if (m1 == 15) {
          always_goto_and_chase(addr_rel_long(i2));
       } else {
-         assign(cond, s390_call_calculate_cond(r1));
+         assign(cond, s390_call_calculate_cond(m1));
          if_condition_goto(binop(Iop_CmpNE32, mkexpr(cond), mkU32(0)),
                            addr_rel_long(i2));
       }
    }
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC2(XMNM, PCREL), S390_XMNM_BRCL, r1, i2);
+      S390_DISASM(XMNM("brcl", brcl_disasm), MASK(m1), PCREL(i2));
 
    return "brcl";
 }
@@ -7150,7 +7382,7 @@ s390_irgen_CPYA(UChar r1, UChar r2)
 {
    put_ar_w0(r1, get_ar_w0(r2));
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, AR, AR), "cpya", r1, r2);
+      S390_DISASM(MNM("cpya"), AR(r1), AR(r2));
 
    return "cpya";
 }
@@ -7336,7 +7568,7 @@ s390_irgen_EAR(UChar r1, UChar r2)
 {
    put_gpr_w1(r1, get_ar_w0(r2));
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, GPR, AR), "ear", r1, r2);
+      S390_DISASM(MNM("ear"), GPR(r1), AR(r2));
 
    return "ear";
 }
@@ -7638,7 +7870,7 @@ s390_irgen_LARL(UChar r1, UInt i2)
    return "larl";
 }
 
-/* The IR representation of LAA and friends is an approximation of what 
+/* The IR representation of LAA and friends is an approximation of what
    happens natively. Essentially a loop containing a compare-and-swap is
    constructed which will iterate until the CAS succeeds. As a consequence,
    instrumenters may see more memory accesses than happen natively. See also
@@ -8319,12 +8551,12 @@ s390_irgen_LNGR(UChar r1, UChar r2)
 }
 
 static const HChar *
-s390_irgen_LNGFR(UChar r1, UChar r2 __attribute__((unused)))
+s390_irgen_LNGFR(UChar r1, UChar r2)
 {
    IRTemp op2 = newTemp(Ity_I64);
    IRTemp result = newTemp(Ity_I64);
 
-   assign(op2, unop(Iop_32Sto64, get_gpr_w1(r1)));
+   assign(op2, unop(Iop_32Sto64, get_gpr_w1(r2)));
    assign(result, mkite(binop(Iop_CmpLE64S, mkexpr(op2), mkU64(0)), mkexpr(op2),
           binop(Iop_Sub64, mkU64(0), mkexpr(op2))));
    put_gpr_dw0(r1, mkexpr(result));
@@ -8372,6 +8604,8 @@ s390_irgen_LOCG(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_LPQ(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("lpq", is_valid_gpr_pair(r1));
+
    put_gpr_dw0(r1, load(Ity_I64, mkexpr(op2addr)));
    put_gpr_dw0(r1 + 1, load(Ity_I64, binop(Iop_Add64, mkexpr(op2addr), mkU64(8))
                ));
@@ -8573,6 +8807,8 @@ s390_irgen_MVIY(UChar i2, IRTemp op1addr)
 static const HChar *
 s390_irgen_MR(UChar r1, UChar r2)
 {
+   s390_insn_assert("mr", is_valid_gpr_pair(r1));
+
    IRTemp op1 = newTemp(Ity_I32);
    IRTemp op2 = newTemp(Ity_I32);
    IRTemp result = newTemp(Ity_I64);
@@ -8589,6 +8825,8 @@ s390_irgen_MR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_M(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("m", is_valid_gpr_pair(r1));
+
    IRTemp op1 = newTemp(Ity_I32);
    IRTemp op2 = newTemp(Ity_I32);
    IRTemp result = newTemp(Ity_I64);
@@ -8605,6 +8843,8 @@ s390_irgen_M(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_MFY(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("mfy", is_valid_gpr_pair(r1));
+
    IRTemp op1 = newTemp(Ity_I32);
    IRTemp op2 = newTemp(Ity_I32);
    IRTemp result = newTemp(Ity_I64);
@@ -8621,6 +8861,8 @@ s390_irgen_MFY(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_MG(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("mg", is_valid_gpr_pair(r1));
+
    IRTemp op1 = newTemp(Ity_I64);
    IRTemp op2 = newTemp(Ity_I64);
    IRTemp result = newTemp(Ity_I128);
@@ -8653,6 +8895,8 @@ s390_irgen_MGH(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_MGRK(UChar r3, UChar r1, UChar r2)
 {
+   s390_insn_assert("mgrk", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I64);
    IRTemp op3 = newTemp(Ity_I64);
    IRTemp result = newTemp(Ity_I128);
@@ -8733,6 +8977,8 @@ s390_irgen_MGHI(UChar r1, UShort i2)
 static const HChar *
 s390_irgen_MLR(UChar r1, UChar r2)
 {
+   s390_insn_assert("mlr", is_valid_gpr_pair(r1));
+
    IRTemp op1 = newTemp(Ity_I32);
    IRTemp op2 = newTemp(Ity_I32);
    IRTemp result = newTemp(Ity_I64);
@@ -8749,6 +8995,8 @@ s390_irgen_MLR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_MLGR(UChar r1, UChar r2)
 {
+   s390_insn_assert("mlgr", is_valid_gpr_pair(r1));
+
    IRTemp op1 = newTemp(Ity_I64);
    IRTemp op2 = newTemp(Ity_I64);
    IRTemp result = newTemp(Ity_I128);
@@ -8765,6 +9013,8 @@ s390_irgen_MLGR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_ML(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("ml", is_valid_gpr_pair(r1));
+
    IRTemp op1 = newTemp(Ity_I32);
    IRTemp op2 = newTemp(Ity_I32);
    IRTemp result = newTemp(Ity_I64);
@@ -8781,6 +9031,8 @@ s390_irgen_ML(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_MLG(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("mlg", is_valid_gpr_pair(r1));
+
    IRTemp op1 = newTemp(Ity_I64);
    IRTemp op2 = newTemp(Ity_I64);
    IRTemp result = newTemp(Ity_I128);
@@ -9312,6 +9564,11 @@ s390_call_pfpo_helper(IRExpr *gr0)
 static const HChar *
 s390_irgen_PFPO(void)
 {
+   if (! s390_host_has_pfpo) {
+      emulation_failure(EmFail_S390X_pfpo);
+      return "pfpo";
+   }
+
    IRTemp gr0 = newTemp(Ity_I32);     /* word 1 [32:63] of GR 0 */
    IRTemp test_bit = newTemp(Ity_I32); /* bit 32 of GR 0 - test validity */
    IRTemp fn = newTemp(Ity_I32);       /* [33:55] of GR 0 - function code */
@@ -9354,11 +9611,6 @@ s390_irgen_PFPO(void)
    IRTemp dst18 = newTemp(Ity_F128);
    IRExpr *irrm;
 
-   if (! s390_host_has_pfpo) {
-      emulation_failure(EmFail_S390X_pfpo);
-      goto done;
-   }
-
    assign(gr0, get_gpr_w1(0));
    /* get function code */
    assign(fn, binop(Iop_And32, binop(Iop_Shr32, mkexpr(gr0), mkU8(8)),
@@ -9536,7 +9788,6 @@ s390_irgen_PFPO(void)
    s390_cc_thunk_put1d128Z(S390_CC_OP_PFPO_128, src18, gr0);
    next_insn_if(binop(Iop_CmpEQ32, mkexpr(fn), mkU32(S390_PFPO_D128_TO_F128)));
 
- done:
    return "pfpo";
 }
 
@@ -9807,7 +10058,7 @@ s390_irgen_SAR(UChar r1, UChar r2)
 {
    put_ar_w0(r1, get_gpr_w1(r2));
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, AR, GPR), "sar", r1, r2);
+      S390_DISASM(MNM("sar"), AR(r1), GPR(r2));
 
    return "sar";
 }
@@ -9815,6 +10066,8 @@ s390_irgen_SAR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_SLDA(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("slda", is_valid_gpr_pair(r1));
+
    IRTemp p1 = newTemp(Ity_I64);
    IRTemp p2 = newTemp(Ity_I64);
    IRTemp op = newTemp(Ity_I64);
@@ -9841,6 +10094,8 @@ s390_irgen_SLDA(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_SLDL(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("sldl", is_valid_gpr_pair(r1));
+
    IRTemp p1 = newTemp(Ity_I64);
    IRTemp p2 = newTemp(Ity_I64);
    IRTemp result = newTemp(Ity_I64);
@@ -9952,6 +10207,8 @@ s390_irgen_SLLG(UChar r1, UChar r3, IRTemp op2addr)
 static const HChar *
 s390_irgen_SRDA(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("srda", is_valid_gpr_pair(r1));
+
    IRTemp p1 = newTemp(Ity_I64);
    IRTemp p2 = newTemp(Ity_I64);
    IRTemp result = newTemp(Ity_I64);
@@ -9971,6 +10228,8 @@ s390_irgen_SRDA(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_SRDL(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("srdl", is_valid_gpr_pair(r1));
+
    IRTemp p1 = newTemp(Ity_I64);
    IRTemp p2 = newTemp(Ity_I64);
    IRTemp result = newTemp(Ity_I64);
@@ -10273,6 +10532,8 @@ s390_irgen_STOCG(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_STPQ(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("stpq", is_valid_gpr_pair(r1));
+
    store(mkexpr(op2addr), get_gpr_dw0(r1));
    store(binop(Iop_Add64, mkexpr(op2addr), mkU64(8)), get_gpr_dw0(r1 + 1));
 
@@ -10507,14 +10768,14 @@ s390_irgen_SHY(UChar r1, IRTemp op2addr)
 }
 
 static const HChar *
-s390_irgen_SHHHR(UChar r3 __attribute__((unused)), UChar r1, UChar r2)
+s390_irgen_SHHHR(UChar r3, UChar r1, UChar r2)
 {
    IRTemp op2 = newTemp(Ity_I32);
    IRTemp op3 = newTemp(Ity_I32);
    IRTemp result = newTemp(Ity_I32);
 
-   assign(op2, get_gpr_w0(r1));
-   assign(op3, get_gpr_w0(r2));
+   assign(op2, get_gpr_w0(r2));
+   assign(op3, get_gpr_w0(r3));
    assign(result, binop(Iop_Sub32, mkexpr(op2), mkexpr(op3)));
    s390_cc_thunk_putSS(S390_CC_OP_SIGNED_SUB_32, op2, op3);
    put_gpr_w0(r1, mkexpr(result));
@@ -10523,14 +10784,14 @@ s390_irgen_SHHHR(UChar r3 __attribute__((unused)), UChar r1, UChar r2)
 }
 
 static const HChar *
-s390_irgen_SHHLR(UChar r3 __attribute__((unused)), UChar r1, UChar r2)
+s390_irgen_SHHLR(UChar r3, UChar r1, UChar r2)
 {
    IRTemp op2 = newTemp(Ity_I32);
    IRTemp op3 = newTemp(Ity_I32);
    IRTemp result = newTemp(Ity_I32);
 
-   assign(op2, get_gpr_w0(r1));
-   assign(op3, get_gpr_w1(r2));
+   assign(op2, get_gpr_w0(r2));
+   assign(op3, get_gpr_w1(r3));
    assign(result, binop(Iop_Sub32, mkexpr(op2), mkexpr(op3)));
    s390_cc_thunk_putSS(S390_CC_OP_SIGNED_SUB_32, op2, op3);
    put_gpr_w0(r1, mkexpr(result));
@@ -10717,14 +10978,14 @@ s390_irgen_SLGFI(UChar r1, UInt i2)
 }
 
 static const HChar *
-s390_irgen_SLHHHR(UChar r3 __attribute__((unused)), UChar r1, UChar r2)
+s390_irgen_SLHHHR(UChar r3, UChar r1, UChar r2)
 {
    IRTemp op2 = newTemp(Ity_I32);
    IRTemp op3 = newTemp(Ity_I32);
    IRTemp result = newTemp(Ity_I32);
 
-   assign(op2, get_gpr_w0(r1));
-   assign(op3, get_gpr_w0(r2));
+   assign(op2, get_gpr_w0(r2));
+   assign(op3, get_gpr_w0(r3));
    assign(result, binop(Iop_Sub32, mkexpr(op2), mkexpr(op3)));
    s390_cc_thunk_putZZ(S390_CC_OP_UNSIGNED_SUB_32, op2, op3);
    put_gpr_w0(r1, mkexpr(result));
@@ -10733,14 +10994,14 @@ s390_irgen_SLHHHR(UChar r3 __attribute__((unused)), UChar r1, UChar r2)
 }
 
 static const HChar *
-s390_irgen_SLHHLR(UChar r3 __attribute__((unused)), UChar r1, UChar r2)
+s390_irgen_SLHHLR(UChar r3, UChar r1, UChar r2)
 {
    IRTemp op2 = newTemp(Ity_I32);
    IRTemp op3 = newTemp(Ity_I32);
    IRTemp result = newTemp(Ity_I32);
 
-   assign(op2, get_gpr_w0(r1));
-   assign(op3, get_gpr_w1(r2));
+   assign(op2, get_gpr_w0(r2));
+   assign(op3, get_gpr_w1(r3));
    assign(result, binop(Iop_Sub32, mkexpr(op2), mkexpr(op3)));
    s390_cc_thunk_putZZ(S390_CC_OP_UNSIGNED_SUB_32, op2, op3);
    put_gpr_w0(r1, mkexpr(result));
@@ -10844,87 +11105,75 @@ s390_irgen_SVC(UChar i)
 }
 
 static const HChar *
-s390_irgen_TM(UChar i2, IRTemp op1addr)
+s390_irgen_TMx(const HChar *mnem, UChar mask, IRTemp op1addr)
 {
-   UChar mask;
-   IRTemp value = newTemp(Ity_I8);
+   IRTemp masked = newTemp(Ity_I8);
 
-   mask = i2;
-   assign(value, load(Ity_I8, mkexpr(op1addr)));
-   s390_cc_thunk_putZZ(S390_CC_OP_TEST_UNDER_MASK_8, value, mktemp(Ity_I8,
+   assign(masked, binop(Iop_And8, load(Ity_I8, mkexpr(op1addr)), mkU8(mask)));
+   s390_cc_thunk_putZZ(S390_CC_OP_TEST_UNDER_MASK_8, masked, mktemp(Ity_I8,
                        mkU8(mask)));
+   return mnem;
+}
 
-   return "tm";
+static const HChar *
+s390_irgen_TM(UChar i2, IRTemp op1addr)
+{
+   return s390_irgen_TMx("tm", i2, op1addr);
 }
 
 static const HChar *
 s390_irgen_TMY(UChar i2, IRTemp op1addr)
 {
-   UChar mask;
-   IRTemp value = newTemp(Ity_I8);
+   return s390_irgen_TMx("tmy", i2, op1addr);
+}
 
-   mask = i2;
-   assign(value, load(Ity_I8, mkexpr(op1addr)));
-   s390_cc_thunk_putZZ(S390_CC_OP_TEST_UNDER_MASK_8, value, mktemp(Ity_I8,
-                       mkU8(mask)));
+static const HChar *
+s390_irgen_TMxx(const HChar *mnem, UChar r1, UShort mask, UChar offs)
+{
+   if (mask == 0) {
+      s390_cc_set_val(0);
+      return mnem;
+   }
+
+   IRExpr* masked;
+   masked = binop(Iop_And64, get_gpr_dw0(r1), mkU64((ULong)mask << offs));
 
-   return "tmy";
+   if ((mask & (mask - 1)) == 0) {
+      /* Single-bit mask */
+      s390_cc_thunk_put1(S390_CC_OP_BITWISE2, mktemp(Ity_I64, masked), False);
+   } else {
+      if (offs) {
+         masked = binop(Iop_Shr64, masked, mkU8(offs));
+      }
+      s390_cc_thunk_putZZ(S390_CC_OP_TEST_UNDER_MASK_16,
+                          mktemp(Ity_I64, masked),
+                          mktemp(Ity_I64, mkU64(mask)));
+   }
+   return mnem;
 }
 
 static const HChar *
 s390_irgen_TMHH(UChar r1, UShort i2)
 {
-   UShort mask;
-   IRTemp value = newTemp(Ity_I16);
-
-   mask = i2;
-   assign(value, get_gpr_hw0(r1));
-   s390_cc_thunk_putZZ(S390_CC_OP_TEST_UNDER_MASK_16, value, mktemp(Ity_I16,
-                       mkU16(mask)));
-
-   return "tmhh";
+   return s390_irgen_TMxx("tmhh", r1, i2, 48);
 }
 
 static const HChar *
 s390_irgen_TMHL(UChar r1, UShort i2)
 {
-   UShort mask;
-   IRTemp value = newTemp(Ity_I16);
-
-   mask = i2;
-   assign(value, get_gpr_hw1(r1));
-   s390_cc_thunk_putZZ(S390_CC_OP_TEST_UNDER_MASK_16, value, mktemp(Ity_I16,
-                       mkU16(mask)));
-
-   return "tmhl";
+   return s390_irgen_TMxx("tmhl", r1, i2, 32);
 }
 
 static const HChar *
 s390_irgen_TMLH(UChar r1, UShort i2)
 {
-   UShort mask;
-   IRTemp value = newTemp(Ity_I16);
-
-   mask = i2;
-   assign(value, get_gpr_hw2(r1));
-   s390_cc_thunk_putZZ(S390_CC_OP_TEST_UNDER_MASK_16, value, mktemp(Ity_I16,
-                       mkU16(mask)));
-
-   return "tmlh";
+   return s390_irgen_TMxx("tmlh", r1, i2, 16);
 }
 
 static const HChar *
 s390_irgen_TMLL(UChar r1, UShort i2)
 {
-   UShort mask;
-   IRTemp value = newTemp(Ity_I16);
-
-   mask = i2;
-   assign(value, get_gpr_hw3(r1));
-   s390_cc_thunk_putZZ(S390_CC_OP_TEST_UNDER_MASK_16, value, mktemp(Ity_I16,
-                       mkU16(mask)));
-
-   return "tmll";
+   return s390_irgen_TMxx("tmll", r1, i2, 0);
 }
 
 static const HChar *
@@ -10963,6 +11212,9 @@ s390_irgen_LDER(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_LXR(UChar r1, UChar r2)
 {
+   s390_insn_assert("lxr", is_valid_fpr_pair(r1));
+   s390_insn_assert("lxr", is_valid_fpr_pair(r2));
+
    put_fpr_dw0(r1, get_fpr_dw0(r2));
    put_fpr_dw0(r1 + 2, get_fpr_dw0(r2 + 2));
 
@@ -11037,6 +11289,8 @@ s390_irgen_LZDR(UChar r1)
 static const HChar *
 s390_irgen_LZXR(UChar r1)
 {
+   s390_insn_assert("lzxr", is_valid_fpr_pair(r1));
+
    put_fpr_dw0(r1, mkF64i(0x0));
    put_fpr_dw0(r1 + 2, mkF64i(0x0));
 
@@ -11059,12 +11313,24 @@ s390_irgen_SRNM(IRTemp op2addr)
 }
 
 static const HChar *
-s390_irgen_SRNMB(IRTemp op2addr)
+s390_irgen_SRNMB(UChar b2, UShort d2)
 {
-   UInt input_mask, fpc_mask;
+   /* Can only check at IR generation time when b2 == 0 */
+   if (b2 == 0) {
+      s390_insn_assert("srnmb", d2 <= 3 || d2 == 7);  // valid rounding mode
+      /* d2 == 7 requires fpext */
+      if (d2 == 7 && ! s390_host_has_fpext) {
+         emulation_failure(EmFail_S390X_fpext);
+         return "srnmb";
+      }
+   }
+   IRTemp op2addr = newTemp(Ity_I64);
 
-   input_mask = 7;
-   fpc_mask = 7;
+   assign(op2addr, binop(Iop_Add64, mkU64(d2), b2 != 0 ? get_gpr_dw0(b2) :
+          mkU64(0)));
+
+   UInt input_mask = 7;
+   UInt fpc_mask = 7;
 
    put_fpc_w0(binop(Iop_Or32,
                     binop(Iop_And32, get_fpc_w0(), mkU32(~fpc_mask)),
@@ -11073,25 +11339,8 @@ s390_irgen_SRNMB(IRTemp op2addr)
    return "srnmb";
 }
 
-static void
-s390_irgen_srnmb_wrapper(UChar b2, UShort d2)
-{
-   if (b2 == 0) {  /* This is the typical case */
-      if (d2 > 3) {
-         if (s390_host_has_fpext && d2 == 7) {
-            /* ok */
-         } else {
-            emulation_warning(EmWarn_S390X_invalid_rounding);
-            d2 = S390_FPC_BFP_ROUND_NEAREST_EVEN;
-         }
-      }
-   }
-
-   s390_format_S_RD(s390_irgen_SRNMB, b2, d2);
-}
 
-/* Wrapper to validate the parameter as in SRNMB is not required, as all
-   the 8 values in op2addr[61:63] correspond to a valid DFP rounding mode */
+/* All 8 values in op2addr[61:63] correspond to a valid DFP rounding mode */
 static const HChar *
 s390_irgen_SRNMT(IRTemp op2addr)
 {
@@ -11231,66 +11480,74 @@ s390_irgen_ADB(UChar r1, IRTemp op2addr)
 }
 
 static const HChar *
-s390_irgen_CEFBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CEFBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
    if (! s390_host_has_fpext && m3 != S390_BFP_ROUND_PER_FPC) {
       emulation_warning(EmWarn_S390X_fpext_rounding);
       m3 = S390_BFP_ROUND_PER_FPC;
    }
+   s390_insn_assert("cefbra", is_valid_rounding_mode(m3));
+
    IRTemp op2 = newTemp(Ity_I32);
 
    assign(op2, get_gpr_w1(r2));
    put_fpr_w0(r1, binop(Iop_I32StoF32, mkexpr(encode_bfp_rounding_mode(m3)),
                         mkexpr(op2)));
 
-   return "cefbr";
+   return "cefbra";
 }
 
 static const HChar *
-s390_irgen_CDFBR(UChar m3 __attribute__((unused)),
-                 UChar m4 __attribute__((unused)), UChar r1, UChar r2)
+s390_irgen_CDFBRA(UChar m3,
+                  UChar m4 __attribute__((unused)), UChar r1, UChar r2)
 {
+   s390_insn_assert("cdfbra", is_valid_rounding_mode(m3));
+
    IRTemp op2 = newTemp(Ity_I32);
 
    assign(op2, get_gpr_w1(r2));
    put_fpr_dw0(r1, unop(Iop_I32StoF64, mkexpr(op2)));
 
-   return "cdfbr";
+   return "cdfbra";
 }
 
 static const HChar *
-s390_irgen_CEGBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CEGBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
    if (! s390_host_has_fpext && m3 != S390_BFP_ROUND_PER_FPC) {
       emulation_warning(EmWarn_S390X_fpext_rounding);
       m3 = S390_BFP_ROUND_PER_FPC;
    }
+   s390_insn_assert("cegbra", is_valid_rounding_mode(m3));
+
    IRTemp op2 = newTemp(Ity_I64);
 
    assign(op2, get_gpr_dw0(r2));
    put_fpr_w0(r1, binop(Iop_I64StoF32, mkexpr(encode_bfp_rounding_mode(m3)),
                         mkexpr(op2)));
 
-   return "cegbr";
+   return "cegbra";
 }
 
 static const HChar *
-s390_irgen_CDGBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CDGBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
    if (! s390_host_has_fpext && m3 != S390_BFP_ROUND_PER_FPC) {
       emulation_warning(EmWarn_S390X_fpext_rounding);
       m3 = S390_BFP_ROUND_PER_FPC;
    }
+   s390_insn_assert("cdgbra", is_valid_rounding_mode(m3));
+
    IRTemp op2 = newTemp(Ity_I64);
 
    assign(op2, get_gpr_dw0(r2));
    put_fpr_dw0(r1, binop(Iop_I64StoF64, mkexpr(encode_bfp_rounding_mode(m3)),
                          mkexpr(op2)));
 
-   return "cdgbr";
+   return "cdgbra";
 }
 
 static const HChar *
@@ -11300,6 +11557,8 @@ s390_irgen_CELFBR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("celfbr", is_valid_rounding_mode(m3));
+
       IRTemp op2 = newTemp(Ity_I32);
 
       assign(op2, get_gpr_w1(r2));
@@ -11310,12 +11569,14 @@ s390_irgen_CELFBR(UChar m3, UChar m4 __attribute__((unused)),
 }
 
 static const HChar *
-s390_irgen_CDLFBR(UChar m3 __attribute__((unused)),
+s390_irgen_CDLFBR(UChar m3,
                   UChar m4 __attribute__((unused)), UChar r1, UChar r2)
 {
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("cdlfbr", is_valid_rounding_mode(m3));
+
       IRTemp op2 = newTemp(Ity_I32);
 
       assign(op2, get_gpr_w1(r2));
@@ -11331,6 +11592,8 @@ s390_irgen_CELGBR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("celgbr", is_valid_rounding_mode(m3));
+
       IRTemp op2 = newTemp(Ity_I64);
 
       assign(op2, get_gpr_dw0(r2));
@@ -11347,6 +11610,8 @@ s390_irgen_CDLGBR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("cdlgbr", is_valid_rounding_mode(m3));
+
       IRTemp op2 = newTemp(Ity_I64);
 
       assign(op2, get_gpr_dw0(r2));
@@ -11364,6 +11629,8 @@ s390_irgen_CLFEBR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("clfebr", is_valid_rounding_mode(m3));
+
       IRTemp op = newTemp(Ity_F32);
       IRTemp result = newTemp(Ity_I32);
       IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -11384,6 +11651,8 @@ s390_irgen_CLFDBR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("clfdbr", is_valid_rounding_mode(m3));
+
       IRTemp op = newTemp(Ity_F64);
       IRTemp result = newTemp(Ity_I32);
       IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -11404,6 +11673,8 @@ s390_irgen_CLGEBR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("clgebr", is_valid_rounding_mode(m3));
+
       IRTemp op = newTemp(Ity_F32);
       IRTemp result = newTemp(Ity_I64);
       IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -11424,6 +11695,8 @@ s390_irgen_CLGDBR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("clgdbr", is_valid_rounding_mode(m3));
+
       IRTemp op = newTemp(Ity_F64);
       IRTemp result = newTemp(Ity_I64);
       IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -11438,9 +11711,11 @@ s390_irgen_CLGDBR(UChar m3, UChar m4 __attribute__((unused)),
 }
 
 static const HChar *
-s390_irgen_CFEBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CFEBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
+   s390_insn_assert("cfebra", is_valid_rounding_mode(m3));
+
    IRTemp op = newTemp(Ity_F32);
    IRTemp result = newTemp(Ity_I32);
    IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -11451,13 +11726,15 @@ s390_irgen_CFEBR(UChar m3, UChar m4 __attribute__((unused)),
    put_gpr_w1(r1, mkexpr(result));
    s390_cc_thunk_putFZ(S390_CC_OP_BFP_32_TO_INT_32, op, rounding_mode);
 
-   return "cfebr";
+   return "cfebra";
 }
 
 static const HChar *
-s390_irgen_CFDBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CFDBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
+   s390_insn_assert("cfdbra", is_valid_rounding_mode(m3));
+
    IRTemp op = newTemp(Ity_F64);
    IRTemp result = newTemp(Ity_I32);
    IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -11468,13 +11745,15 @@ s390_irgen_CFDBR(UChar m3, UChar m4 __attribute__((unused)),
    put_gpr_w1(r1, mkexpr(result));
    s390_cc_thunk_putFZ(S390_CC_OP_BFP_64_TO_INT_32, op, rounding_mode);
 
-   return "cfdbr";
+   return "cfdbra";
 }
 
 static const HChar *
-s390_irgen_CGEBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CGEBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
+   s390_insn_assert("cgebra", is_valid_rounding_mode(m3));
+
    IRTemp op = newTemp(Ity_F32);
    IRTemp result = newTemp(Ity_I64);
    IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -11485,13 +11764,15 @@ s390_irgen_CGEBR(UChar m3, UChar m4 __attribute__((unused)),
    put_gpr_dw0(r1, mkexpr(result));
    s390_cc_thunk_putFZ(S390_CC_OP_BFP_32_TO_INT_64, op, rounding_mode);
 
-   return "cgebr";
+   return "cgebra";
 }
 
 static const HChar *
-s390_irgen_CGDBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CGDBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
+   s390_insn_assert("cgdbra", is_valid_rounding_mode(m3));
+
    IRTemp op = newTemp(Ity_F64);
    IRTemp result = newTemp(Ity_I64);
    IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -11502,7 +11783,7 @@ s390_irgen_CGDBR(UChar m3, UChar m4 __attribute__((unused)),
    put_gpr_dw0(r1, mkexpr(result));
    s390_cc_thunk_putFZ(S390_CC_OP_BFP_64_TO_INT_64, op, rounding_mode);
 
-   return "cgdbr";
+   return "cgdbra";
 }
 
 static const HChar *
@@ -11644,20 +11925,22 @@ s390_irgen_LDEB(UChar r1, IRTemp op2addr)
 }
 
 static const HChar *
-s390_irgen_LEDBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_LEDBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
    if (! s390_host_has_fpext && m3 != S390_BFP_ROUND_PER_FPC) {
       emulation_warning(EmWarn_S390X_fpext_rounding);
       m3 = S390_BFP_ROUND_PER_FPC;
    }
+   s390_insn_assert("ledbra", is_valid_rounding_mode(m3));
+
    IRTemp op = newTemp(Ity_F64);
 
    assign(op, get_fpr_dw0(r2));
    put_fpr_w0(r1, binop(Iop_F64toF32, mkexpr(encode_bfp_rounding_mode(m3)),
                         mkexpr(op)));
 
-   return "ledbr";
+   return "ledbra";
 }
 
 static const HChar *
@@ -11825,7 +12108,7 @@ s390_irgen_ADTRA(UChar r3, UChar m4, UChar r1, UChar r2)
       s390_cc_thunk_putF(S390_CC_OP_DFP_RESULT_64, result);
       put_dpr_dw0(r1, mkexpr(result));
    }
-   return (m4 == 0) ? "adtr" : "adtra";
+   return "adtra";
 }
 
 static const HChar *
@@ -11834,6 +12117,10 @@ s390_irgen_AXTRA(UChar r3, UChar m4, UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("axtra", is_valid_fpr_pair(r1));
+      s390_insn_assert("axtra", is_valid_fpr_pair(r2));
+      s390_insn_assert("axtra", is_valid_fpr_pair(r3));
+
       IRTemp op1 = newTemp(Ity_D128);
       IRTemp op2 = newTemp(Ity_D128);
       IRTemp result = newTemp(Ity_D128);
@@ -11853,7 +12140,7 @@ s390_irgen_AXTRA(UChar r3, UChar m4, UChar r1, UChar r2)
 
       s390_cc_thunk_put1d128(S390_CC_OP_DFP_RESULT_128, result);
    }
-   return (m4 == 0) ? "axtr" : "axtra";
+   return "axtra";
 }
 
 static const HChar *
@@ -11877,6 +12164,9 @@ s390_irgen_CDTR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_CXTR(UChar r1, UChar r2)
 {
+   s390_insn_assert("cxtr", is_valid_fpr_pair(r1));
+   s390_insn_assert("cxtr", is_valid_fpr_pair(r2));
+
    IRTemp op1 = newTemp(Ity_D128);
    IRTemp op2 = newTemp(Ity_D128);
    IRTemp cc_vex  = newTemp(Ity_I32);
@@ -11899,14 +12189,10 @@ s390_irgen_CDFTR(UChar m3 __attribute__((unused)),
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
-      if (! s390_host_has_fpext) {
-         emulation_failure(EmFail_S390X_fpext);
-      } else {
-         IRTemp op2 = newTemp(Ity_I32);
+      IRTemp op2 = newTemp(Ity_I32);
 
-         assign(op2, get_gpr_w1(r2));
-         put_dpr_dw0(r1, unop(Iop_I32StoD64, mkexpr(op2)));
-      }
+      assign(op2, get_gpr_w1(r2));
+      put_dpr_dw0(r1, unop(Iop_I32StoD64, mkexpr(op2)));
    }
    return "cdftr";
 }
@@ -11918,14 +12204,12 @@ s390_irgen_CXFTR(UChar m3 __attribute__((unused)),
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
-      if (! s390_host_has_fpext) {
-         emulation_failure(EmFail_S390X_fpext);
-      } else {
-         IRTemp op2 = newTemp(Ity_I32);
+      s390_insn_assert("cxftr", is_valid_fpr_pair(r1));
 
-         assign(op2, get_gpr_w1(r2));
-         put_dpr_pair(r1, unop(Iop_I32StoD128, mkexpr(op2)));
-      }
+      IRTemp op2 = newTemp(Ity_I32);
+
+      assign(op2, get_gpr_w1(r2));
+      put_dpr_pair(r1, unop(Iop_I32StoD128, mkexpr(op2)));
    }
    return "cxftr";
 }
@@ -11948,16 +12232,18 @@ s390_irgen_CDGTRA(UChar m3, UChar m4 __attribute__((unused)),
       put_dpr_dw0(r1, binop(Iop_I64StoD64, mkexpr(encode_dfp_rounding_mode(m3)),
                             mkexpr(op2)));
    }
-   return (m3 == 0) ? "cdgtr" : "cdgtra";
+   return "cdgtra";
 }
 
 static const HChar *
-s390_irgen_CXGTR(UChar m3 __attribute__((unused)),
-                 UChar m4 __attribute__((unused)), UChar r1, UChar r2)
+s390_irgen_CXGTRA(UChar m3 __attribute__((unused)),
+                  UChar m4 __attribute__((unused)), UChar r1, UChar r2)
 {
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("cxgtra", is_valid_fpr_pair(r1));
+
       IRTemp op2 = newTemp(Ity_I64);
 
       /* No emulation warning here about an non-zero m3 on hosts without
@@ -11966,7 +12252,7 @@ s390_irgen_CXGTR(UChar m3 __attribute__((unused)),
       assign(op2, get_gpr_dw0(r2));
       put_dpr_pair(r1, unop(Iop_I64StoD128, mkexpr(op2)));
    }
-   return "cxgtr";
+   return "cxgtra";
 }
 
 static const HChar *
@@ -11998,6 +12284,8 @@ s390_irgen_CXLFTR(UChar m3 __attribute__((unused)),
       if (! s390_host_has_fpext) {
          emulation_failure(EmFail_S390X_fpext);
       } else {
+         s390_insn_assert("cxlftr", is_valid_fpr_pair(r1));
+
          IRTemp op2 = newTemp(Ity_I32);
 
          assign(op2, get_gpr_w1(r2));
@@ -12038,6 +12326,8 @@ s390_irgen_CXLGTR(UChar m3 __attribute__((unused)),
       if (! s390_host_has_fpext) {
          emulation_failure(EmFail_S390X_fpext);
       } else {
+         s390_insn_assert("cxlgtr", is_valid_fpr_pair(r1));
+
          IRTemp op2 = newTemp(Ity_I64);
 
          assign(op2, get_gpr_dw0(r2));
@@ -12073,7 +12363,7 @@ s390_irgen_CFDTR(UChar m3, UChar m4 __attribute__((unused)),
 
 static const HChar *
 s390_irgen_CFXTR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+                  UChar r1, UChar r2)
 {
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
@@ -12081,6 +12371,8 @@ s390_irgen_CFXTR(UChar m3, UChar m4 __attribute__((unused)),
       if (! s390_host_has_fpext) {
          emulation_failure(EmFail_S390X_fpext);
       } else {
+         s390_insn_assert("cfxtr", is_valid_fpr_pair(r2));
+
          IRTemp op = newTemp(Ity_D128);
          IRTemp result = newTemp(Ity_I32);
          IRTemp rounding_mode = encode_dfp_rounding_mode(m3);
@@ -12097,50 +12389,47 @@ s390_irgen_CFXTR(UChar m3, UChar m4 __attribute__((unused)),
 }
 
 static const HChar *
-s390_irgen_CGDTR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CGDTRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
-      IRTemp op = newTemp(Ity_D64);
-      IRTemp rounding_mode = encode_dfp_rounding_mode(m3);
+      if (! s390_host_has_fpext) {
+         emulation_failure(EmFail_S390X_fpext);
+      } else {
+         IRTemp op = newTemp(Ity_D64);
+         IRTemp rounding_mode = encode_dfp_rounding_mode(m3);
 
-      /* If fpext is not installed and m3 is in 1:7,
-         rounding mode performed is unpredictable */
-      if (! s390_host_has_fpext && m3 > 0 && m3 < 8) {
-         emulation_warning(EmWarn_S390X_fpext_rounding);
-         m3 = S390_DFP_ROUND_PER_FPC_0;
+         assign(op, get_dpr_dw0(r2));
+         put_gpr_dw0(r1, binop(Iop_D64toI64S, mkexpr(rounding_mode), mkexpr(op)));
+         s390_cc_thunk_putFZ(S390_CC_OP_DFP_64_TO_INT_64, op, rounding_mode);
       }
-
-      assign(op, get_dpr_dw0(r2));
-      put_gpr_dw0(r1, binop(Iop_D64toI64S, mkexpr(rounding_mode), mkexpr(op)));
-      s390_cc_thunk_putFZ(S390_CC_OP_DFP_64_TO_INT_64, op, rounding_mode);
    }
-   return "cgdtr";
+   return "cgdtra";
 }
 
 static const HChar *
-s390_irgen_CGXTR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CGXTRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
-      IRTemp op = newTemp(Ity_D128);
-      IRTemp rounding_mode = encode_dfp_rounding_mode(m3);
+      if (! s390_host_has_fpext) {
+         emulation_failure(EmFail_S390X_fpext);
+      } else {
+         s390_insn_assert("cgxtra", is_valid_fpr_pair(r2));
 
-      /* If fpext is not installed and m3 is in 1:7,
-         rounding mode performed is unpredictable */
-      if (! s390_host_has_fpext && m3 > 0 && m3 < 8) {
-         emulation_warning(EmWarn_S390X_fpext_rounding);
-         m3 = S390_DFP_ROUND_PER_FPC_0;
+         IRTemp op = newTemp(Ity_D128);
+         IRTemp rounding_mode = encode_dfp_rounding_mode(m3);
+
+         assign(op, get_dpr_pair(r2));
+         put_gpr_dw0(r1, binop(Iop_D128toI64S, mkexpr(rounding_mode), mkexpr(op)));
+         s390_cc_thunk_put1d128Z(S390_CC_OP_DFP_128_TO_INT_64, op, rounding_mode);
       }
-      assign(op, get_dpr_pair(r2));
-      put_gpr_dw0(r1, binop(Iop_D128toI64S, mkexpr(rounding_mode), mkexpr(op)));
-      s390_cc_thunk_put1d128Z(S390_CC_OP_DFP_128_TO_INT_64, op, rounding_mode);
    }
-   return "cgxtr";
+   return "cgxtra";
 }
 
 static const HChar *
@@ -12170,6 +12459,9 @@ s390_irgen_CEXTR(UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("cextr", is_valid_fpr_pair(r1));
+      s390_insn_assert("cextr", is_valid_fpr_pair(r2));
+
       IRTemp op1 = newTemp(Ity_D128);
       IRTemp op2 = newTemp(Ity_D128);
       IRTemp cc_vex  = newTemp(Ity_I32);
@@ -12219,6 +12511,8 @@ s390_irgen_CLFXTR(UChar m3, UChar m4 __attribute__((unused)),
       if (! s390_host_has_fpext) {
          emulation_failure(EmFail_S390X_fpext);
       } else {
+         s390_insn_assert("clfxtr", is_valid_fpr_pair(r2));
+
          IRTemp op = newTemp(Ity_D128);
          IRTemp result = newTemp(Ity_I32);
          IRTemp rounding_mode = encode_dfp_rounding_mode(m3);
@@ -12268,6 +12562,8 @@ s390_irgen_CLGXTR(UChar m3, UChar m4 __attribute__((unused)),
       if (! s390_host_has_fpext) {
          emulation_failure(EmFail_S390X_fpext);
       } else {
+         s390_insn_assert("clgxtr", is_valid_fpr_pair(r2));
+
          IRTemp op = newTemp(Ity_D128);
          IRTemp result = newTemp(Ity_I64);
          IRTemp rounding_mode = encode_dfp_rounding_mode(m3);
@@ -12306,7 +12602,7 @@ s390_irgen_DDTRA(UChar r3, UChar m4, UChar r1, UChar r2)
                            mkexpr(op2)));
       put_dpr_dw0(r1, mkexpr(result));
    }
-   return (m4 == 0) ? "ddtr" : "ddtra";
+   return "ddtra";
 }
 
 static const HChar *
@@ -12315,6 +12611,10 @@ s390_irgen_DXTRA(UChar r3, UChar m4, UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("dxtra", is_valid_fpr_pair(r1));
+      s390_insn_assert("dxtra", is_valid_fpr_pair(r2));
+      s390_insn_assert("dxtra", is_valid_fpr_pair(r3));
+
       IRTemp op1 = newTemp(Ity_D128);
       IRTemp op2 = newTemp(Ity_D128);
       IRTemp result = newTemp(Ity_D128);
@@ -12332,7 +12632,7 @@ s390_irgen_DXTRA(UChar r3, UChar m4, UChar r1, UChar r2)
                            mkexpr(op2)));
       put_dpr_pair(r1, mkexpr(result));
    }
-   return (m4 == 0) ? "dxtr" : "dxtra";
+   return "dxtra";
 }
 
 static const HChar *
@@ -12352,6 +12652,8 @@ s390_irgen_EEXTR(UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("eextr", is_valid_fpr_pair(r2));
+
       put_gpr_dw0(r1, unop(Iop_ExtractExpD128, get_dpr_pair(r2)));
    }
    return "eextr";
@@ -12374,6 +12676,8 @@ s390_irgen_ESXTR(UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("esxtr", is_valid_fpr_pair(r2));
+
       put_gpr_dw0(r1, unop(Iop_ExtractSigD128, get_dpr_pair(r2)));
    }
    return "esxtr";
@@ -12403,6 +12707,9 @@ s390_irgen_IEXTR(UChar r3, UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("iextr", is_valid_fpr_pair(r1));
+      s390_insn_assert("iextr", is_valid_fpr_pair(r3));
+
       IRTemp op1 = newTemp(Ity_I64);
       IRTemp op2 = newTemp(Ity_D128);
       IRTemp result = newTemp(Ity_D128);
@@ -12432,11 +12739,16 @@ s390_irgen_LDETR(UChar m4 __attribute__((unused)), UChar r1, UChar r2)
 static const HChar *
 s390_irgen_LXDTR(UChar m4 __attribute__((unused)), UChar r1, UChar r2)
 {
-   IRTemp op = newTemp(Ity_D64);
+   if (! s390_host_has_dfp) {
+      emulation_failure(EmFail_S390X_DFP_insn);
+   } else {
+      s390_insn_assert("lxdtr", is_valid_fpr_pair(r1));
 
-   assign(op, get_dpr_dw0(r2));
-   put_dpr_pair(r1, unop(Iop_D64toD128, mkexpr(op)));
+      IRTemp op = newTemp(Ity_D64);
 
+      assign(op, get_dpr_dw0(r2));
+      put_dpr_pair(r1, unop(Iop_D64toD128, mkexpr(op)));
+   }
    return "lxdtr";
 }
 
@@ -12447,6 +12759,9 @@ s390_irgen_LDXTR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("ldxtr", is_valid_fpr_pair(r1));
+      s390_insn_assert("ldxtr", is_valid_fpr_pair(r2));
+
       /* If fpext is not installed and m3 is in 1:7,
          rounding mode performed is unpredictable */
       if (! s390_host_has_fpext && m3 > 0 && m3 < 8) {
@@ -12487,24 +12802,33 @@ s390_irgen_LEDTR(UChar m3, UChar m4 __attribute__((unused)),
 static const HChar *
 s390_irgen_LTDTR(UChar r1, UChar r2)
 {
-   IRTemp result = newTemp(Ity_D64);
-
-   assign(result, get_dpr_dw0(r2));
-   put_dpr_dw0(r1, mkexpr(result));
-   s390_cc_thunk_putF(S390_CC_OP_DFP_RESULT_64, result);
+   if (! s390_host_has_dfp) {
+      emulation_failure(EmFail_S390X_DFP_insn);
+   } else {
+      IRTemp result = newTemp(Ity_D64);
 
+      assign(result, get_dpr_dw0(r2));
+      put_dpr_dw0(r1, mkexpr(result));
+      s390_cc_thunk_putF(S390_CC_OP_DFP_RESULT_64, result);
+   }
    return "ltdtr";
 }
 
 static const HChar *
 s390_irgen_LTXTR(UChar r1, UChar r2)
 {
-   IRTemp result = newTemp(Ity_D128);
+   if (! s390_host_has_dfp) {
+      emulation_failure(EmFail_S390X_DFP_insn);
+   } else {
+      s390_insn_assert("ltxtr", is_valid_fpr_pair(r1));
+      s390_insn_assert("ltxtr", is_valid_fpr_pair(r2));
 
-   assign(result, get_dpr_pair(r2));
-   put_dpr_pair(r1, mkexpr(result));
-   s390_cc_thunk_put1d128(S390_CC_OP_DFP_RESULT_128, result);
+      IRTemp result = newTemp(Ity_D128);
 
+      assign(result, get_dpr_pair(r2));
+      put_dpr_pair(r1, mkexpr(result));
+      s390_cc_thunk_put1d128(S390_CC_OP_DFP_RESULT_128, result);
+   }
    return "ltxtr";
 }
 
@@ -12531,7 +12855,7 @@ s390_irgen_MDTRA(UChar r3, UChar m4, UChar r1, UChar r2)
                            mkexpr(op2)));
       put_dpr_dw0(r1, mkexpr(result));
    }
-   return (m4 == 0) ? "mdtr" : "mdtra";
+   return "mdtra";
 }
 
 static const HChar *
@@ -12540,6 +12864,10 @@ s390_irgen_MXTRA(UChar r3, UChar m4, UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("mxtra", is_valid_fpr_pair(r1));
+      s390_insn_assert("mxtra", is_valid_fpr_pair(r2));
+      s390_insn_assert("mxtra", is_valid_fpr_pair(r3));
+
       IRTemp op1 = newTemp(Ity_D128);
       IRTemp op2 = newTemp(Ity_D128);
       IRTemp result = newTemp(Ity_D128);
@@ -12557,7 +12885,7 @@ s390_irgen_MXTRA(UChar r3, UChar m4, UChar r1, UChar r2)
                            mkexpr(op2)));
       put_dpr_pair(r1, mkexpr(result));
    }
-   return (m4 == 0) ? "mxtr" : "mxtra";
+   return "mxtra";
 }
 
 static const HChar *
@@ -12594,6 +12922,10 @@ s390_irgen_QAXTR(UChar r3, UChar m4, UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("qaxtr", is_valid_fpr_pair(r1));
+      s390_insn_assert("qaxtr", is_valid_fpr_pair(r2));
+      s390_insn_assert("qaxtr", is_valid_fpr_pair(r3));
+
       IRTemp op1 = newTemp(Ity_D128);
       IRTemp op2 = newTemp(Ity_D128);
       IRTemp result = newTemp(Ity_D128);
@@ -12650,6 +12982,9 @@ s390_irgen_RRXTR(UChar r3, UChar m4, UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("rrxtr", is_valid_fpr_pair(r1));
+      s390_insn_assert("rrxtr", is_valid_fpr_pair(r3));
+
       IRTemp op1 = newTemp(Ity_I8);
       IRTemp op2 = newTemp(Ity_D128);
       IRTemp result = newTemp(Ity_D128);
@@ -12696,7 +13031,7 @@ s390_irgen_SDTRA(UChar r3, UChar m4, UChar r1, UChar r2)
       s390_cc_thunk_putF(S390_CC_OP_DFP_RESULT_64, result);
       put_dpr_dw0(r1, mkexpr(result));
    }
-   return (m4 == 0) ? "sdtr" : "sdtra";
+   return "sdtra";
 }
 
 static const HChar *
@@ -12705,6 +13040,10 @@ s390_irgen_SXTRA(UChar r3, UChar m4, UChar r1, UChar r2)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("sxtra", is_valid_fpr_pair(r1));
+      s390_insn_assert("sxtra", is_valid_fpr_pair(r2));
+      s390_insn_assert("sxtra", is_valid_fpr_pair(r3));
+
       IRTemp op1 = newTemp(Ity_D128);
       IRTemp op2 = newTemp(Ity_D128);
       IRTemp result = newTemp(Ity_D128);
@@ -12724,7 +13063,7 @@ s390_irgen_SXTRA(UChar r3, UChar m4, UChar r1, UChar r2)
 
       s390_cc_thunk_put1d128(S390_CC_OP_DFP_RESULT_128, result);
    }
-   return (m4 == 0) ? "sxtr" : "sxtra";
+   return "sxtra";
 }
 
 static const HChar *
@@ -12749,6 +13088,9 @@ s390_irgen_SLXT(UChar r3, IRTemp op2addr, UChar r1)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("slxt", is_valid_fpr_pair(r1));
+      s390_insn_assert("slxt", is_valid_fpr_pair(r3));
+
       IRTemp op = newTemp(Ity_D128);
 
       assign(op, get_dpr_pair(r3));
@@ -12781,6 +13123,9 @@ s390_irgen_SRXT(UChar r3, IRTemp op2addr, UChar r1)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("srxt", is_valid_fpr_pair(r1));
+      s390_insn_assert("srxt", is_valid_fpr_pair(r3));
+
       IRTemp op = newTemp(Ity_D128);
 
       assign(op, get_dpr_pair(r3));
@@ -12827,6 +13172,8 @@ s390_irgen_TDCXT(UChar r1, IRTemp op2addr)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("tdcxt", is_valid_fpr_pair(r1));
+
       IRTemp value = newTemp(Ity_D128);
 
       assign(value, get_dpr_pair(r1));
@@ -12872,6 +13219,8 @@ s390_irgen_TDGXT(UChar r1, IRTemp op2addr)
    if (! s390_host_has_dfp) {
       emulation_failure(EmFail_S390X_DFP_insn);
    } else {
+      s390_insn_assert("tdgxt", is_valid_fpr_pair(r1));
+
       IRTemp value = newTemp(Ity_D128);
 
       assign(value, get_dpr_pair(r1));
@@ -12912,6 +13261,9 @@ s390_irgen_CLC(UChar length, IRTemp start1, IRTemp start2)
 static const HChar *
 s390_irgen_CLCL(UChar r1, UChar r2)
 {
+   s390_insn_assert("clcl", is_valid_gpr_pair(r1));
+   s390_insn_assert("clcl", is_valid_gpr_pair(r2));
+
    IRTemp addr1 = newTemp(Ity_I64);
    IRTemp addr2 = newTemp(Ity_I64);
    IRTemp addr1_load = newTemp(Ity_I64);
@@ -12992,6 +13344,9 @@ s390_irgen_CLCL(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_CLCLE(UChar r1, UChar r3, IRTemp pad2)
 {
+   s390_insn_assert("clcle", is_valid_gpr_pair(r1));
+   s390_insn_assert("clcle", is_valid_gpr_pair(r3));
+
    IRTemp addr1, addr3, addr1_load, addr3_load, len1, len3, single1, single3;
 
    addr1 = newTemp(Ity_I64);
@@ -13707,7 +14062,7 @@ s390_irgen_XC_sameloc(UChar length, UChar b, UShort d)
    s390_cc_set_val(0);
 
    if (UNLIKELY(vex_traceflags & VEX_TRACE_FE))
-      s390_disasm(ENC3(MNM, UDLB, UDXB), "xc", d, length, b, d, 0, b);
+      S390_DISASM(MNM("xc"), UDLB(d, length, b), UDXB(d, 0, b));
 }
 
 static const HChar *
@@ -13783,6 +14138,9 @@ s390_irgen_MVCRL(IRTemp op1addr, IRTemp op2addr)
 static const HChar *
 s390_irgen_MVCL(UChar r1, UChar r2)
 {
+   s390_insn_assert("mvcl", is_valid_gpr_pair(r1));
+   s390_insn_assert("mvcl", is_valid_gpr_pair(r2));
+
    IRTemp addr1 = newTemp(Ity_I64);
    IRTemp addr2 = newTemp(Ity_I64);
    IRTemp addr2_load = newTemp(Ity_I64);
@@ -13866,6 +14224,9 @@ s390_irgen_MVCL(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_MVCLE(UChar r1, UChar r3, IRTemp pad2)
 {
+   s390_insn_assert("mvcle", is_valid_gpr_pair(r1));
+   s390_insn_assert("mvcle", is_valid_gpr_pair(r3));
+
    IRTemp addr1, addr3, addr3_load, len1, len3, single;
 
    addr1 = newTemp(Ity_I64);
@@ -13987,6 +14348,8 @@ s390_irgen_divide_64to64(IROp op, UChar r1, IRTemp op2)
 static const HChar *
 s390_irgen_DR(UChar r1, UChar r2)
 {
+   s390_insn_assert("dr", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I32);
 
    assign(op2, get_gpr_w1(r2));
@@ -13999,6 +14362,8 @@ s390_irgen_DR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_D(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("d", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I32);
 
    assign(op2, load(Ity_I32, mkexpr(op2addr)));
@@ -14011,6 +14376,8 @@ s390_irgen_D(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_DLR(UChar r1, UChar r2)
 {
+   s390_insn_assert("dlr", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I32);
 
    assign(op2, get_gpr_w1(r2));
@@ -14023,6 +14390,8 @@ s390_irgen_DLR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_DL(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("dl", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I32);
 
    assign(op2, load(Ity_I32, mkexpr(op2addr)));
@@ -14035,6 +14404,8 @@ s390_irgen_DL(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_DLG(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("dlg", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I64);
 
    assign(op2, load(Ity_I64, mkexpr(op2addr)));
@@ -14047,6 +14418,8 @@ s390_irgen_DLG(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_DLGR(UChar r1, UChar r2)
 {
+   s390_insn_assert("dlgr", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I64);
 
    assign(op2, get_gpr_dw0(r2));
@@ -14059,6 +14432,8 @@ s390_irgen_DLGR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_DSGR(UChar r1, UChar r2)
 {
+   s390_insn_assert("dsgr", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I64);
 
    assign(op2, get_gpr_dw0(r2));
@@ -14071,6 +14446,8 @@ s390_irgen_DSGR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_DSG(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("dsg", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I64);
 
    assign(op2, load(Ity_I64, mkexpr(op2addr)));
@@ -14083,6 +14460,8 @@ s390_irgen_DSG(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_DSGFR(UChar r1, UChar r2)
 {
+   s390_insn_assert("dsgfr", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I64);
 
    assign(op2, unop(Iop_32Sto64, get_gpr_w1(r2)));
@@ -14095,6 +14474,8 @@ s390_irgen_DSGFR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_DSGF(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("dsgf", is_valid_gpr_pair(r1));
+
    IRTemp op2 = newTemp(Ity_I64);
 
    assign(op2, unop(Iop_32Sto64, load(Ity_I32, mkexpr(op2addr))));
@@ -14300,6 +14681,8 @@ s390_irgen_cdas_32(UChar r1, UChar r3, IRTemp op2addr)
 static const HChar *
 s390_irgen_CDS(UChar r1, UChar r3, IRTemp op2addr)
 {
+   s390_insn_assert("cds", is_valid_gpr_pair(r1));
+   s390_insn_assert("cds", is_valid_gpr_pair(r3));
    s390_irgen_cdas_32(r1, r3, op2addr);
 
    return "cds";
@@ -14308,6 +14691,8 @@ s390_irgen_CDS(UChar r1, UChar r3, IRTemp op2addr)
 static const HChar *
 s390_irgen_CDSY(UChar r1, UChar r3, IRTemp op2addr)
 {
+   s390_insn_assert("cdsy", is_valid_gpr_pair(r1));
+   s390_insn_assert("cdsy", is_valid_gpr_pair(r3));
    s390_irgen_cdas_32(r1, r3, op2addr);
 
    return "cdsy";
@@ -14316,6 +14701,9 @@ s390_irgen_CDSY(UChar r1, UChar r3, IRTemp op2addr)
 static const HChar *
 s390_irgen_CDSG(UChar r1, UChar r3, IRTemp op2addr)
 {
+   s390_insn_assert("cdsg", is_valid_gpr_pair(r1));
+   s390_insn_assert("cdsg", is_valid_gpr_pair(r3));
+
    IRCAS *cas;
    IRTemp op1_high = newTemp(Ity_I64);
    IRTemp op1_low  = newTemp(Ity_I64);
@@ -14365,6 +14753,9 @@ s390_irgen_CDSG(UChar r1, UChar r3, IRTemp op2addr)
 static const HChar *
 s390_irgen_AXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("axbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("axbr", is_valid_fpr_pair(r2));
+
    IRTemp op1 = newTemp(Ity_F128);
    IRTemp op2 = newTemp(Ity_F128);
    IRTemp result = newTemp(Ity_F128);
@@ -14427,12 +14818,18 @@ s390_irgen_KDBR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_CXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("cxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("cxbr", is_valid_fpr_pair(r2));
+
    return s390_irgen_CxBR("cxbr", r1, r2, Ity_F128, Iop_CmpF128);
 }
 
 static const HChar *
 s390_irgen_KXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("kxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("kxbr", is_valid_fpr_pair(r2));
+
    return s390_irgen_CxBR("kxbr", r1, r2, Ity_F128, Iop_CmpF128);
 }
 
@@ -14480,24 +14877,30 @@ s390_irgen_KDB(UChar r1, IRTemp op2addr)
 }
 
 static const HChar *
-s390_irgen_CXFBR(UChar m3 __attribute__((unused)),
-                 UChar m4 __attribute__((unused)), UChar r1, UChar r2)
+s390_irgen_CXFBRA(UChar m3,
+                  UChar m4 __attribute__((unused)), UChar r1, UChar r2)
 {
+   s390_insn_assert("cxfbra", is_valid_fpr_pair(r1));
+   s390_insn_assert("cxfbra", is_valid_rounding_mode(m3));
+
    IRTemp op2 = newTemp(Ity_I32);
 
    assign(op2, get_gpr_w1(r2));
    put_fpr_pair(r1, unop(Iop_I32StoF128, mkexpr(op2)));
 
-   return "cxfbr";
+   return "cxfbra";
 }
 
 static const HChar *
-s390_irgen_CXLFBR(UChar m3 __attribute__((unused)),
+s390_irgen_CXLFBR(UChar m3,
                   UChar m4 __attribute__((unused)), UChar r1, UChar r2)
 {
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("cxlfbr", is_valid_fpr_pair(r1));
+      s390_insn_assert("cxlfbr", is_valid_rounding_mode(m3));
+
       IRTemp op2 = newTemp(Ity_I32);
 
       assign(op2, get_gpr_w1(r2));
@@ -14508,24 +14911,30 @@ s390_irgen_CXLFBR(UChar m3 __attribute__((unused)),
 
 
 static const HChar *
-s390_irgen_CXGBR(UChar m3 __attribute__((unused)),
-                 UChar m4 __attribute__((unused)), UChar r1, UChar r2)
+s390_irgen_CXGBRA(UChar m3,
+                  UChar m4 __attribute__((unused)), UChar r1, UChar r2)
 {
+   s390_insn_assert("cxgbra", is_valid_fpr_pair(r1));
+   s390_insn_assert("cxgbra", is_valid_rounding_mode(m3));
+
    IRTemp op2 = newTemp(Ity_I64);
 
    assign(op2, get_gpr_dw0(r2));
    put_fpr_pair(r1, unop(Iop_I64StoF128, mkexpr(op2)));
 
-   return "cxgbr";
+   return "cxgbra";
 }
 
 static const HChar *
-s390_irgen_CXLGBR(UChar m3 __attribute__((unused)),
+s390_irgen_CXLGBR(UChar m3,
                   UChar m4 __attribute__((unused)), UChar r1, UChar r2)
 {
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("cxlgbr", is_valid_fpr_pair(r1));
+      s390_insn_assert("cxlgbr", is_valid_rounding_mode(m3));
+
       IRTemp op2 = newTemp(Ity_I64);
 
       assign(op2, get_gpr_dw0(r2));
@@ -14535,9 +14944,12 @@ s390_irgen_CXLGBR(UChar m3 __attribute__((unused)),
 }
 
 static const HChar *
-s390_irgen_CFXBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CFXBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
+   s390_insn_assert("cfxbra", is_valid_fpr_pair(r2));
+   s390_insn_assert("cfxbra", is_valid_rounding_mode(m3));
+
    IRTemp op = newTemp(Ity_F128);
    IRTemp result = newTemp(Ity_I32);
    IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -14548,7 +14960,7 @@ s390_irgen_CFXBR(UChar m3, UChar m4 __attribute__((unused)),
    put_gpr_w1(r1, mkexpr(result));
    s390_cc_thunk_put1f128Z(S390_CC_OP_BFP_128_TO_INT_32, op, rounding_mode);
 
-   return "cfxbr";
+   return "cfxbra";
 }
 
 static const HChar *
@@ -14558,6 +14970,9 @@ s390_irgen_CLFXBR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("clfxbr", is_valid_fpr_pair(r2));
+      s390_insn_assert("clfxbr", is_valid_rounding_mode(m3));
+
       IRTemp op = newTemp(Ity_F128);
       IRTemp result = newTemp(Ity_I32);
       IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -14573,9 +14988,12 @@ s390_irgen_CLFXBR(UChar m3, UChar m4 __attribute__((unused)),
 
 
 static const HChar *
-s390_irgen_CGXBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_CGXBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
+   s390_insn_assert("cgxbra", is_valid_fpr_pair(r2));
+   s390_insn_assert("cgxbra", is_valid_rounding_mode(m3));
+
    IRTemp op = newTemp(Ity_F128);
    IRTemp result = newTemp(Ity_I64);
    IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -14586,7 +15004,7 @@ s390_irgen_CGXBR(UChar m3, UChar m4 __attribute__((unused)),
    put_gpr_dw0(r1, mkexpr(result));
    s390_cc_thunk_put1f128Z(S390_CC_OP_BFP_128_TO_INT_64, op, rounding_mode);
 
-   return "cgxbr";
+   return "cgxbra";
 }
 
 static const HChar *
@@ -14596,6 +15014,9 @@ s390_irgen_CLGXBR(UChar m3, UChar m4 __attribute__((unused)),
    if (! s390_host_has_fpext) {
       emulation_failure(EmFail_S390X_fpext);
    } else {
+      s390_insn_assert("clgxbr", is_valid_fpr_pair(r2));
+      s390_insn_assert("clgxbr", is_valid_rounding_mode(m3));
+
       IRTemp op = newTemp(Ity_F128);
       IRTemp result = newTemp(Ity_I64);
       IRTemp rounding_mode = encode_bfp_rounding_mode(m3);
@@ -14613,6 +15034,9 @@ s390_irgen_CLGXBR(UChar m3, UChar m4 __attribute__((unused)),
 static const HChar *
 s390_irgen_DXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("dxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("dxbr", is_valid_fpr_pair(r2));
+
    IRTemp op1 = newTemp(Ity_F128);
    IRTemp op2 = newTemp(Ity_F128);
    IRTemp result = newTemp(Ity_F128);
@@ -14630,6 +15054,9 @@ s390_irgen_DXBR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_LTXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("ltxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("ltxbr", is_valid_fpr_pair(r2));
+
    IRTemp result = newTemp(Ity_F128);
 
    assign(result, get_fpr_pair(r2));
@@ -14642,6 +15069,9 @@ s390_irgen_LTXBR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_LCXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("lcxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("lcxbr", is_valid_fpr_pair(r2));
+
    IRTemp result = newTemp(Ity_F128);
 
    assign(result, unop(Iop_NegF128, get_fpr_pair(r2)));
@@ -14654,6 +15084,8 @@ s390_irgen_LCXBR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_LXDBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("lxdbr", is_valid_fpr_pair(r1));
+
    IRTemp op = newTemp(Ity_F64);
 
    assign(op, get_fpr_dw0(r2));
@@ -14665,6 +15097,8 @@ s390_irgen_LXDBR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_LXEBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("lxebr", is_valid_fpr_pair(r1));
+
    IRTemp op = newTemp(Ity_F32);
 
    assign(op, get_fpr_w0(r2));
@@ -14676,6 +15110,8 @@ s390_irgen_LXEBR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_LXDB(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("lxdb", is_valid_fpr_pair(r1));
+
    IRTemp op = newTemp(Ity_F64);
 
    assign(op, load(Ity_F64, mkexpr(op2addr)));
@@ -14687,6 +15123,8 @@ s390_irgen_LXDB(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_LXEB(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("lxeb", is_valid_fpr_pair(r1));
+
    IRTemp op = newTemp(Ity_F32);
 
    assign(op, load(Ity_F32, mkexpr(op2addr)));
@@ -14699,6 +15137,8 @@ static const HChar *
 s390_irgen_FIEBRA(UChar m3, UChar m4 __attribute__((unused)),
                   UChar r1, UChar r2)
 {
+   s390_insn_assert("fiebra", is_valid_rounding_mode(m3));
+
    IRTemp result = newTemp(Ity_F32);
 
    assign(result, binop(Iop_RoundF32toInt, mkexpr(encode_bfp_rounding_mode(m3)),
@@ -14712,6 +15152,8 @@ static const HChar *
 s390_irgen_FIDBRA(UChar m3, UChar m4 __attribute__((unused)),
                   UChar r1, UChar r2)
 {
+   s390_insn_assert("fidbra", is_valid_rounding_mode(m3));
+
    IRTemp result = newTemp(Ity_F64);
 
    assign(result, binop(Iop_RoundF64toInt, mkexpr(encode_bfp_rounding_mode(m3)),
@@ -14725,6 +15167,10 @@ static const HChar *
 s390_irgen_FIXBRA(UChar m3, UChar m4 __attribute__((unused)),
                   UChar r1, UChar r2)
 {
+   s390_insn_assert("fixbra", is_valid_fpr_pair(r1));
+   s390_insn_assert("fixbra", is_valid_fpr_pair(r2));
+   s390_insn_assert("fixbra", is_valid_rounding_mode(m3));
+
    IRTemp result = newTemp(Ity_F128);
 
    assign(result, binop(Iop_RoundF128toInt, mkexpr(encode_bfp_rounding_mode(m3)),
@@ -14761,6 +15207,9 @@ s390_irgen_LNDBR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_LNXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("lnxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("lnxbr", is_valid_fpr_pair(r2));
+
    IRTemp result = newTemp(Ity_F128);
 
    assign(result, unop(Iop_NegF128, unop(Iop_AbsF128, get_fpr_pair(r2))));
@@ -14797,6 +15246,9 @@ s390_irgen_LPDBR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_LPXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("lpxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("lpxbr", is_valid_fpr_pair(r2));
+
    IRTemp result = newTemp(Ity_F128);
 
    assign(result, unop(Iop_AbsF128, get_fpr_pair(r2)));
@@ -14807,42 +15259,53 @@ s390_irgen_LPXBR(UChar r1, UChar r2)
 }
 
 static const HChar *
-s390_irgen_LDXBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_LDXBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
    if (! s390_host_has_fpext && m3 != S390_BFP_ROUND_PER_FPC) {
       emulation_warning(EmWarn_S390X_fpext_rounding);
       m3 = S390_BFP_ROUND_PER_FPC;
    }
+   s390_insn_assert("ldxbra", is_valid_fpr_pair(r1));
+   s390_insn_assert("ldxbra", is_valid_fpr_pair(r2));
+   s390_insn_assert("ldxbra", is_valid_rounding_mode(m3));
+
    IRTemp result = newTemp(Ity_F64);
 
    assign(result, binop(Iop_F128toF64, mkexpr(encode_bfp_rounding_mode(m3)),
                         get_fpr_pair(r2)));
    put_fpr_dw0(r1, mkexpr(result));
 
-   return "ldxbr";
+   return "ldxbra";
 }
 
 static const HChar *
-s390_irgen_LEXBR(UChar m3, UChar m4 __attribute__((unused)),
-                 UChar r1, UChar r2)
+s390_irgen_LEXBRA(UChar m3, UChar m4 __attribute__((unused)),
+                  UChar r1, UChar r2)
 {
    if (! s390_host_has_fpext && m3 != S390_BFP_ROUND_PER_FPC) {
       emulation_warning(EmWarn_S390X_fpext_rounding);
       m3 = S390_BFP_ROUND_PER_FPC;
    }
+   s390_insn_assert("lexbra", is_valid_fpr_pair(r1));
+   s390_insn_assert("lexbra", is_valid_fpr_pair(r2));
+   s390_insn_assert("lexbra", is_valid_rounding_mode(m3));
+
    IRTemp result = newTemp(Ity_F32);
 
    assign(result, binop(Iop_F128toF32, mkexpr(encode_bfp_rounding_mode(m3)),
                         get_fpr_pair(r2)));
    put_fpr_w0(r1, mkexpr(result));
 
-   return "lexbr";
+   return "lexbra";
 }
 
 static const HChar *
 s390_irgen_MXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("mxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("mxbr", is_valid_fpr_pair(r2));
+
    IRTemp op1 = newTemp(Ity_F128);
    IRTemp op2 = newTemp(Ity_F128);
    IRTemp result = newTemp(Ity_F128);
@@ -14976,6 +15439,9 @@ s390_irgen_SQDBR(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_SQXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("sqxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("sqxbr", is_valid_fpr_pair(r2));
+
    IRTemp result = newTemp(Ity_F128);
    IRTemp rounding_mode = encode_bfp_rounding_mode(S390_BFP_ROUND_PER_FPC);
 
@@ -15013,6 +15479,9 @@ s390_irgen_SQDB(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_SXBR(UChar r1, UChar r2)
 {
+   s390_insn_assert("sxbr", is_valid_fpr_pair(r1));
+   s390_insn_assert("sxbr", is_valid_fpr_pair(r2));
+
    IRTemp op1 = newTemp(Ity_F128);
    IRTemp op2 = newTemp(Ity_F128);
    IRTemp result = newTemp(Ity_F128);
@@ -15055,6 +15524,8 @@ s390_irgen_TCDB(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_TCXB(UChar r1, IRTemp op2addr)
 {
+   s390_insn_assert("tcxb", is_valid_fpr_pair(r1));
+
    IRTemp value = newTemp(Ity_F128);
 
    assign(value, get_fpr_pair(r1));
@@ -15197,6 +15668,8 @@ s390_irgen_CVDY(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_FLOGR(UChar r1, UChar r2)
 {
+   s390_insn_assert("flogr", is_valid_gpr_pair(r1));
+
    IRTemp input    = newTemp(Ity_I64);
    IRTemp num      = newTemp(Ity_I64);
    IRTemp shift_amount = newTemp(Ity_I8);
@@ -15339,6 +15812,8 @@ s390_irgen_STFLE(UChar b2, UShort d2)
 static const HChar *
 s390_irgen_CKSM(UChar r1,UChar r2)
 {
+   s390_insn_assert("cksm", is_valid_gpr_pair(r2));
+
    IRTemp addr = newTemp(Ity_I64);
    IRTemp op = newTemp(Ity_I32);
    IRTemp len = newTemp(Ity_I64);
@@ -15403,6 +15878,8 @@ s390_irgen_CKSM(UChar r1,UChar r2)
 static const HChar *
 s390_irgen_TROO(UChar m3, UChar r1, UChar r2)
 {
+   s390_insn_assert("troo", is_valid_gpr_pair(r1));
+
    IRTemp src_addr, des_addr, tab_addr, src_len, test_byte;
    src_addr = newTemp(Ity_I64);
    des_addr = newTemp(Ity_I64);
@@ -15450,6 +15927,8 @@ s390_irgen_TROO(UChar m3, UChar r1, UChar r2)
 static const HChar *
 s390_irgen_TRTO(UChar m3, UChar r1, UChar r2)
 {
+   s390_insn_assert("trto", is_valid_gpr_pair(r1));
+
    IRTemp src_addr, des_addr, tab_addr, src_len, test_byte;
    src_addr = newTemp(Ity_I64);
    des_addr = newTemp(Ity_I64);
@@ -15498,6 +15977,8 @@ s390_irgen_TRTO(UChar m3, UChar r1, UChar r2)
 static const HChar *
 s390_irgen_TROT(UChar m3, UChar r1, UChar r2)
 {
+   s390_insn_assert("trot", is_valid_gpr_pair(r1));
+
    IRTemp src_addr, des_addr, tab_addr, src_len, test_byte;
    src_addr = newTemp(Ity_I64);
    des_addr = newTemp(Ity_I64);
@@ -15545,6 +16026,8 @@ s390_irgen_TROT(UChar m3, UChar r1, UChar r2)
 static const HChar *
 s390_irgen_TRTT(UChar m3, UChar r1, UChar r2)
 {
+   s390_insn_assert("trtt", is_valid_gpr_pair(r1));
+
    IRTemp src_addr, des_addr, tab_addr, src_len, test_byte;
    src_addr = newTemp(Ity_I64);
    des_addr = newTemp(Ity_I64);
@@ -15604,6 +16087,8 @@ s390_irgen_TR(UChar length, IRTemp start1, IRTemp start2)
 static const HChar *
 s390_irgen_TRE(UChar r1,UChar r2)
 {
+   s390_insn_assert("tre", is_valid_gpr_pair(r1));
+
    IRTemp src_addr, tab_addr, src_len, test_byte;
    src_addr = newTemp(Ity_I64);
    tab_addr = newTemp(Ity_I64);
@@ -15660,6 +16145,9 @@ s390_call_cu21(IRExpr *srcval, IRExpr *low_surrogate)
 static const HChar *
 s390_irgen_CU21(UChar m3, UChar r1, UChar r2)
 {
+   s390_insn_assert("cu21", is_valid_gpr_pair(r1));
+   s390_insn_assert("cu21", is_valid_gpr_pair(r2));
+
    IRTemp addr1 = newTemp(Ity_I64);
    IRTemp addr2 = newTemp(Ity_I64);
    IRTemp len1 = newTemp(Ity_I64);
@@ -15787,6 +16275,9 @@ s390_call_cu24(IRExpr *srcval, IRExpr *low_surrogate)
 static const HChar *
 s390_irgen_CU24(UChar m3, UChar r1, UChar r2)
 {
+   s390_insn_assert("cu24", is_valid_gpr_pair(r1));
+   s390_insn_assert("cu24", is_valid_gpr_pair(r2));
+
    IRTemp addr1 = newTemp(Ity_I64);
    IRTemp addr2 = newTemp(Ity_I64);
    IRTemp len1 = newTemp(Ity_I64);
@@ -15894,6 +16385,9 @@ s390_call_cu42(IRExpr *srcval)
 static const HChar *
 s390_irgen_CU42(UChar r1, UChar r2)
 {
+   s390_insn_assert("cu42", is_valid_gpr_pair(r1));
+   s390_insn_assert("cu42", is_valid_gpr_pair(r2));
+
    IRTemp addr1 = newTemp(Ity_I64);
    IRTemp addr2 = newTemp(Ity_I64);
    IRTemp len1 = newTemp(Ity_I64);
@@ -15988,6 +16482,9 @@ s390_call_cu41(IRExpr *srcval)
 static const HChar *
 s390_irgen_CU41(UChar r1, UChar r2)
 {
+   s390_insn_assert("cu41", is_valid_gpr_pair(r1));
+   s390_insn_assert("cu41", is_valid_gpr_pair(r2));
+
    IRTemp addr1 = newTemp(Ity_I64);
    IRTemp addr2 = newTemp(Ity_I64);
    IRTemp len1 = newTemp(Ity_I64);
@@ -16235,6 +16732,9 @@ s390_irgen_cu12_cu14(UChar m3, UChar r1, UChar r2, Bool is_cu12)
 static const HChar *
 s390_irgen_CU12(UChar m3, UChar r1, UChar r2)
 {
+   s390_insn_assert("cu12", is_valid_gpr_pair(r1));
+   s390_insn_assert("cu12", is_valid_gpr_pair(r2));
+
    s390_irgen_cu12_cu14(m3, r1, r2, /* is_cu12 = */ 1);
 
    return "cu12";
@@ -16243,6 +16743,9 @@ s390_irgen_CU12(UChar m3, UChar r1, UChar r2)
 static const HChar *
 s390_irgen_CU14(UChar m3, UChar r1, UChar r2)
 {
+   s390_insn_assert("cu14", is_valid_gpr_pair(r1));
+   s390_insn_assert("cu14", is_valid_gpr_pair(r2));
+
    s390_irgen_cu12_cu14(m3, r1, r2, /* is_cu12 = */ 0);
 
    return "cu14";
@@ -16302,6 +16805,8 @@ s390_irgen_VST(UChar v1, IRTemp op2addr)
 static const HChar *
 s390_irgen_VLREP(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vlrep", m3 <= 3);
+
    IRType o2type = s390_vr_get_type(m3);
    IRExpr* o2 = load(o2type, mkexpr(op2addr));
    s390_vr_fill(v1, o2);
@@ -16311,6 +16816,7 @@ s390_irgen_VLREP(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLEB(UChar v1, IRTemp op2addr, UChar m3)
 {
+   /* Specification exception cannot occur. */
    IRExpr* o2 = load(Ity_I8, mkexpr(op2addr));
    put_vr(v1, Ity_I8, m3, o2);
 
@@ -16320,6 +16826,8 @@ s390_irgen_VLEB(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLEH(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vleh", m3 < 8);
+
    IRExpr* o2 = load(Ity_I16, mkexpr(op2addr));
    put_vr(v1, Ity_I16, m3, o2);
 
@@ -16329,6 +16837,8 @@ s390_irgen_VLEH(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLEF(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vlef", m3 < 4);
+
    IRExpr* o2 = load(Ity_I32, mkexpr(op2addr));
    put_vr(v1, Ity_I32, m3, o2);
 
@@ -16338,6 +16848,8 @@ s390_irgen_VLEF(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLEG(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vleg", m3 < 2);
+
    IRExpr* o2 = load(Ity_I64, mkexpr(op2addr));
    put_vr(v1, Ity_I64, m3, o2);
 
@@ -16347,6 +16859,7 @@ s390_irgen_VLEG(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLEIB(UChar v1, UShort i2, UChar m3)
 {
+   /* Specification exception cannot occur. */
    IRExpr* o2 = unop(Iop_16to8, mkU16(i2));
    put_vr(v1, Ity_I8, m3, o2);
 
@@ -16356,6 +16869,8 @@ s390_irgen_VLEIB(UChar v1, UShort i2, UChar m3)
 static const HChar *
 s390_irgen_VLEIH(UChar v1, UShort i2, UChar m3)
 {
+   s390_insn_assert("vleih", m3 < 8);
+
    IRExpr* o2 = mkU16(i2);
    put_vr(v1, Ity_I16, m3, o2);
 
@@ -16365,6 +16880,8 @@ s390_irgen_VLEIH(UChar v1, UShort i2, UChar m3)
 static const HChar *
 s390_irgen_VLEIF(UChar v1, UShort i2, UChar m3)
 {
+   s390_insn_assert("vleif", m3 < 4);
+
    IRExpr* o2 = unop(Iop_16Sto32, mkU16(i2));
    put_vr(v1, Ity_I32, m3, o2);
 
@@ -16374,6 +16891,8 @@ s390_irgen_VLEIF(UChar v1, UShort i2, UChar m3)
 static const HChar *
 s390_irgen_VLEIG(UChar v1, UShort i2, UChar m3)
 {
+   s390_insn_assert("vleig", m3 < 2);
+
    IRExpr* o2 = unop(Iop_16Sto64, mkU16(i2));
    put_vr(v1, Ity_I64, m3, o2);
 
@@ -16383,6 +16902,8 @@ s390_irgen_VLEIG(UChar v1, UShort i2, UChar m3)
 static const HChar *
 s390_irgen_VLGV(UChar r1, IRTemp op2addr, UChar v3, UChar m4)
 {
+   s390_insn_assert("vlgv", m4 <= 3);
+
    IRType o2type = s390_vr_get_type(m4);
    IRExpr* index = unop(Iop_64to8, binop(Iop_And64, mkexpr(op2addr), mkU64(0xf)));
    IRExpr* o2;
@@ -16413,7 +16934,7 @@ s390_irgen_VLGV(UChar r1, IRTemp op2addr, UChar v3, UChar m4)
 }
 
 static const HChar *
-s390_irgen_VGBM(UChar v1, UShort i2, UChar m3 __attribute__((unused)))
+s390_irgen_VGBM(UChar v1, UShort i2)
 {
    put_vr_qw(v1, mkV128(i2));
 
@@ -16421,13 +16942,13 @@ s390_irgen_VGBM(UChar v1, UShort i2, UChar m3 __attribute__((unused)))
 }
 
 static const HChar *
-s390_irgen_VGM(UChar v1, UShort i2, UChar m3)
+s390_irgen_VGM(UChar v1, UChar i2, UChar i3, UChar m4)
 {
-   s390_insn_assert("vgm", m3 <= 3);
+   s390_insn_assert("vgm", m4 <= 3);
 
-   UChar  max_idx = (8 << m3) - 1;
-   UChar  from    = max_idx & (i2 >> 8);
-   UChar  to      = max_idx & i2;
+   UChar  max_idx = (8 << m4) - 1;
+   UChar  from    = max_idx & i2;
+   UChar  to      = max_idx & i3;
    ULong  all_one = (1ULL << max_idx << 1) - 1;
    ULong  value   = (all_one >> from) ^ (all_one >> to >> 1);
 
@@ -16439,7 +16960,7 @@ s390_irgen_VGM(UChar v1, UShort i2, UChar m3)
       value ^= all_one;
 
    IRExpr* fillValue;
-   switch (m3) {
+   switch (m4) {
    case 0:
       fillValue = mkU8(value);
       break;
@@ -16499,6 +17020,8 @@ s390_irgen_VLLEZ(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VGEF(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vgef", m3 < 4);
+
    put_vr(v1, Ity_I32, m3, load(Ity_I32, mkexpr(op2addr)));
    return "vgef";
 }
@@ -16506,6 +17029,8 @@ s390_irgen_VGEF(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VGEG(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vgeg", m3 < 2);
+
    put_vr(v1, Ity_I64, m3, load(Ity_I64, mkexpr(op2addr)));
    return "vgeg";
 }
@@ -16513,9 +17038,10 @@ s390_irgen_VGEG(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLM(UChar v1, IRTemp op2addr, UChar v3)
 {
+   s390_insn_assert("vlm", v3 >= v1);
+   s390_insn_assert("vlm", v3 - v1 <= 16);
+
    IRExpr* current = mkexpr(op2addr);
-   vassert(v3 >= v1);
-   vassert(v3 - v1 <= 16);
 
    for(UChar vr = v1; vr <= v3; vr++) {
          IRExpr* next = binop(Iop_Add64, current, mkU64(16));
@@ -16537,6 +17063,8 @@ s390_irgen_VLVGP(UChar v1, UChar r2, UChar r3)
 static const HChar *
 s390_irgen_VLVG(UChar v1, IRTemp op2addr, UChar r3, UChar m4)
 {
+   s390_insn_assert("vlvg", m4 <= 3);
+
    IRType type = s390_vr_get_type(m4);
    IRExpr* index = unop(Iop_64to8, mkexpr(op2addr));
    IRExpr* vr = get_vr_qw(v1);
@@ -16568,9 +17096,10 @@ s390_irgen_VLVG(UChar v1, IRTemp op2addr, UChar r3, UChar m4)
 static const HChar *
 s390_irgen_VMRH(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmrh", m4 <= 3);
+
    const IROp ops[] = { Iop_InterleaveHI8x16, Iop_InterleaveHI16x8,
                         Iop_InterleaveHI32x4, Iop_InterleaveHI64x2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vmrh";
@@ -16579,9 +17108,10 @@ s390_irgen_VMRH(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMRL(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmrl", m4 <= 3);
+
    const IROp ops[] = { Iop_InterleaveLO8x16, Iop_InterleaveLO16x8,
                         Iop_InterleaveLO32x4, Iop_InterleaveLO64x2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vmrl";
@@ -16590,10 +17120,11 @@ s390_irgen_VMRL(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VPK(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vpk", m4 >= 1 && m4 <= 3);
+
    const IROp ops[] = { Iop_NarrowBin16to8x16, Iop_NarrowBin32to16x8,
                         Iop_NarrowBin64to32x4 };
    Char index = m4 - 1;
-   vassert((index >= 0) && (index < sizeof(ops) / sizeof(ops[0])));
    put_vr_qw(v1, binop(ops[index], get_vr_qw(v2), get_vr_qw(v3)));
    return "vpk";
 }
@@ -16610,6 +17141,8 @@ s390_irgen_VPERM(UChar v1, UChar v2, UChar v3, UChar v4)
 static const HChar *
 s390_irgen_VSCEF(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vscef", m3 < 4);
+
    store(mkexpr(op2addr), get_vr(v1, Ity_I32, m3));
    return "vscef";
 }
@@ -16617,6 +17150,8 @@ s390_irgen_VSCEF(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSCEG(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vsceg", m3 < 2);
+
    store(mkexpr(op2addr), get_vr(v1, Ity_I64, m3));
    return "vsceg";
 }
@@ -16624,9 +17159,6 @@ s390_irgen_VSCEG(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VPDI(UChar v1, UChar v2, UChar v3, UChar m4)
 {
-   /* These bits are reserved by specification */
-   s390_insn_assert("vpdi", (m4 & 2) == 0 && (m4 & 8) == 0);
-
    put_vr_qw(v1, binop(Iop_64HLtoV128, m4 & 4 ? get_vr_dw1(v2) : get_vr_dw0(v2),
                        m4 & 1 ? get_vr_dw1(v3) : get_vr_dw0(v3)));
    return "vpdi";
@@ -16635,6 +17167,8 @@ s390_irgen_VPDI(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VSEG(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vseg", m3 <= 2);
+
    IRType type = s390_vr_get_type(m3);
    switch(type) {
    case Ity_I8:
@@ -16660,6 +17194,7 @@ s390_irgen_VSEG(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VSTEB(UChar v1, IRTemp op2addr, UChar m3)
 {
+   /* Specification exception cannot occur. */
    store(mkexpr(op2addr), get_vr(v1, Ity_I8, m3));
 
    return "vsteb";
@@ -16668,6 +17203,8 @@ s390_irgen_VSTEB(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSTEH(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vsteh", m3 < 8);
+
    store(mkexpr(op2addr), get_vr(v1, Ity_I16, m3));
 
    return "vsteh";
@@ -16676,6 +17213,8 @@ s390_irgen_VSTEH(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSTEF(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vstef", m3 < 8);
+
    store(mkexpr(op2addr), get_vr(v1, Ity_I32, m3));
 
    return "vstef";
@@ -16684,6 +17223,8 @@ s390_irgen_VSTEF(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSTEG(UChar v1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("vsteg", m3 < 2);
+
    store(mkexpr(op2addr), get_vr(v1, Ity_I64, m3));
 
    return "vsteg";
@@ -16692,9 +17233,10 @@ s390_irgen_VSTEG(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSTM(UChar v1, IRTemp op2addr, UChar v3)
 {
+   s390_insn_assert("vstm", v3 >= v1);
+   s390_insn_assert("vstm", v3 - v1 <= 16);
+
    IRExpr* current = mkexpr(op2addr);
-   vassert(v3 >= v1);
-   vassert(v3 - v1 <= 16);
 
    for(UChar vr = v1; vr <= v3; vr++) {
          IRExpr* next = binop(Iop_Add64, current, mkU64(16));
@@ -16708,8 +17250,9 @@ s390_irgen_VSTM(UChar v1, IRTemp op2addr, UChar v3)
 static const HChar *
 s390_irgen_VUPH(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vuph", m3 <= 2);
+
    const IROp ops[] = { Iop_Widen8Sto16x8, Iop_Widen16Sto32x4, Iop_Widen32Sto64x2 };
-   vassert(m3 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, unop(ops[m3], get_vr_dw0(v2)));
 
    return "vuph";
@@ -16718,8 +17261,9 @@ s390_irgen_VUPH(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VUPLH(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vuplh", m3 <= 2);
+
    const IROp ops[] = { Iop_Widen8Uto16x8, Iop_Widen16Uto32x4, Iop_Widen32Uto64x2 };
-   vassert(m3 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, unop(ops[m3], get_vr_dw0(v2)));
    return "vuplh";
 }
@@ -16727,8 +17271,9 @@ s390_irgen_VUPLH(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VUPL(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vupl", m3 <= 2);
+
    const IROp ops[] = { Iop_Widen8Sto16x8, Iop_Widen16Sto32x4, Iop_Widen32Sto64x2 };
-   vassert(m3 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, unop(ops[m3], get_vr_dw1(v2)));
 
    return "vupl";
@@ -16737,8 +17282,9 @@ s390_irgen_VUPL(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VUPLL(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vupll", m3 <= 2);
+
    const IROp ops[] = { Iop_Widen8Uto16x8, Iop_Widen16Uto32x4, Iop_Widen32Uto64x2 };
-   vassert(m3 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, unop(ops[m3], get_vr_dw1(v2)));
 
    return "vupll";
@@ -16747,6 +17293,10 @@ s390_irgen_VUPLL(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VREP(UChar v1, UChar v3, UShort i2, UChar m4)
 {
+   s390_insn_assert("vrep", m4 <= 3);
+   s390_insn_assert("vrep", (m4 == 0 && i2 < 16) || (m4 == 1 && i2 < 8) ||
+                    (m4 == 2 && i2 < 4) || (m4 == 3 && i2 < 2));
+
    IRType type = s390_vr_get_type(m4);
    IRExpr* arg = get_vr(v3, type, i2);
    s390_vr_fill(v1, arg);
@@ -16757,6 +17307,8 @@ s390_irgen_VREP(UChar v1, UChar v3, UShort i2, UChar m4)
 static const HChar *
 s390_irgen_VREPI(UChar v1, UShort i2, UChar m3)
 {
+   s390_insn_assert("vrepi", m3 <= 3);
+
    IRType type = s390_vr_get_type(m3);
    IRExpr *value;
    switch (type) {
@@ -16784,11 +17336,12 @@ s390_irgen_VREPI(UChar v1, UShort i2, UChar m3)
 static const HChar *
 s390_irgen_VPKS(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 {
+   s390_insn_assert("vpks", m4 >= 1 && m4 <= 3);
+
    if (!s390_vr_is_cs_set(m5)) {
       const IROp ops[] = { Iop_QNarrowBin16Sto8Sx16, Iop_QNarrowBin32Sto16Sx8,
                            Iop_QNarrowBin64Sto32Sx4 };
       Char index = m4 - 1;
-      vassert((index >= 0) && (index < sizeof(ops) / sizeof(ops[0])));
       put_vr_qw(v1, binop(ops[index], get_vr_qw(v2), get_vr_qw(v3)));
 
    } else {
@@ -16830,11 +17383,12 @@ s390_irgen_VPKS(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 static const HChar *
 s390_irgen_VPKLS(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 {
+   s390_insn_assert("vpkls", m4 >= 1 && m4 <= 3);
+
    if (!s390_vr_is_cs_set(m5)) {
       const IROp ops[] = { Iop_QNarrowBin16Uto8Ux16, Iop_QNarrowBin32Uto16Ux8,
                            Iop_QNarrowBin64Uto32Ux4 };
       Char index = m4 - 1;
-      vassert((index >= 0) && (index < sizeof(ops) / sizeof(ops[0])));
       put_vr_qw(v1, binop(ops[index], get_vr_qw(v2), get_vr_qw(v3)));
 
    } else {
@@ -16887,6 +17441,8 @@ s390_irgen_VSEL(UChar v1, UChar v2, UChar v3, UChar v4)
 static const HChar *
 s390_irgen_VLBB(UChar v1, IRTemp addr, UChar m3)
 {
+   s390_insn_assert("vlbb", m3 <= 6);
+
    IRExpr* maxIndex = binop(Iop_Sub32,
                             s390_getCountToBlockBoundary(addr, m3),
                             mkU32(1));
@@ -16907,7 +17463,13 @@ s390_irgen_VLL(UChar v1, IRTemp addr, UChar r3)
 static const HChar *
 s390_irgen_VLRL(UChar v1, IRTemp addr, UChar i3)
 {
+   if (! s390_host_has_vxd) {
+      emulation_failure(EmFail_S390X_vxd);
+      return "vlrl";
+   }
+
    s390_insn_assert("vlrl", (i3 & 0xf0) == 0);
+
    s390_vr_loadWithLength(v1, addr, mkU32((UInt) i3), True);
 
    return "vlrl";
@@ -16916,6 +17478,11 @@ s390_irgen_VLRL(UChar v1, IRTemp addr, UChar i3)
 static const HChar *
 s390_irgen_VLRLR(UChar v1, UChar r3, IRTemp addr)
 {
+   if (! s390_host_has_vxd) {
+      emulation_failure(EmFail_S390X_vxd);
+      return "vlrlr";
+   }
+
    s390_vr_loadWithLength(v1, addr, get_gpr_w1(r3), True);
 
    return "vlrlr";
@@ -16931,7 +17498,13 @@ s390_irgen_VSTL(UChar v1, IRTemp addr, UChar r3)
 static const HChar *
 s390_irgen_VSTRL(UChar v1, IRTemp addr, UChar i3)
 {
+   if (! s390_host_has_vxd) {
+      emulation_failure(EmFail_S390X_vxd);
+      return "vstrl";
+   }
+
    s390_insn_assert("vstrl", (i3 & 0xf0) == 0);
+
    s390_vr_storeWithLength(v1, addr, mkU32((UInt) i3), True);
    return "vstrl";
 }
@@ -16939,6 +17512,11 @@ s390_irgen_VSTRL(UChar v1, IRTemp addr, UChar i3)
 static const HChar *
 s390_irgen_VSTRLR(UChar v1, UChar r3, IRTemp addr)
 {
+   if (! s390_host_has_vxd) {
+      emulation_failure(EmFail_S390X_vxd);
+      return "vstrlr";
+   }
+
    s390_vr_storeWithLength(v1, addr, get_gpr_w1(r3), True);
    return "vstrlr";
 }
@@ -16970,6 +17548,11 @@ s390_irgen_VO(UChar v1, UChar v2, UChar v3)
 static const HChar *
 s390_irgen_VOC(UChar v1, UChar v2, UChar v3)
 {
+   if (! s390_host_has_vxe) {
+      emulation_failure(EmFail_S390X_vxe);
+      return "voc";
+   }
+
    put_vr_qw(v1, binop(Iop_OrV128, get_vr_qw(v2),
                        unop(Iop_NotV128, get_vr_qw(v3))));
 
@@ -16979,6 +17562,11 @@ s390_irgen_VOC(UChar v1, UChar v2, UChar v3)
 static const HChar *
 s390_irgen_VNN(UChar v1, UChar v2, UChar v3)
 {
+   if (! s390_host_has_vxe) {
+      emulation_failure(EmFail_S390X_vxe);
+      return "vnn";
+   }
+
    put_vr_qw(v1, unop(Iop_NotV128,
                       binop(Iop_AndV128, get_vr_qw(v2), get_vr_qw(v3))));
 
@@ -16997,6 +17585,11 @@ s390_irgen_VNO(UChar v1, UChar v2, UChar v3)
 static const HChar *
 s390_irgen_VNX(UChar v1, UChar v2, UChar v3)
 {
+   if (! s390_host_has_vxe) {
+      emulation_failure(EmFail_S390X_vxe);
+      return "vnx";
+   }
+
    put_vr_qw(v1, unop(Iop_NotV128,
                       binop(Iop_XorV128, get_vr_qw(v2), get_vr_qw(v3))));
 
@@ -17094,6 +17687,8 @@ s390_irgen_STOCFH(UChar r1, IRTemp op2addr)
 static const HChar *
 s390_irgen_LCBB(UChar r1, IRTemp op2addr, UChar m3)
 {
+   s390_insn_assert("lcbb", m3 <= 6);
+
    IRTemp op2 = newTemp(Ity_I32);
    assign(op2, s390_getCountToBlockBoundary(op2addr, m3));
    put_gpr_w1(r1, mkexpr(op2));
@@ -17104,26 +17699,28 @@ s390_irgen_LCBB(UChar r1, IRTemp op2addr, UChar m3)
    return "lcbb";
 }
 
-/* Also known as "PRNO" */
 static const HChar *
-s390_irgen_PPNO(UChar r1, UChar r2)
+s390_irgen_PRNO(UChar r1, UChar r2)
 {
    if (!s390_host_has_msa5) {
-      emulation_failure(EmFail_S390X_ppno);
-      return "ppno";
+      emulation_failure(EmFail_S390X_prno);
+      return "prno";
    }
 
    /* Check for obvious specification exceptions */
-   s390_insn_assert("ppno", r1 % 2 == 0 && r2 % 2 == 0 && r1 != 0 && r2 != 0);
+   s390_insn_assert("prno", r1 % 2 == 0 && r2 % 2 == 0 && r1 != 0 && r2 != 0);
 
    extension(S390_EXT_PRNO, r1 | (r2 << 4));
-   return "ppno";
+   return "prno";
 }
 
 static const HChar *
 s390_irgen_DFLTCC(UChar r3, UChar r1, UChar r2)
 {
-   s390_insn_assert("dfltcc", s390_host_has_dflt);
+   if (!s390_host_has_dflt) {
+      emulation_failure(EmFail_S390X_dflt);
+      return "dfltcc";
+   }
 
    /* Check for obvious specification exceptions */
    s390_insn_assert("dfltcc", r1 % 2 == 0 && r1 != 0 &&
@@ -17466,6 +18063,11 @@ s390_irgen_VSTRC(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5, UChar m6)
 static const HChar *
 s390_irgen_VSTRS(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5, UChar m6)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vstrs";
+   }
+
    s390_insn_assert("vstrs", m5 <= 2 && m6 == (m6 & 2));
 
    IRTemp op2 = newTemp(Ity_V128);
@@ -17575,9 +18177,10 @@ s390_irgen_VNC(UChar v1, UChar v2, UChar v3)
 static const HChar *
 s390_irgen_VA(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("va", m4 <= 4);
+
    const IROp ops[] = { Iop_Add8x16, Iop_Add16x8, Iop_Add32x4,
                         Iop_Add64x2, Iop_Add128x1 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "va";
@@ -17586,9 +18189,10 @@ s390_irgen_VA(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VS(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vs", m4 <= 4);
+
    const IROp ops[] = { Iop_Sub8x16, Iop_Sub16x8, Iop_Sub32x4,
                         Iop_Sub64x2, Iop_Sub128x1 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vs";
@@ -17597,8 +18201,9 @@ s390_irgen_VS(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMX(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmx", m4 <= 3);
+
    const IROp ops[] = { Iop_Max8Sx16, Iop_Max16Sx8, Iop_Max32Sx4, Iop_Max64Sx2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vmx";
@@ -17607,8 +18212,9 @@ s390_irgen_VMX(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMXL(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmxl", m4 <= 3);
+
    const IROp ops[] = { Iop_Max8Ux16, Iop_Max16Ux8, Iop_Max32Ux4, Iop_Max64Ux2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vmxl";
@@ -17617,8 +18223,9 @@ s390_irgen_VMXL(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMN(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmn", m4 <= 3);
+
    const IROp ops[] = { Iop_Min8Sx16, Iop_Min16Sx8, Iop_Min32Sx4, Iop_Min64Sx2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vmn";
@@ -17627,8 +18234,9 @@ s390_irgen_VMN(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMNL(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmnl", m4 <= 3);
+
    const IROp ops[] = { Iop_Min8Ux16, Iop_Min16Ux8, Iop_Min32Ux4, Iop_Min64Ux2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vmnl";
@@ -17637,8 +18245,9 @@ s390_irgen_VMNL(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VAVG(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vavg", m4 <= 3);
+
    const IROp ops[] = { Iop_Avg8Sx16, Iop_Avg16Sx8, Iop_Avg32Sx4, Iop_Avg64Sx2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vavg";
@@ -17647,8 +18256,9 @@ s390_irgen_VAVG(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VAVGL(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vavgl", m4 <= 3);
+
    const IROp ops[] = { Iop_Avg8Ux16, Iop_Avg16Ux8, Iop_Avg32Ux4, Iop_Avg64Ux2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vavgl";
@@ -17657,7 +18267,8 @@ s390_irgen_VAVGL(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VLC(UChar v1, UChar v2, UChar m3)
 {
-   vassert(m3 < 4);
+   s390_insn_assert("vlc", m3 < 4);
+
    IRType type = s390_vr_get_type(m3);
    put_vr_qw(v1, s390_V128_get_complement(get_vr_qw(v2), type));
    return "vlc";
@@ -17666,8 +18277,9 @@ s390_irgen_VLC(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VLP(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vlp", m3 <= 3);
+
    const IROp ops[] = { Iop_Abs8x16, Iop_Abs16x8, Iop_Abs32x4, Iop_Abs64x2 };
-   vassert(m3 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, unop(ops[m3], get_vr_qw(v2)));
 
    return "vlp";
@@ -17676,10 +18288,11 @@ s390_irgen_VLP(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VCH(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 {
+   s390_insn_assert("vch", m4 <= 3);
+
    if (!s390_vr_is_cs_set(m5)) {
       const IROp ops[] = { Iop_CmpGT8Sx16, Iop_CmpGT16Sx8, Iop_CmpGT32Sx4,
                            Iop_CmpGT64Sx2 };
-      vassert(m4 < sizeof(ops) / sizeof(ops[0]));
       put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    } else {
@@ -17721,10 +18334,11 @@ s390_irgen_VCH(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 static const HChar *
 s390_irgen_VCHL(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 {
+   s390_insn_assert("vchl", m4 <= 3);
+
    if (!s390_vr_is_cs_set(m5)) {
       const IROp ops[] = { Iop_CmpGT8Ux16, Iop_CmpGT16Ux8, Iop_CmpGT32Ux4,
                            Iop_CmpGT64Ux2 };
-      vassert(m4 < sizeof(ops) / sizeof(ops[0]));
       put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    } else {
@@ -17766,8 +18380,9 @@ s390_irgen_VCHL(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 static const HChar *
 s390_irgen_VCLZ(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vclz", m3 <= 3);
+
    const IROp ops[] = { Iop_Clz8x16, Iop_Clz16x8, Iop_Clz32x4, Iop_Clz64x2 };
-   vassert(m3 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, unop(ops[m3], get_vr_qw(v2)));
 
    return "vclz";
@@ -17776,8 +18391,9 @@ s390_irgen_VCLZ(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VCTZ(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vctz", m3 <= 3);
+
    const IROp ops[] = { Iop_Ctz8x16, Iop_Ctz16x8, Iop_Ctz32x4, Iop_Ctz64x2 };
-   vassert(m3 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, unop(ops[m3], get_vr_qw(v2)));
 
    return "vctz";
@@ -17806,8 +18422,9 @@ s390_irgen_VPOPCT(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VML(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vml", m4 <= 2);
+
    const IROp ops[] = { Iop_Mul8x16, Iop_Mul16x8, Iop_Mul32x4 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vml";
@@ -17816,8 +18433,9 @@ s390_irgen_VML(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMLH(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmlh", m4 <= 2);
+
    const IROp ops[] = { Iop_MulHi8Ux16, Iop_MulHi16Ux8, Iop_MulHi32Ux4 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vmlh";
@@ -17826,8 +18444,9 @@ s390_irgen_VMLH(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMH(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmh", m4 <= 2);
+
    const IROp ops[] = { Iop_MulHi8Sx16, Iop_MulHi16Sx8, Iop_MulHi32Sx4 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vmh";
@@ -17836,8 +18455,9 @@ s390_irgen_VMH(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VME(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vme", m4 <= 2);
+
    const IROp ops[] = { Iop_MullEven8Sx16, Iop_MullEven16Sx8, Iop_MullEven32Sx4 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vme";
@@ -17846,8 +18466,9 @@ s390_irgen_VME(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMLE(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmle", m4 <= 2);
+
    const IROp ops[] = { Iop_MullEven8Ux16, Iop_MullEven16Ux8, Iop_MullEven32Ux4 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vmle";
@@ -17856,8 +18477,9 @@ s390_irgen_VMLE(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VESLV(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vselv", m4 <= 3);
+
    const IROp ops[] = { Iop_Shl8x16, Iop_Shl16x8, Iop_Shl32x4, Iop_Shl64x2};
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "veslv";
@@ -17866,9 +18488,10 @@ s390_irgen_VESLV(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VESL(UChar v1, IRTemp op2addr, UChar v3, UChar m4)
 {
+   s390_insn_assert("vesl", m4 <= 3);
+
    IRExpr* shift_amount = unop(Iop_64to8, mkexpr(op2addr));
    const IROp ops[] = { Iop_ShlN8x16, Iop_ShlN16x8, Iop_ShlN32x4, Iop_ShlN64x2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v3), shift_amount));
 
    return "vesl";
@@ -17877,8 +18500,9 @@ s390_irgen_VESL(UChar v1, IRTemp op2addr, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VESRAV(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vesrav", m4 <= 3);
+
    const IROp ops[] = { Iop_Sar8x16, Iop_Sar16x8, Iop_Sar32x4, Iop_Sar64x2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vesrav";
@@ -17887,9 +18511,10 @@ s390_irgen_VESRAV(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VESRA(UChar v1, IRTemp op2addr, UChar v3, UChar m4)
 {
+   s390_insn_assert("vesra", m4 <= 3);
+
    IRExpr* shift_amount = unop(Iop_64to8, mkexpr(op2addr));
    const IROp ops[] = { Iop_SarN8x16, Iop_SarN16x8, Iop_SarN32x4, Iop_SarN64x2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v3), shift_amount));
 
    return "vesra";
@@ -17898,8 +18523,9 @@ s390_irgen_VESRA(UChar v1, IRTemp op2addr, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VESRLV(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vesrlv", m4 <= 3);
+
    const IROp ops[] = { Iop_Shr8x16, Iop_Shr16x8, Iop_Shr32x4, Iop_Shr64x2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "vesrlv";
@@ -17908,9 +18534,10 @@ s390_irgen_VESRLV(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VESRL(UChar v1, IRTemp op2addr, UChar v3, UChar m4)
 {
+   s390_insn_assert("vesrl", m4 <= 3);
+
    IRExpr* shift_amount = unop(Iop_64to8, mkexpr(op2addr));
    const IROp ops[] = { Iop_ShrN8x16, Iop_ShrN16x8, Iop_ShrN32x4, Iop_ShrN64x2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v3), shift_amount));
 
    return "vesrl";
@@ -17919,8 +18546,9 @@ s390_irgen_VESRL(UChar v1, IRTemp op2addr, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VERLLV(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("verllv", m4 <= 3);
+
    const IROp ops[] = { Iop_Rol8x16, Iop_Rol16x8, Iop_Rol32x4, Iop_Rol64x2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    return "verllv";
@@ -17929,13 +18557,13 @@ s390_irgen_VERLLV(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VERLL(UChar v1, IRTemp op2addr, UChar v3, UChar m4)
 {
+   s390_insn_assert("verll", m4 <= 3);
    /*
       There are no Iop_RolN?x?? operations,
       so we have to use the VECTOR x VECTOR variant.
     */
    IRExpr* shift_vector = unop(Iop_Dup8x16, unop(Iop_64to8, mkexpr(op2addr)));
    const IROp ops[] = { Iop_Rol8x16, Iop_Rol16x8, Iop_Rol32x4, Iop_Rol64x2 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    put_vr_qw(v1, binop(ops[m4], get_vr_qw(v3), shift_vector));
 
    return "verll";
@@ -18010,12 +18638,12 @@ s390_irgen_VSRA(UChar v1, UChar v2, UChar v3)
 static const HChar *
 s390_irgen_VERIM(UChar v1, UChar v2, UChar v3, UChar i4, UChar m5)
 {
+   s390_insn_assert("verim", m5 <= 3);
    /*
       There are no Iop_RolN?x?? operations,
       so we have to use the VECTOR x VECTOR variant.
     */
    const IROp ops[] = { Iop_Rol8x16, Iop_Rol16x8, Iop_Rol32x4, Iop_Rol64x2 };
-   vassert(m5 < sizeof(ops) / sizeof(ops[0]));
    IRExpr* shift_vector = unop(Iop_Dup8x16, mkU8(i4));
    IRExpr* rotated_vector = binop(ops[m5], get_vr_qw(v2), shift_vector);
 
@@ -18030,6 +18658,8 @@ s390_irgen_VERIM(UChar v1, UChar v2, UChar v3, UChar i4, UChar m5)
 static const HChar *
 s390_irgen_VEC(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vec", m3 <= 3);
+
    IRType type = s390_vr_get_type(m3);
    IRTemp op1 = newTemp(type);
    IRTemp op2 = newTemp(type);
@@ -18063,6 +18693,8 @@ s390_irgen_VEC(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VECL(UChar v1, UChar v2, UChar m3)
 {
+   s390_insn_assert("vecl", m3 <= 3);
+
    IRType type = s390_vr_get_type(m3);
    IRTemp op1 = newTemp(type);
    IRTemp op2 = newTemp(type);
@@ -18096,10 +18728,11 @@ s390_irgen_VECL(UChar v1, UChar v2, UChar m3)
 static const HChar *
 s390_irgen_VCEQ(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 {
+   s390_insn_assert("vceq", m4 <= 3);
+
    if (!s390_vr_is_cs_set(m5)) {
       const IROp ops[] = { Iop_CmpEQ8x16, Iop_CmpEQ16x8, Iop_CmpEQ32x4,
                            Iop_CmpEQ64x2 };
-      vassert(m4 < sizeof(ops) / sizeof(ops[0]));
       put_vr_qw(v1, binop(ops[m4], get_vr_qw(v2), get_vr_qw(v3)));
 
    } else {
@@ -18192,6 +18825,11 @@ s390_irgen_VSLDB(UChar v1, UChar v2, UChar v3, UChar i4)
 static const HChar *
 s390_irgen_VSLD(UChar v1, UChar v2, UChar v3, UChar i4)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vsld";
+   }
+
    s390_insn_assert("vsld", i4 <= 7);
 
    if (i4 == 0) {
@@ -18213,6 +18851,11 @@ s390_irgen_VSLD(UChar v1, UChar v2, UChar v3, UChar i4)
 static const HChar *
 s390_irgen_VSRD(UChar v1, UChar v2, UChar v3, UChar i4)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vsrd";
+   }
+
    s390_insn_assert("vsrd", i4 <= 7);
 
    if (i4 == 0) {
@@ -18234,10 +18877,11 @@ s390_irgen_VSRD(UChar v1, UChar v2, UChar v3, UChar i4)
 static const HChar *
 s390_irgen_VMO(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmo", m4 <= 2);
+
    const IROp ops[] = { Iop_MullEven8Sx16, Iop_MullEven16Sx8,
                         Iop_MullEven32Sx4 };
    UChar shifts[] = { 8, 16, 32 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    IRExpr* result = binop(ops[m4],
                           binop(Iop_ShlV128, get_vr_qw(v2), mkU8(shifts[m4])),
                           binop(Iop_ShlV128, get_vr_qw(v3), mkU8(shifts[m4]))
@@ -18250,10 +18894,11 @@ s390_irgen_VMO(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMLO(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vmlo", m4 <= 2);
+
    const IROp ops[] = { Iop_MullEven8Ux16, Iop_MullEven16Ux8,
                         Iop_MullEven32Ux4 };
    UChar shifts[] = { 8, 16, 32 };
-   vassert(m4 < sizeof(ops) / sizeof(ops[0]));
    IRExpr* result = binop(ops[m4],
                           binop(Iop_ShlV128, get_vr_qw(v2), mkU8(shifts[m4])),
                           binop(Iop_ShlV128, get_vr_qw(v3), mkU8(shifts[m4]))
@@ -18266,10 +18911,11 @@ s390_irgen_VMLO(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VMAE(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
+   s390_insn_assert("vmae", m5 <= 2);
+
    const IROp mul_ops[] = { Iop_MullEven8Sx16, Iop_MullEven16Sx8,
                             Iop_MullEven32Sx4 };
    const IROp add_ops[] = { Iop_Add16x8, Iop_Add32x4, Iop_Add64x2};
-   vassert(m5 < sizeof(mul_ops) / sizeof(mul_ops[0]));
 
    IRExpr* mul_result = binop(mul_ops[m5], get_vr_qw(v2), get_vr_qw(v3));
    IRExpr* result = binop(add_ops[m5], mul_result, get_vr_qw(v4));
@@ -18281,10 +18927,11 @@ s390_irgen_VMAE(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VMALE(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
+   s390_insn_assert("vmale", m5 <= 2);
+
    const IROp mul_ops[] = { Iop_MullEven8Ux16, Iop_MullEven16Ux8,
                             Iop_MullEven32Ux4 };
    const IROp add_ops[] = { Iop_Add16x8, Iop_Add32x4, Iop_Add64x2 };
-   vassert(m5 < sizeof(mul_ops) / sizeof(mul_ops[0]));
 
    IRExpr* mul_result = binop(mul_ops[m5], get_vr_qw(v2), get_vr_qw(v3));
    IRExpr* result = binop(add_ops[m5], mul_result, get_vr_qw(v4));
@@ -18296,11 +18943,12 @@ s390_irgen_VMALE(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VMAO(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
+   s390_insn_assert("vmao", m5 <= 2);
+
    const IROp mul_ops[] = { Iop_MullEven8Sx16, Iop_MullEven16Sx8,
                             Iop_MullEven32Sx4 };
    const IROp add_ops[] = { Iop_Add16x8, Iop_Add32x4, Iop_Add64x2 };
    UChar shifts[] = { 8, 16, 32 };
-   vassert(m5 < sizeof(mul_ops) / sizeof(mul_ops[0]));
 
    IRExpr* mul_result =
       binop(mul_ops[m5],
@@ -18315,11 +18963,12 @@ s390_irgen_VMAO(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VMALO(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
+   s390_insn_assert("vmalo", m5 <= 2);
+
    const IROp mul_ops[] = { Iop_MullEven8Ux16, Iop_MullEven16Ux8,
                             Iop_MullEven32Ux4 };
    const IROp add_ops[] = { Iop_Add16x8, Iop_Add32x4, Iop_Add64x2 };
    UChar shifts[] = { 8, 16, 32 };
-   vassert(m5 < sizeof(mul_ops) / sizeof(mul_ops[0]));
 
    IRExpr* mul_result = binop(mul_ops[m5],
                               binop(Iop_ShlV128,
@@ -18337,9 +18986,10 @@ s390_irgen_VMALO(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VMAL(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
+   s390_insn_assert("vmal", m5 <= 2);
+
    const IROp mul_ops[] = { Iop_Mul8x16, Iop_Mul16x8, Iop_Mul32x4 };
    const IROp add_ops[] = { Iop_Add8x16, Iop_Add16x8, Iop_Add32x4 };
-   vassert(m5 < sizeof(mul_ops) / sizeof(mul_ops[0]));
 
    IRExpr* mul_result = binop(mul_ops[m5], get_vr_qw(v2), get_vr_qw(v3));
    IRExpr* result = binop(add_ops[m5], mul_result, get_vr_qw(v4));
@@ -18351,6 +19001,8 @@ s390_irgen_VMAL(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VSUM(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vsum", m4 <= 1);
+
    IRType type = s390_vr_get_type(m4);
    IRExpr* mask;
    IRExpr* sum;
@@ -18376,6 +19028,8 @@ s390_irgen_VSUM(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VSUMG(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vsumg", m4 == 1 || m4 == 2);
+
    IRType type = s390_vr_get_type(m4);
    IRExpr* mask;
    IRExpr* sum;
@@ -18401,6 +19055,8 @@ s390_irgen_VSUMG(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VSUMQ(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vsumq", m4 == 2 || m4 == 3);
+
    IRType type = s390_vr_get_type(m4);
    IRExpr* mask;
    IRExpr* sum;
@@ -18426,39 +19082,36 @@ s390_irgen_VSUMQ(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VTM(UChar v1, UChar v2)
 {
-   IRDirty* d;
-   IRTemp cc = newTemp(Ity_I64);
-
-   s390x_vec_op_details_t details = { .serialized = 0ULL };
-   details.op = S390_VEC_OP_VTM;
-   details.v2 = v1;
-   details.v3 = v2;
-   details.read_only = 1;
-
-   d = unsafeIRDirty_1_N(cc, 0, "s390x_dirtyhelper_vec_op",
-                         &s390x_dirtyhelper_vec_op,
-                         mkIRExprVec_2(IRExpr_GSPTR(),
-                                       mkU64(details.serialized)));
+   IRTemp  op1    = newTemp(Ity_V128);
+   IRTemp  op2    = newTemp(Ity_V128);
+   IRTemp  masked = newTemp(Ity_V128);
+   IRTemp  diff   = newTemp(Ity_V128);
+   IRTemp  cc     = newTemp(Ity_I64);
+   IRExpr* masked_is_zero;
+   IRExpr* diff_is_zero;
 
-   d->nFxState = 2;
-   vex_bzero(&d->fxState, sizeof(d->fxState));
-   d->fxState[0].fx     = Ifx_Read;
-   d->fxState[0].offset = S390X_GUEST_OFFSET(guest_v0) + v1 * sizeof(V128);
-   d->fxState[0].size   = sizeof(V128);
-   d->fxState[1].fx     = Ifx_Read;
-   d->fxState[1].offset = S390X_GUEST_OFFSET(guest_v0) + v2 * sizeof(V128);
-   d->fxState[1].size   = sizeof(V128);
-
-   stmt(IRStmt_Dirty(d));
+   assign(op1, get_vr_qw(v1));
+   assign(op2, get_vr_qw(v2));
+   assign(masked, binop(Iop_AndV128, mkexpr(op1), mkexpr(op2)));
+   assign(diff, binop(Iop_XorV128, mkexpr(op2), mkexpr(masked)));
+   masked_is_zero = binop(Iop_CmpEQ64,
+                          binop(Iop_Or64, unop(Iop_V128to64, mkexpr(masked)),
+                                unop(Iop_V128HIto64, mkexpr(masked))),
+                          mkU64(0));
+   diff_is_zero   = binop(Iop_CmpEQ64,
+                          binop(Iop_Or64, unop(Iop_V128to64, mkexpr(diff)),
+                                unop(Iop_V128HIto64, mkexpr(diff))),
+                          mkU64(0));
+   assign(cc, mkite(masked_is_zero, mkU64(0),
+                    mkite(diff_is_zero, mkU64(3), mkU64(1))));
    s390_cc_set(cc);
-
    return "vtm";
 }
 
 static const HChar *
 s390_irgen_VAC(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
-   vassert(m5 == 4); /* specification exception otherwise */
+   s390_insn_assert("vac", m5 == 4);
 
    IRTemp sum = newTemp(Ity_V128);
    assign(sum, binop(Iop_Add128x1, get_vr_qw(v2), get_vr_qw(v3)));
@@ -18473,6 +19126,8 @@ s390_irgen_VAC(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VACC(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vacc", m4 <= 4);
+
    IRType type = s390_vr_get_type(m4);
    IRExpr* arg1 = get_vr_qw(v2);
    IRExpr* arg2 = get_vr_qw(v3);
@@ -18484,7 +19139,8 @@ s390_irgen_VACC(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VACCC(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
-   vassert(m5 == 4); /* specification exception otherwise */
+   s390_insn_assert("vaccc", m5 == 4);
+
    IRExpr* result =
          s390_V128_calculate_carry_out_with_carry(get_vr_qw(v2),
                                                   get_vr_qw(v3),
@@ -18513,6 +19169,8 @@ s390_irgen_VCKSM(UChar v1, UChar v2, UChar v3)
 static const HChar *
 s390_irgen_VGFM(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vgfm", m4 <= 3);
+
    IRDirty* d;
    IRTemp cc = newTemp(Ity_I64);
 
@@ -18547,6 +19205,8 @@ s390_irgen_VGFM(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VGFMA(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
+   s390_insn_assert("vgfma", m5 <= 3);
+
    IRDirty* d;
    IRTemp cc = newTemp(Ity_I64);
 
@@ -18585,7 +19245,7 @@ s390_irgen_VGFMA(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VSBI(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
-   vassert(m5 == 4); /* specification exception otherwise */
+   s390_insn_assert("vsbi", m5 == 4);
 
    IRExpr* mask = binop(Iop_64HLtoV128, mkU64(0ULL), mkU64(1ULL));
    IRExpr* carry_in = binop(Iop_AndV128, get_vr_qw(v4), mask);
@@ -18604,6 +19264,8 @@ s390_irgen_VSBI(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VSCBI(UChar v1, UChar v2, UChar v3, UChar m4)
 {
+   s390_insn_assert("vscbi", m4 <= 4);
+
    IRType type = s390_vr_get_type(m4);
    IRExpr* arg1 = get_vr_qw(v2);
    IRExpr* arg2 = s390_V128_get_complement(get_vr_qw(v3), type);
@@ -18616,7 +19278,8 @@ s390_irgen_VSCBI(UChar v1, UChar v2, UChar v3, UChar m4)
 static const HChar *
 s390_irgen_VSBCBI(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
-   vassert(m5 == 4); /* specification exception otherwise */
+   s390_insn_assert("vsbcbi", m5 == 4);
+
    IRExpr* result =
       s390_V128_calculate_carry_out_with_carry(get_vr_qw(v2),
                                                unop(Iop_NotV128, get_vr_qw(v3)),
@@ -18629,12 +19292,11 @@ s390_irgen_VSBCBI(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VMAH(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
+   s390_insn_assert("vmah", m5 < 3);
+
    IRDirty* d;
    IRTemp cc = newTemp(Ity_I64);
 
-   /* Check for specification exception */
-   vassert(m5 < 3);
-
    s390x_vec_op_details_t details = { .serialized = 0ULL };
    details.op = S390_VEC_OP_VMAH;
    details.v1 = v1;
@@ -18671,12 +19333,11 @@ s390_irgen_VMAH(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VMALH(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 {
+   s390_insn_assert("vmalh", m5 < 3);
+
    IRDirty* d;
    IRTemp cc = newTemp(Ity_I64);
 
-   /* Check for specification exception */
-   vassert(m5 < 3);
-
    s390x_vec_op_details_t details = { .serialized = 0ULL };
    details.op = S390_VEC_OP_VMALH;
    details.v1 = v1;
@@ -18713,6 +19374,11 @@ s390_irgen_VMALH(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5)
 static const HChar *
 s390_irgen_VMSL(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5, UChar m6)
 {
+   if (! s390_host_has_vxe) {
+      emulation_failure(EmFail_S390X_vxe);
+      return "vmsl";
+   }
+
    s390_insn_assert("vmsl", m5 == 3 && (m6 & 3) == 0);
 
    IRDirty* d;
@@ -18797,7 +19463,9 @@ s390_vector_fp_convert(IROp op, IRType fromType, IRType toType, Bool rounding,
 static const HChar *
 s390_irgen_VCDG(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 {
-   s390_insn_assert("vcdg", m3 == 2 || m3 == 3);
+   s390_insn_assert("vcdg", m3 == 3 || (m3 == 2 && s390_host_has_vxe2));
+   s390_insn_assert("vcdg", (m4 & 0x3) == 0);
+   s390_insn_assert("vcdg", m5 != 2 && m5 <= 7);
 
    s390_vector_fp_convert(m3 == 2 ? Iop_I32StoF32 : Iop_I64StoF64,
                           m3 == 2 ? Ity_I32       : Ity_I64,
@@ -18809,7 +19477,9 @@ s390_irgen_VCDG(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 static const HChar *
 s390_irgen_VCDLG(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 {
-   s390_insn_assert("vcdlg", m3 == 2 || m3 == 3);
+   s390_insn_assert("vcdlg", m3 == 3 || (m3 == 2 && s390_host_has_vxe2));
+   s390_insn_assert("vcdlg", (m4 & 0x3) == 0);
+   s390_insn_assert("vcdlg", m5 != 2 && m5 <= 7);
 
    s390_vector_fp_convert(m3 == 2 ? Iop_I32UtoF32 : Iop_I64UtoF64,
                           m3 == 2 ? Ity_I32       : Ity_I64,
@@ -18821,7 +19491,9 @@ s390_irgen_VCDLG(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 static const HChar *
 s390_irgen_VCGD(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 {
-   s390_insn_assert("vcgd", m3 == 2 || m3 == 3);
+   s390_insn_assert("vcgd", m3 == 3 || (m3 == 2 && s390_host_has_vxe2));
+   s390_insn_assert("vcgd", (m4 & 0x3) == 0);
+   s390_insn_assert("vcgd", m5 != 2 && m5 <= 7);
 
    s390_vector_fp_convert(m3 == 2 ? Iop_F32toI32S : Iop_F64toI64S,
                           m3 == 2 ? Ity_F32       : Ity_F64,
@@ -18833,7 +19505,9 @@ s390_irgen_VCGD(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 static const HChar *
 s390_irgen_VCLGD(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 {
-   s390_insn_assert("vclgd", m3 == 2 || m3 == 3);
+   s390_insn_assert("vclgd", m3 == 3 || (m3 == 2 && s390_host_has_vxe2));
+   s390_insn_assert("vclgd", (m4 & 0x3) == 0);
+   s390_insn_assert("vclgd", m5 != 2 && m5 <= 7);
 
    s390_vector_fp_convert(m3 == 2 ? Iop_F32toI32U : Iop_F64toI64U,
                           m3 == 2 ? Ity_F32       : Ity_F64,
@@ -18847,6 +19521,8 @@ s390_irgen_VFI(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 {
    s390_insn_assert("vfi",
                     (m3 == 3 || (s390_host_has_vxe && m3 >= 2 && m3 <= 4)));
+   s390_insn_assert("vfi", (m4 & 0x3) == 0);
+   s390_insn_assert("vfi", m5 != 2 && m5 <= 7);
 
    switch (m3) {
    case 2: s390_vector_fp_convert(Iop_RoundF32toInt, Ity_F32, Ity_F32, True,
@@ -18861,16 +19537,17 @@ s390_irgen_VFI(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 }
 
 static const HChar *
-s390_irgen_VFLL(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
+s390_irgen_VFLL(UChar v1, UChar v2, UChar m3, UChar m4)
 {
    s390_insn_assert("vfll", m3 == 2 || (s390_host_has_vxe && m3 == 3));
+   s390_insn_assert("vfll", (m4 & 0x7) == 0);
 
    if (m3 == 2)
       s390_vector_fp_convert(Iop_F32toF64, Ity_F32, Ity_F64, False,
-                             v1, v2, m3, m4, m5);
+                             v1, v2, m3, m4, /* don't care */ 0);
    else
       s390_vector_fp_convert(Iop_F64toF128, Ity_F64, Ity_F128, False,
-                             v1, v2, m3, m4, m5);
+                             v1, v2, m3, m4, /* don't care */ 0);
 
    return "vfll";
 }
@@ -18879,6 +19556,8 @@ static const HChar *
 s390_irgen_VFLR(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 {
    s390_insn_assert("vflr", m3 == 3 || (s390_host_has_vxe && m3 == 4));
+   s390_insn_assert("vflr", (m4 & 0x3) == 0);
+   s390_insn_assert("vflr", m5 != 2 && m5 <= 7);
 
    if (m3 == 3)
       s390_vector_fp_convert(Iop_F64toF32, Ity_F64, Ity_F32, True,
@@ -18893,8 +19572,9 @@ s390_irgen_VFLR(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 static const HChar *
 s390_irgen_VFPSO(UChar v1, UChar v2, UChar m3, UChar m4, UChar m5)
 {
-   s390_insn_assert("vfpso", m5 <= 2 &&
-                    (m3 == 3 || (s390_host_has_vxe && m3 >= 2 && m3 <= 4)));
+   s390_insn_assert("vfpso", m3 == 3 || (s390_host_has_vxe && m3 >= 2 && m3 <= 4));
+   s390_insn_assert("vfpso", (m4 & 0x7) == 0);
+   s390_insn_assert("vfpso", m5 <= 2);
 
    Bool single = s390_vr_is_single_element_control_set(m4) || m3 == 4;
    IRType type = single ? s390_vr_get_ftype(m3) : Ity_V128;
@@ -18978,12 +19658,13 @@ s390_vector_fp_mulAddOrSub(UChar v1, UChar v2, UChar v3, UChar v4,
                            const HChar* mnm, const IROp single_ops[],
                            Bool negate)
 {
+   s390_insn_assert(mnm, (m5 & 0x7) == 0);
    s390_insn_assert(mnm, m6 == 3 || (s390_host_has_vxe && m6 >= 2 && m6 <= 4));
 
    static const IROp negate_ops[] = { Iop_NegF32, Iop_NegF64, Iop_NegF128 };
    IRType type = s390_vr_get_ftype(m6);
    Bool single = s390_vr_is_single_element_control_set(m5) || m6 == 4;
-   UChar n_elem = single ? 1 : s390_vr_get_n_elem(m6);
+   UChar n_elem = single ? 1 : (1 << (4 - m6));
    IRTemp irrm_temp = newTemp(Ity_I32);
    assign(irrm_temp, get_bfp_rounding_mode_from_fpc());
    IRExpr* irrm = mkexpr(irrm_temp);
@@ -19068,6 +19749,11 @@ s390_irgen_VFMA(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5, UChar m6)
 static const HChar *
 s390_irgen_VFNMA(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5, UChar m6)
 {
+   if (! s390_host_has_vxe) {
+      emulation_failure(EmFail_S390X_vxe);
+      return "vfnma";
+   }
+
    return s390_vector_fp_mulAddOrSub(v1, v2, v3, v4, m5, m6,
                                      "vfnma", FMA_single_ops, True);
 }
@@ -19086,6 +19772,11 @@ s390_irgen_VFMS(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5, UChar m6)
 static const HChar *
 s390_irgen_VFNMS(UChar v1, UChar v2, UChar v3, UChar v4, UChar m5, UChar m6)
 {
+   if (! s390_host_has_vxe) {
+      emulation_failure(EmFail_S390X_vxe);
+      return "vfnms";
+   }
+
    return s390_vector_fp_mulAddOrSub(v1, v2, v3, v4, m5, m6,
                                      "vfnms", FMS_single_ops, True);
 }
@@ -19112,6 +19803,9 @@ s390_irgen_WFC(UChar v1, UChar v2, UChar m3, UChar m4)
 static const HChar *
 s390_irgen_WFK(UChar v1, UChar v2, UChar m3, UChar m4)
 {
+   s390_insn_assert("wfk", m4 == 0 &&
+                    (m3 == 3 || (s390_host_has_vxe && m3 >= 2 && m3 <= 4)));
+
    s390_irgen_WFC(v1, v2, m3, m4);
 
    return "wfk";
@@ -19208,6 +19902,7 @@ s390_irgen_VFTCI(UChar v1, UChar v2, UShort i3, UChar m4, UChar m5)
 {
    s390_insn_assert("vftci",
                     (m4 == 3 || (s390_host_has_vxe && m4 >= 2 && m4 <= 4)));
+   s390_insn_assert("vftci", (m5 & 0x7) == 0);
 
    Bool isSingleElementOp = s390_vr_is_single_element_control_set(m5);
 
@@ -19247,8 +19942,14 @@ s390_irgen_VFTCI(UChar v1, UChar v2, UShort i3, UChar m4, UChar m5)
 static const HChar *
 s390_irgen_VFMIN(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5, UChar m6)
 {
-   s390_insn_assert("vfmin",
-                    (m4 == 3 || (s390_host_has_vxe && m4 >= 2 && m4 <= 4)));
+   if (! s390_host_has_vxe) {
+      emulation_failure(EmFail_S390X_vxe);
+      return "vfmin";
+   }
+
+   s390_insn_assert("vfmin", m4 >= 2 && m4 <= 4);
+   s390_insn_assert("vfmin", (m5 & 0x7) == 0);
+   s390_insn_assert("vfmin", m6 <= 4 || (m6 >= 8 && m6 <= 12));
 
    Bool isSingleElementOp = s390_vr_is_single_element_control_set(m5);
    IRDirty* d;
@@ -19288,8 +19989,14 @@ s390_irgen_VFMIN(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5, UChar m6)
 static const HChar *
 s390_irgen_VFMAX(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5, UChar m6)
 {
-   s390_insn_assert("vfmax",
-                    (m4 == 3 || (s390_host_has_vxe && m4 >= 2 && m4 <= 4)));
+   if (! s390_host_has_vxe) {
+      emulation_failure(EmFail_S390X_vxe);
+      return "vfmax";
+   }
+
+   s390_insn_assert("vfmax", m4 >= 2 && m4 <= 4);
+   s390_insn_assert("vfmax", (m5 & 0x7) == 0);
+   s390_insn_assert("vfmax", m6 <= 4 || (m6 >= 8 && m6 <= 12));
 
    Bool isSingleElementOp = s390_vr_is_single_element_control_set(m5);
    IRDirty* d;
@@ -19329,6 +20036,11 @@ s390_irgen_VFMAX(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5, UChar m6)
 static const HChar *
 s390_irgen_VBPERM(UChar v1, UChar v2, UChar v3)
 {
+   if (! s390_host_has_vxe) {
+      emulation_failure(EmFail_S390X_vxe);
+      return "vbperm";
+   }
+
    IRDirty* d;
    IRTemp cc = newTemp(Ity_I64);
 
@@ -19419,7 +20131,13 @@ s390_reverse_elements(IRExpr* v, UChar m)
 static const HChar *
 s390_irgen_VLBR(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vlbr";
+   }
+
    s390_insn_assert("vlbr", m3 >= 1 && m3 <= 4);
+
    put_vr_qw(v1, s390_byteswap_elements(load(Ity_V128, mkexpr(op2addr)), m3));
    return "vlbr";
 }
@@ -19427,7 +20145,13 @@ s390_irgen_VLBR(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSTBR(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vstbr";
+   }
+
    s390_insn_assert("vstbr", m3 >= 1 && m3 <= 4);
+
    store(mkexpr(op2addr), s390_byteswap_elements(get_vr_qw(v1), m3));
    return "vstbr";
 }
@@ -19435,7 +20159,13 @@ s390_irgen_VSTBR(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLER(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vler";
+   }
+
    s390_insn_assert("vler", m3 >= 1 && m3 <= 3);
+
    put_vr_qw(v1, s390_reverse_elements(load(Ity_V128, mkexpr(op2addr)), m3));
    return "vler";
 }
@@ -19443,9 +20173,15 @@ s390_irgen_VLER(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSTER(UChar v1, IRTemp op2addr, UChar m3)
 {
-   s390_insn_assert("vstbr", m3 >= 1 && m3 <= 4);
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vster";
+   }
+
+   s390_insn_assert("vster", m3 >= 1 && m3 <= 4);
+
    store(mkexpr(op2addr), s390_reverse_elements(get_vr_qw(v1), m3));
-   return "vstbr";
+   return "vster";
 }
 
 /* Helper function that combines its two V128 operands by replacing element 'to'
@@ -19474,7 +20210,13 @@ s390_insert_byteswapped(IRExpr* a, IRExpr* b, UChar m, UChar to, UChar from)
 static const HChar *
 s390_irgen_VLEBRH(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vlebrh";
+   }
+
    s390_insn_assert("vlebrh", m3 <= 7);
+
    IRTemp op2 = newTemp(Ity_I16);
    assign(op2, load(Ity_I16, mkexpr(op2addr)));
    put_vr(v1, Ity_I16, m3, binop(Iop_Or16,
@@ -19486,7 +20228,13 @@ s390_irgen_VLEBRH(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLEBRF(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vlebrf";
+   }
+
    s390_insn_assert("vlebrf", m3 <= 3);
+
    IRTemp op1 = newTemp(Ity_V128);
    assign(op1, get_vr_qw(v1));
    IRTemp op2 = newTemp(Ity_I64);
@@ -19499,7 +20247,13 @@ s390_irgen_VLEBRF(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLEBRG(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vlebrg";
+   }
+
    s390_insn_assert("vlebrg", m3 <= 1);
+
    IRTemp op1 = newTemp(Ity_V128);
    assign(op1, get_vr_qw(v1));
    IRTemp op2 = newTemp(Ity_I64);
@@ -19512,7 +20266,13 @@ s390_irgen_VLEBRG(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLBRREP(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vlbrrep";
+   }
+
    s390_insn_assert("vlbrrep", m3 >= 1 && m3 <= 3);
+
    static const ULong perm[3] = {
       0x0f0e0f0e0f0e0f0e,       /* 2-byte element */
       0x0f0e0d0c0f0e0d0c,       /* 4-byte element */
@@ -19534,7 +20294,13 @@ s390_irgen_VLBRREP(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VLLEBRZ(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vllebrz";
+   }
+
    s390_insn_assert("vllebrz", (m3 >= 1 && m3 <= 3) || m3 == 6);
+
    static const ULong perm[6] = {
       0x0000000000000f0e,       /* 2-byte element */
       0x000000000f0e0d0c,       /* 4-byte element */
@@ -19559,7 +20325,13 @@ s390_irgen_VLLEBRZ(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSTEBRH(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vstebrh";
+   }
+
    s390_insn_assert("vstebrh", m3 <= 7);
+
    IRTemp op1 = newTemp(Ity_I16);
    assign(op1, get_vr(v1, Ity_I16, m3));
    store(mkexpr(op2addr), binop(Iop_Or16,
@@ -19571,7 +20343,13 @@ s390_irgen_VSTEBRH(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSTEBRF(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vstebrf";
+   }
+
    s390_insn_assert("vstebrf", m3 <= 3);
+
    IRTemp op1 = newTemp(Ity_V128);
    assign(op1, get_vr_qw(v1));
    IRExpr* b = s390_insert_byteswapped(mkexpr(op1), mkexpr(op1), 2, 3, m3);
@@ -19582,7 +20360,13 @@ s390_irgen_VSTEBRF(UChar v1, IRTemp op2addr, UChar m3)
 static const HChar *
 s390_irgen_VSTEBRG(UChar v1, IRTemp op2addr, UChar m3)
 {
+   if (! s390_host_has_vxe2) {
+      emulation_failure(EmFail_S390X_vxe2);
+      return "vstebrg";
+   }
+
    s390_insn_assert("vstebrg", m3 <= 1);
+
    IRTemp op1 = newTemp(Ity_V128);
    assign(op1, get_vr_qw(v1));
    IRExpr* b = s390_insert_byteswapped(mkexpr(op1), mkexpr(op1), 3, 1, m3);
@@ -19594,7 +20378,10 @@ static const HChar *
 s390_irgen_VCxx(const HChar *mnem, s390x_vec_op_details_t details,
                 UShort v2_offs, UShort v2_size)
 {
-   s390_insn_assert(mnem, s390_host_has_nnpa);
+   if (! s390_host_has_nnpa) {
+      emulation_failure(EmFail_S390X_nnpa);
+      return mnem;
+   }
 
    IRDirty* d = unsafeIRDirty_0_N(0, "s390x_dirtyhelper_vec_op",
                                   &s390x_dirtyhelper_vec_op,
@@ -19667,7 +20454,10 @@ s390_irgen_VCLFNL(UChar v1, UChar v2, UChar m3, UChar m4)
 static const HChar *
 s390_irgen_VCRNF(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 {
-   s390_insn_assert("vcrnf", s390_host_has_nnpa);
+   if (! s390_host_has_nnpa) {
+      emulation_failure(EmFail_S390X_nnpa);
+      return "vcrnf";
+   }
 
    s390x_vec_op_details_t details = { .serialized = 0ULL };
    details.op = S390_VEC_OP_VCRNF;
@@ -19700,7 +20490,10 @@ s390_irgen_VCRNF(UChar v1, UChar v2, UChar v3, UChar m4, UChar m5)
 static const HChar *
 s390_irgen_NNPA(void)
 {
-   s390_insn_assert("nnpa", s390_host_has_nnpa);
+   if (! s390_host_has_nnpa) {
+      emulation_failure(EmFail_S390X_nnpa);
+      return "nnpa";
+   }
    extension(S390_EXT_NNPA, 0);
    return "nnpa";
 }
@@ -19708,6 +20501,10 @@ s390_irgen_NNPA(void)
 static const HChar *
 s390_irgen_KM(UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa) {
+      emulation_failure(EmFail_S390X_msa);
+      return "km";
+   }
    s390_insn_assert("km", r1 != 0 && r1 % 2 == 0 && r2 != 0 && r2 % 2 == 0);
    extension(S390_EXT_KM, r1 | (r2 << 4));
    return "km";
@@ -19716,6 +20513,10 @@ s390_irgen_KM(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_KMC(UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa) {
+      emulation_failure(EmFail_S390X_msa);
+      return "kmc";
+   }
    s390_insn_assert("kmc", r1 != 0 && r1 % 2 == 0 && r2 != 0 && r2 % 2 == 0);
    extension(S390_EXT_KMC, r1 | (r2 << 4));
    return "kmc";
@@ -19724,6 +20525,10 @@ s390_irgen_KMC(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_KIMD(UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa) {
+      emulation_failure(EmFail_S390X_msa);
+      return "kimd";
+   }
    /* r1 is reserved */
    s390_insn_assert("kimd", r2 != 0 && r2 % 2 == 0);
    extension(S390_EXT_KIMD, r1 | (r2 << 4));
@@ -19733,6 +20538,10 @@ s390_irgen_KIMD(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_KLMD(UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa) {
+      emulation_failure(EmFail_S390X_msa);
+      return "klmd";
+   }
    /* r1 is only used by some functions */
    s390_insn_assert("klmd", r2 != 0 && r2 % 2 == 0);
    extension(S390_EXT_KLMD, r1 | (r2 << 4));
@@ -19742,6 +20551,10 @@ s390_irgen_KLMD(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_KMAC(UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa) {
+      emulation_failure(EmFail_S390X_msa);
+      return "kmac";
+   }
    /* r1 is ignored */
    s390_insn_assert("kmac", r2 != 0 && r2 % 2 == 0);
    extension(S390_EXT_KMAC, r1 | (r2 << 4));
@@ -19751,6 +20564,10 @@ s390_irgen_KMAC(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_PCC(void)
 {
+   if (! s390_host_has_msa4) {
+      emulation_failure(EmFail_S390X_msa4);
+      return "pcc";
+   }
    extension(S390_EXT_PCC, 0);
    return "pcc";
 }
@@ -19758,6 +20575,10 @@ s390_irgen_PCC(void)
 static const HChar *
 s390_irgen_KMCTR(UChar r3, UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa4) {
+      emulation_failure(EmFail_S390X_msa4);
+      return "kmctr";
+   }
    s390_insn_assert("kmctr", r1 % 2 == 0 && r1 != 0 && r2 % 2 == 0 && r2 != 0 &&
                     r3 % 2 == 0 && r3 != 0);
    extension(S390_EXT_KMCTR, r1 | (r2 << 4) | (r3 << 8));
@@ -19767,6 +20588,10 @@ s390_irgen_KMCTR(UChar r3, UChar r1, UChar r2)
 static const HChar *
 s390_irgen_KMO(UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa4) {
+      emulation_failure(EmFail_S390X_msa4);
+      return "kmo";
+   }
    s390_insn_assert("kmo", r1 != 0 && r1 % 2 == 0 && r2 != 0 && r2 % 2 == 0);
    extension(S390_EXT_KMO, r1 | (r2 << 4));
    return "kmo";
@@ -19775,6 +20600,10 @@ s390_irgen_KMO(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_KMF(UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa4) {
+      emulation_failure(EmFail_S390X_msa4);
+      return "kmf";
+   }
    s390_insn_assert("kmf", r1 != 0 && r1 % 2 == 0 && r2 != 0 && r2 % 2 == 0);
    extension(S390_EXT_KMF, r1 | (r2 << 4));
    return "kmf";
@@ -19783,8 +20612,12 @@ s390_irgen_KMF(UChar r1, UChar r2)
 static const HChar *
 s390_irgen_KMA(UChar r3, UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa8) {
+      emulation_failure(EmFail_S390X_msa8);
+      return "kma";
+   }
    s390_insn_assert("kma", r1 % 2 == 0 && r1 != 0 && r2 % 2 == 0 && r2 != 0 &&
-                    r3 % 2 == 0 && r3 != 0);
+                    r3 % 2 == 0 && r3 != 0 && r3 != r1 && r3 != r2);
    extension(S390_EXT_KMA, r1 | (r2 << 4) | (r3 << 8));
    return "kma";
 }
@@ -19792,16 +20625,49 @@ s390_irgen_KMA(UChar r3, UChar r1, UChar r2)
 static const HChar *
 s390_irgen_KDSA(UChar r1, UChar r2)
 {
+   if (! s390_host_has_msa9) {
+      emulation_failure(EmFail_S390X_msa9);
+      return "kdsa";
+   }
    /* r1 is reserved */
    s390_insn_assert("kdsa", r2 != 0 && r2 % 2 == 0);
    extension(S390_EXT_KDSA, r1 | (r2 << 4));
    return "kdsa";
 }
 
+static const HChar *
+s390_irgen_BPP(UChar m1, UShort i2, IRTemp op3addr)
+{
+   /* Treat as a no-op */
+   return "bpp";
+}
+
+static const HChar *
+s390_irgen_BPRP(UChar m1, UShort i2, UShort i3)
+{
+   /* Treat as a no-op */
+   return "bprp";
+}
+
+static const HChar *
+s390_irgen_NIAI(UChar i1, UChar i2)
+{
+   /* Treat as a no-op */
+   return "niai";
+}
+
+static const HChar *
+s390_irgen_PPA(UChar m3, UChar r1, UChar r2)
+{
+   /* Treat as a no-op.  m3 could indicate one of the following:
+       1: transaction-abort assist -- fine, we don't support transactions
+      15: in-order-execution assist -- we don't claim support */
+   return "ppa";
+}
+
 /* New insns are added here.
    If an insn is contingent on a facility being installed also
-   check whether the list of supported facilities in function
-   s390x_dirtyhelper_STFLE needs updating */
+   check whether function do_extension_STFLE needs updating. */
 
 /*------------------------------------------------------------*/
 /*--- Build IR for special instructions                    ---*/
@@ -19877,10 +20743,10 @@ s390_decode_2byte_and_irgen(const UChar *bytes)
    case 0x0c: /* BASSM */ goto unimplemented;
    case 0x0d: s390_format_RR_RR(s390_irgen_BASR, RR_r1(ovl), RR_r2(ovl));
                                 goto ok;
-   case 0x0e: s390_format_RR(s390_irgen_MVCL, RR_r1(ovl), RR_r2(ovl));
-                             goto ok;
-   case 0x0f: s390_format_RR(s390_irgen_CLCL, RR_r1(ovl), RR_r2(ovl));
-                             goto ok;
+   case 0x0e: s390_format_RR_RR(s390_irgen_MVCL, RR_r1(ovl), RR_r2(ovl));
+                                goto ok;
+   case 0x0f: s390_format_RR_RR(s390_irgen_CLCL, RR_r1(ovl), RR_r2(ovl));
+                                goto ok;
    case 0x10: s390_format_RR_RR(s390_irgen_LPR, RR_r1(ovl), RR_r2(ovl));
                                 goto ok;
    case 0x11: s390_format_RR_RR(s390_irgen_LNR, RR_r1(ovl), RR_r2(ovl));
@@ -20084,8 +20950,8 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
    case 0xb23b: /* RCHP */ goto unimplemented;
    case 0xb23c: /* SCHM */ goto unimplemented;
    case 0xb240: /* BAKR */ goto unimplemented;
-   case 0xb241: s390_format_RRE(s390_irgen_CKSM, RRE_r1(ovl),
-                                RRE_r2(ovl));  goto ok;
+   case 0xb241: s390_format_RRE_RR(s390_irgen_CKSM, RRE_r1(ovl),
+                                   RRE_r2(ovl));  goto ok;
    case 0xb244: /* SQDR */ goto unimplemented;
    case 0xb245: /* SQER */ goto unimplemented;
    case 0xb246: /* STURA */ goto unimplemented;
@@ -20135,7 +21001,7 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
                                  goto ok;
    case 0xb29d: s390_format_S_RD(s390_irgen_LFPC, S_b2(ovl), S_d2(ovl));
                                  goto ok;
-   case 0xb2a5: s390_format_RRE_FF(s390_irgen_TRE, RRE_r1(ovl), RRE_r2(ovl));  goto ok;
+   case 0xb2a5: s390_format_RRE_RR(s390_irgen_TRE, RRE_r1(ovl), RRE_r2(ovl));  goto ok;
    case 0xb2a6: s390_format_RRF_M0RERE(s390_irgen_CU21, RRF3_r3(ovl),
                                        RRF3_r1(ovl), RRF3_r2(ovl));
       goto ok;
@@ -20146,7 +21012,7 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
                                  goto ok;
    case 0xb2b1: /* STFL */ goto unimplemented;
    case 0xb2b2: /* LPSWE */ goto unimplemented;
-   case 0xb2b8: s390_irgen_srnmb_wrapper(S_b2(ovl), S_d2(ovl));
+   case 0xb2b8: s390_format_S_RD_raw(s390_irgen_SRNMB, S_b2(ovl), S_d2(ovl));
       goto ok;
    case 0xb2b9: s390_format_S_RD(s390_irgen_SRNMT, S_b2(ovl), S_d2(ovl));
       goto ok;
@@ -20155,11 +21021,13 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
    case 0xb2e1: /* SPCTR */ goto unimplemented;
    case 0xb2e4: /* ECCTR */ goto unimplemented;
    case 0xb2e5: /* EPCTR */ goto unimplemented;
-   case 0xb2e8: /* PPA */ goto unimplemented;
+   case 0xb2e8: s390_format_RRFa_U0RR(s390_irgen_PPA, RRF2_m3(ovl),
+                                      RRF2_r1(ovl), RRF2_r2(ovl));  goto ok;
    case 0xb2ec: /* ETND */ goto unimplemented;
    case 0xb2ed: /* ECPGA */ goto unimplemented;
    case 0xb2f8: /* TEND */ goto unimplemented;
-   case 0xb2fa: /* NIAI */ goto unimplemented;
+   case 0xb2fa: s390_format_IE(s390_irgen_NIAI, IE_i1(ovl),
+                               IE_i2(ovl));  goto ok;
    case 0xb2fc: /* TABORT */ goto unimplemented;
    case 0xb2ff: /* TRAP4 */ goto unimplemented;
    case 0xb300: s390_format_RRE_FF(s390_irgen_LPEBR, RRE_r1(ovl),
@@ -20248,18 +21116,18 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
                                    RRE_r2(ovl));  goto ok;
    case 0xb343: s390_format_RRE_FF(s390_irgen_LCXBR, RRE_r1(ovl),
                                    RRE_r2(ovl));  goto ok;
-   case 0xb344: s390_format_RRF_UUFF(s390_irgen_LEDBR, RRF2_m3(ovl),
-                                     RRF2_m4(ovl), RRF2_r1(ovl),
-                                     RRF2_r2(ovl));  goto ok;
-   case 0xb345: s390_format_RRF_UUFF(s390_irgen_LDXBR, RRF2_m3(ovl),
+   case 0xb344: s390_format_RRF_UUFF(s390_irgen_LEDBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb346: s390_format_RRF_UUFF(s390_irgen_LEXBR, RRF2_m3(ovl),
+   case 0xb345: s390_format_RRF_UUFF(s390_irgen_LDXBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb347: s390_format_RRF_UUFF(s390_irgen_FIXBRA, RRF2_m3(ovl),
+   case 0xb346: s390_format_RRF_UUFF(s390_irgen_LEXBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
+   case 0xb347: s390_format_RRF_UUFF2(s390_irgen_FIXBRA, RRF2_m3(ovl),
+                                      RRF2_m4(ovl), RRF2_r1(ovl),
+                                      RRF2_r2(ovl));  goto ok;
    case 0xb348: s390_format_RRE_FF(s390_irgen_KXBR, RRE_r1(ovl),
                                    RRE_r2(ovl));  goto ok;
    case 0xb349: s390_format_RRE_FF(s390_irgen_CXBR, RRE_r1(ovl),
@@ -20275,15 +21143,15 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
    case 0xb350: /* TBEDR */ goto unimplemented;
    case 0xb351: /* TBDR */ goto unimplemented;
    case 0xb353: /* DIEBR */ goto unimplemented;
-   case 0xb357: s390_format_RRF_UUFF(s390_irgen_FIEBRA, RRF2_m3(ovl),
-                                     RRF2_m4(ovl), RRF2_r1(ovl),
-                                     RRF2_r2(ovl));  goto ok;
+   case 0xb357: s390_format_RRF_UUFF2(s390_irgen_FIEBRA, RRF2_m3(ovl),
+                                      RRF2_m4(ovl), RRF2_r1(ovl),
+                                      RRF2_r2(ovl));  goto ok;
    case 0xb358: /* THDER */ goto unimplemented;
    case 0xb359: /* THDR */ goto unimplemented;
    case 0xb35b: /* DIDBR */ goto unimplemented;
-   case 0xb35f: s390_format_RRF_UUFF(s390_irgen_FIDBRA, RRF2_m3(ovl),
-                                     RRF2_m4(ovl), RRF2_r1(ovl),
-                                     RRF2_r2(ovl));  goto ok;
+   case 0xb35f: s390_format_RRF_UUFF2(s390_irgen_FIDBRA, RRF2_m3(ovl),
+                                      RRF2_m4(ovl), RRF2_r1(ovl),
+                                      RRF2_r2(ovl));  goto ok;
    case 0xb360: /* LPXR */ goto unimplemented;
    case 0xb361: /* LNXR */ goto unimplemented;
    case 0xb362: /* LTXR */ goto unimplemented;
@@ -20319,22 +21187,22 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
    case 0xb392: s390_format_RRF_UUFR(s390_irgen_CXLFBR, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb394: s390_format_RRF_UUFR(s390_irgen_CEFBR, RRF2_m3(ovl),
+   case 0xb394: s390_format_RRF_UUFR(s390_irgen_CEFBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb395: s390_format_RRF_UUFR(s390_irgen_CDFBR, RRF2_m3(ovl),
+   case 0xb395: s390_format_RRF_UUFR(s390_irgen_CDFBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb396: s390_format_RRF_UUFR(s390_irgen_CXFBR, RRF2_m3(ovl),
+   case 0xb396: s390_format_RRF_UUFR(s390_irgen_CXFBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb398: s390_format_RRF_UURF(s390_irgen_CFEBR, RRF2_m3(ovl),
+   case 0xb398: s390_format_RRF_UURF(s390_irgen_CFEBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb399: s390_format_RRF_UURF(s390_irgen_CFDBR, RRF2_m3(ovl),
+   case 0xb399: s390_format_RRF_UURF(s390_irgen_CFDBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb39a: s390_format_RRF_UURF(s390_irgen_CFXBR, RRF2_m3(ovl),
+   case 0xb39a: s390_format_RRF_UURF(s390_irgen_CFXBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
    case 0xb39c: s390_format_RRF_UURF(s390_irgen_CLFEBR, RRF2_m3(ovl),
@@ -20355,22 +21223,22 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
    case 0xb3a2: s390_format_RRF_UUFR(s390_irgen_CXLGBR, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb3a4: s390_format_RRF_UUFR(s390_irgen_CEGBR, RRF2_m3(ovl),
+   case 0xb3a4: s390_format_RRF_UUFR(s390_irgen_CEGBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb3a5: s390_format_RRF_UUFR(s390_irgen_CDGBR, RRF2_m3(ovl),
+   case 0xb3a5: s390_format_RRF_UUFR(s390_irgen_CDGBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb3a6: s390_format_RRF_UUFR(s390_irgen_CXGBR, RRF2_m3(ovl),
+   case 0xb3a6: s390_format_RRF_UUFR(s390_irgen_CXGBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb3a8: s390_format_RRF_UURF(s390_irgen_CGEBR, RRF2_m3(ovl),
+   case 0xb3a8: s390_format_RRF_UURF(s390_irgen_CGEBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb3a9: s390_format_RRF_UURF(s390_irgen_CGDBR, RRF2_m3(ovl),
+   case 0xb3a9: s390_format_RRF_UURF(s390_irgen_CGDBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
-   case 0xb3aa: s390_format_RRF_UURF(s390_irgen_CGXBR, RRF2_m3(ovl),
+   case 0xb3aa: s390_format_RRF_UURF(s390_irgen_CGXBRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
    case 0xb3ac: s390_format_RRF_UURF(s390_irgen_CLGEBR, RRF2_m3(ovl),
@@ -20439,7 +21307,7 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
                                    RRE_r2(ovl));  goto ok;
    case 0xb3df: /* FIXTR */ goto unimplemented;
    case 0xb3e0: /* KDTR */ goto unimplemented;
-   case 0xb3e1: s390_format_RRF_UURF(s390_irgen_CGDTR, RRF2_m3(ovl),
+   case 0xb3e1: s390_format_RRF_UURF(s390_irgen_CGDTRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
    case 0xb3e2: /* CUDTR */ goto unimplemented;
@@ -20451,7 +21319,7 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
    case 0xb3e7: s390_format_RRE_RF(s390_irgen_ESDTR, RRE_r1(ovl),
                                    RRE_r2(ovl));  goto ok;
    case 0xb3e8: /* KXTR */ goto unimplemented;
-   case 0xb3e9: s390_format_RRF_UURF(s390_irgen_CGXTR, RRF2_m3(ovl),
+   case 0xb3e9: s390_format_RRF_UURF(s390_irgen_CGXTRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
    case 0xb3ea: /* CUXTR */ goto unimplemented;
@@ -20477,7 +21345,7 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
    case 0xb3f7: s390_format_RRF_FFRU(s390_irgen_RRDTR, RRF4_r3(ovl),
                                      RRF4_m4(ovl), RRF4_r1(ovl),
                                      RRF4_r2(ovl)); goto ok;
-   case 0xb3f9: s390_format_RRF_UUFR(s390_irgen_CXGTR, RRF2_m3(ovl),
+   case 0xb3f9: s390_format_RRF_UUFR(s390_irgen_CXGTRA, RRF2_m3(ovl),
                                      RRF2_m4(ovl), RRF2_r1(ovl),
                                      RRF2_r2(ovl));  goto ok;
    case 0xb3fa: /* CXUTR */ goto unimplemented;
@@ -20585,7 +21453,7 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
    case 0xb93a: s390_format_RRE_RR(s390_irgen_KDSA, RRE_r1(ovl),
                                    RRE_r2(ovl));  goto ok;
    case 0xb93b: s390_format_E(s390_irgen_NNPA);  goto ok;
-   case 0xb93c: s390_format_RRE_RR(s390_irgen_PPNO, RRE_r1(ovl),
+   case 0xb93c: s390_format_RRE_RR(s390_irgen_PRNO, RRE_r1(ovl),
                                    RRE_r2(ovl));  goto ok;
    case 0xb93e: s390_format_RRE_RR(s390_irgen_KIMD, RRE_r1(ovl),
                                    RRE_r2(ovl));  goto ok;
@@ -20631,10 +21499,10 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
                                      RRF2_r2(ovl));  goto ok;
    case 0xb960: s390_format_RRF_U0RR(s390_irgen_CGRT, RRF2_m3(ovl),
                                      RRF2_r1(ovl), RRF2_r2(ovl),
-                                     S390_XMNM_CAB); goto ok;
+                                     cabt_disasm); goto ok;
    case 0xb961: s390_format_RRF_U0RR(s390_irgen_CLGRT, RRF2_m3(ovl),
                                      RRF2_r1(ovl), RRF2_r2(ovl),
-                                     S390_XMNM_CAB); goto ok;
+                                     cabt_disasm); goto ok;
    case 0xb964: s390_format_RRF_R0RR2(s390_irgen_NNGRK, RRF4_r3(ovl),
                                       RRF4_r1(ovl), RRF4_r2(ovl)); goto ok;
    case 0xb965: s390_format_RRF_R0RR2(s390_irgen_OCGRK, RRF4_r3(ovl),
@@ -20643,12 +21511,16 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
                                       RRF4_r1(ovl), RRF4_r2(ovl)); goto ok;
    case 0xb967: s390_format_RRF_R0RR2(s390_irgen_NXGRK, RRF4_r3(ovl),
                                       RRF4_r1(ovl), RRF4_r2(ovl)); goto ok;
+   case 0xb968: /* CLZG */ goto unimplemented;
+   case 0xb969: /* CTZG */ goto unimplemented;
+   case 0xb96c: /* BEXTG */ goto unimplemented;
+   case 0xb96d: /* BDEPG */ goto unimplemented;
    case 0xb972: s390_format_RRF_U0RR(s390_irgen_CRT, RRF2_m3(ovl),
                                      RRF2_r1(ovl), RRF2_r2(ovl),
-                                     S390_XMNM_CAB); goto ok;
+                                     cabt_disasm); goto ok;
    case 0xb973: s390_format_RRF_U0RR(s390_irgen_CLRT, RRF2_m3(ovl),
                                      RRF2_r1(ovl), RRF2_r2(ovl),
-                                     S390_XMNM_CAB); goto ok;
+                                     cabt_disasm); goto ok;
    case 0xb974: s390_format_RRF_R0RR2(s390_irgen_NNRK, RRF4_r3(ovl),
                                       RRF4_r1(ovl), RRF4_r2(ovl)); goto ok;
    case 0xb975: s390_format_RRF_R0RR2(s390_irgen_OCRK, RRF4_r3(ovl),
@@ -20763,12 +21635,12 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
                                    RRE_r2(ovl));  goto ok;
    case 0xb9e0: s390_format_RRF_U0RR(s390_irgen_LOCFHR, RRF3_r3(ovl),
                                      RRF3_r1(ovl), RRF3_r2(ovl),
-                                     S390_XMNM_LOCFHR);  goto ok;
+                                     cls_disasm);  goto ok;
    case 0xb9e1: s390_format_RRFa_U0RR(s390_irgen_POPCNT, RRF3_r3(ovl),
                                       RRF3_r1(ovl), RRF3_r2(ovl));  goto ok;
    case 0xb9e2: s390_format_RRF_U0RR(s390_irgen_LOCGR, RRF3_r3(ovl),
                                      RRF3_r1(ovl), RRF3_r2(ovl),
-                                     S390_XMNM_LOCGR);  goto ok;
+                                     cls_disasm);  goto ok;
    case 0xb9e3: s390_format_RRF_RURR(s390_irgen_SELGR, RRF4_r3(ovl),
                                      RRF4_m4(ovl), RRF4_r1(ovl),
                                      RRF4_r2(ovl)); goto ok;
@@ -20807,7 +21679,7 @@ s390_decode_4byte_and_irgen(const UChar *bytes)
                                      RRF4_r2(ovl)); goto ok;
    case 0xb9f2: s390_format_RRF_U0RR(s390_irgen_LOCR, RRF3_r3(ovl),
                                      RRF3_r1(ovl), RRF3_r2(ovl),
-                                     S390_XMNM_LOCR);  goto ok;
+                                     cls_disasm);  goto ok;
    case 0xb9f4: s390_format_RRF_R0RR2(s390_irgen_NRK, RRF4_r3(ovl),
                                       RRF4_r1(ovl), RRF4_r2(ovl));
                                       goto ok;
@@ -21247,6 +22119,16 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
                                                 RXY_x2(ovl), RXY_b2(ovl),
                                                 RXY_dl2(ovl),
                                                 RXY_dh2(ovl));  goto ok;
+   case 0xe30000000060ULL: /* LXAB */ goto unimplemented;
+   case 0xe30000000061ULL: /* LLXAB */ goto unimplemented;
+   case 0xe30000000062ULL: /* LXAH */ goto unimplemented;
+   case 0xe30000000063ULL: /* LLXAH */ goto unimplemented;
+   case 0xe30000000064ULL: /* LXAF */ goto unimplemented;
+   case 0xe30000000065ULL: /* LLXAF */ goto unimplemented;
+   case 0xe30000000066ULL: /* LXAG */ goto unimplemented;
+   case 0xe30000000067ULL: /* LLXAG */ goto unimplemented;
+   case 0xe30000000068ULL: /* LXAQ */ goto unimplemented;
+   case 0xe30000000069ULL: /* LLXAQ */ goto unimplemented;
    case 0xe30000000070ULL: s390_format_RXY_RRRD(s390_irgen_STHY, RXY_r1(ovl),
                                                 RXY_x2(ovl), RXY_b2(ovl),
                                                 RXY_dl2(ovl),
@@ -21435,56 +22317,56 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
    case 0xe60000000001ULL: s390_format_VRX_VRRDM(s390_irgen_VLEBRH, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe60000000002ULL: s390_format_VRX_VRRDM(s390_irgen_VLEBRG, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe60000000003ULL: s390_format_VRX_VRRDM(s390_irgen_VLEBRF, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe60000000004ULL: s390_format_VRX_VRRDM(s390_irgen_VLLEBRZ,
                                                  VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), vllebrz_disasm);  goto ok;
    case 0xe60000000005ULL: s390_format_VRX_VRRDM(s390_irgen_VLBRREP,
                                                  VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), va_like_disasm);  goto ok;
    case 0xe60000000006ULL: s390_format_VRX_VRRDM(s390_irgen_VLBR, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), va_like_disasm);  goto ok;
    case 0xe60000000007ULL: s390_format_VRX_VRRDM(s390_irgen_VLER, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), va_like_disasm);  goto ok;
    case 0xe60000000009ULL: s390_format_VRX_VRRDM(s390_irgen_VSTEBRH,
                                                  VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe6000000000aULL: s390_format_VRX_VRRDM(s390_irgen_VSTEBRG,
                                                  VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), vstebrg_disasm);  goto ok;
    case 0xe6000000000bULL: s390_format_VRX_VRRDM(s390_irgen_VSTEBRF,
                                                  VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), vstebrf_disasm);  goto ok;
    case 0xe6000000000eULL: s390_format_VRX_VRRDM(s390_irgen_VSTBR, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), va_like_disasm);  goto ok;
    case 0xe6000000000fULL: s390_format_VRX_VRRDM(s390_irgen_VSTER, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), va_like_disasm);  goto ok;
    case 0xe60000000034ULL: /* VPKZ */ goto unimplemented;
    case 0xe60000000035ULL: s390_format_VSI_URDV(s390_irgen_VLRL, VSI_v1(ovl),
                                                 VSI_b2(ovl), VSI_d2(ovl),
@@ -21504,6 +22386,8 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
                                                 VRS_d2(ovl),
                                                 VRS_rxb(ovl));  goto ok;
    case 0xe60000000049ULL: /* VLIP */ goto unimplemented;
+   case 0xe6000000004aULL: /* VCVDQ */ goto unimplemented;
+   case 0xe6000000004eULL: /* VCVBQ */ goto unimplemented;
    case 0xe60000000050ULL: /* VCVB */ goto unimplemented;
    case 0xe60000000051ULL: /* VCLZDP */ goto unimplemented;
    case 0xe60000000052ULL: /* VCVBG */ goto unimplemented;
@@ -21511,19 +22395,19 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
    case 0xe60000000055ULL: s390_format_VRRa_VVMM(s390_irgen_VCNF,
                                                  VRRa_v1(ovl), VRRa_v2(ovl),
                                                  VRRa_m3(ovl), VRRa_m4(ovl),
-                                                 VRRa_rxb(ovl));  goto ok;
+                                                 VRRa_rxb(ovl), NULL);  goto ok;
    case 0xe60000000056ULL: s390_format_VRRa_VVMM(s390_irgen_VCLFNH,
                                                  VRRa_v1(ovl), VRRa_v2(ovl),
                                                  VRRa_m3(ovl), VRRa_m4(ovl),
-                                                 VRRa_rxb(ovl));  goto ok;
+                                                 VRRa_rxb(ovl), NULL);  goto ok;
    case 0xe6000000005dULL: s390_format_VRRa_VVMM(s390_irgen_VCFN,
                                                  VRRa_v1(ovl), VRRa_v2(ovl),
                                                  VRRa_m3(ovl), VRRa_m4(ovl),
-                                                 VRRa_rxb(ovl));  goto ok;
+                                                 VRRa_rxb(ovl), NULL);  goto ok;
    case 0xe6000000005eULL: s390_format_VRRa_VVMM(s390_irgen_VCLFNL,
                                                  VRRa_v1(ovl), VRRa_v2(ovl),
                                                  VRRa_m3(ovl), VRRa_m4(ovl),
-                                                 VRRa_rxb(ovl));  goto ok;
+                                                 VRRa_rxb(ovl), NULL);  goto ok;
    case 0xe60000000058ULL: /* VCVD */ goto unimplemented;
    case 0xe60000000059ULL: /* VSRP */ goto unimplemented;
    case 0xe6000000005aULL: /* VCVDG */ goto unimplemented;
@@ -21538,7 +22422,7 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
                                                   VRRa_v1(ovl), VRRa_v2(ovl),
                                                   VRRa_v3(ovl),
                                                   VRRa_m3(ovl), VRRa_m4(ovl),
-                                                  VRRa_rxb(ovl)); goto ok;
+                                                  VRRa_rxb(ovl), NULL); goto ok;
    case 0xe60000000077ULL: /* VCP */ goto unimplemented;
    case 0xe60000000078ULL: /* VMP */ goto unimplemented;
    case 0xe60000000079ULL: /* VMSP */ goto unimplemented;
@@ -21548,75 +22432,78 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
    case 0xe6000000007cULL: /* VSCSHP */ goto unimplemented;
    case 0xe6000000007dULL: /* VCSPH */ goto unimplemented;
    case 0xe6000000007eULL: /* VSDP */ goto unimplemented;
+   case 0xe6000000007fULL: /* VTZ */ goto unimplemented;
    case 0xe70000000000ULL: s390_format_VRX_VRRDM(s390_irgen_VLEB, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe70000000001ULL: s390_format_VRX_VRRDM(s390_irgen_VLEH, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe70000000002ULL: s390_format_VRX_VRRDM(s390_irgen_VLEG, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe70000000003ULL: s390_format_VRX_VRRDM(s390_irgen_VLEF, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe70000000004ULL: s390_format_VRX_VRRDM(s390_irgen_VLLEZ, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), vllez_disasm);  goto ok;
    case 0xe70000000005ULL: s390_format_VRX_VRRDM(s390_irgen_VLREP, VRX_v1(ovl),
+                                                 VRX_x2(ovl), VRX_b2(ovl),
+                                                 VRX_d2(ovl), VRX_m3(ovl),
+                                                 VRX_rxb(ovl), va_like_disasm);  goto ok;
+   case 0xe70000000006ULL: s390_format_VRX_VRRD(s390_irgen_VL, VRX_v1(ovl),
                                                 VRX_x2(ovl), VRX_b2(ovl),
                                                 VRX_d2(ovl), VRX_m3(ovl),
                                                 VRX_rxb(ovl));  goto ok;
-   case 0xe70000000006ULL: s390_format_VRX_VRRD(s390_irgen_VL, VRX_v1(ovl),
-                                                VRX_x2(ovl), VRX_b2(ovl),
-                                                VRX_d2(ovl), VRX_rxb(ovl));  goto ok;
    case 0xe70000000007ULL: s390_format_VRX_VRRDM(s390_irgen_VLBB, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe70000000008ULL: s390_format_VRX_VRRDM(s390_irgen_VSTEB, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe70000000009ULL: s390_format_VRX_VRRDM(s390_irgen_VSTEH, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe7000000000aULL: s390_format_VRX_VRRDM(s390_irgen_VSTEG, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe7000000000bULL: s390_format_VRX_VRRDM(s390_irgen_VSTEF, VRX_v1(ovl),
                                                  VRX_x2(ovl), VRX_b2(ovl),
                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                 VRX_rxb(ovl));  goto ok;
+                                                 VRX_rxb(ovl), NULL);  goto ok;
    case 0xe7000000000eULL: s390_format_VRX_VRRD(s390_irgen_VST, VRX_v1(ovl),
                                                 VRX_x2(ovl), VRX_b2(ovl),
-                                                VRX_d2(ovl), VRX_rxb(ovl));  goto ok;
-   case 0xe70000000012ULL: s390_format_VRV_VVRDMT(s390_irgen_VGEG, VRX_v1(ovl),
-                                                  VRX_x2(ovl), VRX_b2(ovl),
-                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                  VRX_rxb(ovl), Ity_I64);  goto ok;
-   case 0xe70000000013ULL: s390_format_VRV_VVRDMT(s390_irgen_VGEF, VRX_v1(ovl),
-                                                  VRX_x2(ovl), VRX_b2(ovl),
-                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                  VRX_rxb(ovl), Ity_I32);  goto ok;
-   case 0xe7000000001aULL: s390_format_VRV_VVRDMT(s390_irgen_VSCEG, VRX_v1(ovl),
-                                                  VRX_x2(ovl), VRX_b2(ovl),
-                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                  VRX_rxb(ovl), Ity_I64);  goto ok;
-   case 0xe7000000001bULL: s390_format_VRV_VVRDMT(s390_irgen_VSCEF, VRX_v1(ovl),
-                                                  VRX_x2(ovl), VRX_b2(ovl),
-                                                  VRX_d2(ovl), VRX_m3(ovl),
-                                                  VRX_rxb(ovl), Ity_I32);  goto ok;
-   case 0xe70000000021ULL: s390_format_VRS_RRDVM(s390_irgen_VLGV, VRS_v1(ovl),
-                                                VRS_b2(ovl), VRS_d2(ovl), VRS_v3(ovl),
-                                                VRS_m4(ovl), VRS_rxb(ovl));  goto ok;
+                                                VRX_d2(ovl), VRX_m3(ovl),
+                                                VRX_rxb(ovl));  goto ok;
+   case 0xe70000000012ULL: s390_format_VRV_VVRDMT(s390_irgen_VGEG, VRV_v1(ovl),
+                                                  VRV_x2(ovl), VRV_b2(ovl),
+                                                  VRV_d2(ovl), VRV_m3(ovl),
+                                                  VRV_rxb(ovl), Ity_I64);  goto ok;
+   case 0xe70000000013ULL: s390_format_VRV_VVRDMT(s390_irgen_VGEF, VRV_v1(ovl),
+                                                  VRV_x2(ovl), VRV_b2(ovl),
+                                                  VRV_d2(ovl), VRV_m3(ovl),
+                                                  VRV_rxb(ovl), Ity_I32);  goto ok;
+   case 0xe7000000001aULL: s390_format_VRV_VVRDMT(s390_irgen_VSCEG, VRV_v1(ovl),
+                                                  VRV_x2(ovl), VRV_b2(ovl),
+                                                  VRV_d2(ovl), VRV_m3(ovl),
+                                                  VRV_rxb(ovl), Ity_I64);  goto ok;
+   case 0xe7000000001bULL: s390_format_VRV_VVRDMT(s390_irgen_VSCEF, VRV_v1(ovl),
+                                                  VRV_x2(ovl), VRV_b2(ovl),
+                                                  VRV_d2(ovl), VRV_m3(ovl),
+                                                  VRV_rxb(ovl), Ity_I32);  goto ok;
+   case 0xe70000000021ULL: s390_format_VRS_RRDVM(s390_irgen_VLGV, VRSc_r1(ovl),
+                                                 VRSc_b2(ovl), VRSc_d2(ovl), VRSc_v3(ovl),
+                                                 VRSc_m4(ovl), VRSc_rxb(ovl));  goto ok;
    case 0xe70000000022ULL: s390_format_VRS_VRRDM(s390_irgen_VLVG, VRS_v1(ovl),
                                                 VRS_b2(ovl), VRS_d2(ovl), VRS_v3(ovl),
                                                 VRS_m4(ovl), VRS_rxb(ovl));  goto ok;
@@ -21633,9 +22520,9 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
                                                  VRS_rxb(ovl));  goto ok;
    case 0xe70000000036ULL: s390_format_VRS_VRDV(s390_irgen_VLM, VRS_v1(ovl),
                                                 VRS_b2(ovl), VRS_d2(ovl), VRS_v3(ovl),
-                                                VRS_rxb(ovl));  goto ok;
+                                                VRS_m4(ovl), VRS_rxb(ovl));  goto ok;
    case 0xe70000000037ULL: s390_format_VRS_VRRD(s390_irgen_VLL, VRS_v1(ovl),
-                                                VRS_b2(ovl), VRS_d2(ovl), VRS_v3(ovl),
+                                                VRS_b2(ovl), VRS_d2(ovl), VRS_r3(ovl),
                                                 VRS_rxb(ovl));  goto ok;
    case 0xe70000000038ULL: s390_format_VRS_VRDVM(s390_irgen_VESRL, VRS_v1(ovl),
                                                  VRS_b2(ovl), VRS_d2(ovl),
@@ -21647,9 +22534,9 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
                                                  VRS_rxb(ovl));  goto ok;
    case 0xe7000000003eULL: s390_format_VRS_VRDV(s390_irgen_VSTM, VRS_v1(ovl),
                                                 VRS_b2(ovl), VRS_d2(ovl), VRS_v3(ovl),
-                                                VRS_rxb(ovl));  goto ok;
+                                                VRS_m4(ovl), VRS_rxb(ovl));  goto ok;
    case 0xe7000000003fULL: s390_format_VRS_VRRD(s390_irgen_VSTL, VRS_v1(ovl),
-                                                VRS_b2(ovl), VRS_d2(ovl), VRS_v3(ovl),
+                                                VRS_b2(ovl), VRS_d2(ovl), VRS_r3(ovl),
                                                 VRS_rxb(ovl));  goto ok;
    case 0xe70000000040ULL: s390_format_VRI_VIM(s390_irgen_VLEIB, VRI_v1(ovl),
                                                  VRI_i2(ovl), VRI_m3(ovl),
@@ -21663,138 +22550,139 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
    case 0xe70000000043ULL: s390_format_VRI_VIM(s390_irgen_VLEIF, VRI_v1(ovl),
                                                VRI_i2(ovl), VRI_m3(ovl),
                                                VRI_rxb(ovl));  goto ok;break;
-   case 0xe70000000044ULL: s390_format_VRI_VIM(s390_irgen_VGBM, VRI_v1(ovl),
-                                               VRI_i2(ovl), VRI_m3(ovl),
-                                               VRI_rxb(ovl));  goto ok;
-   case 0xe70000000045ULL: s390_format_VRI_VIM(s390_irgen_VREPI, VRI_v1(ovl),
-                                               VRI_i2(ovl), VRI_m3(ovl),
-                                               VRI_rxb(ovl));  goto ok;
-   case 0xe70000000046ULL: s390_format_VRI_VIM(s390_irgen_VGM, VRI_v1(ovl),
-                                               VRI_i2(ovl), VRI_m3(ovl),
-                                               VRI_rxb(ovl));  goto ok;
+   case 0xe70000000044ULL: s390_format_VRI_V0U(s390_irgen_VGBM, VRI_v1(ovl),
+                                               VRI_i2(ovl), VRI_rxb(ovl),
+                                               vgbm_disasm);  goto ok;
+   case 0xe70000000045ULL: s390_format_VRI_V0IU(s390_irgen_VREPI, VRI_v1(ovl),
+                                                VRI_i2(ovl), VRI_m3(ovl),
+                                                VRI_rxb(ovl));  goto ok;
+   case 0xe70000000046ULL: s390_format_VRI_V0UUU(s390_irgen_VGM, VRIb_v1(ovl),
+                                                 VRIb_i2(ovl), VRIb_i3(ovl), VRIb_m4(ovl),
+                                                 VRIb_rxb(ovl));  goto ok;
    case 0xe7000000004aULL: s390_format_VRI_VVIMM(s390_irgen_VFTCI, VRIe_v1(ovl),
                                                  VRIe_v2(ovl), VRIe_i3(ovl),
                                                  VRIe_m4(ovl), VRIe_m5(ovl),
                                                  VRIe_rxb(ovl));  goto ok;
-   case 0xe7000000004dULL: s390_format_VRI_VVIM(s390_irgen_VREP, VRI_v1(ovl),
-                                               VRI_v3(ovl), VRI_i2(ovl),
-                                               VRI_m3(ovl), VRI_rxb(ovl));  goto ok;
-   case 0xe70000000050ULL: s390_format_VRR_VVM(s390_irgen_VPOPCT, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_m4(ovl),
-                                               VRR_rxb(ovl));  goto ok;
-   case 0xe70000000052ULL: s390_format_VRR_VVM(s390_irgen_VCTZ, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_m4(ovl),
-                                               VRR_rxb(ovl));  goto ok;
-   case 0xe70000000053ULL: s390_format_VRR_VVM(s390_irgen_VCLZ, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_m4(ovl),
+   case 0xe7000000004dULL: s390_format_VRI_VVIM(s390_irgen_VREP, VRIc_v1(ovl),
+                                               VRIc_v3(ovl), VRIc_i2(ovl),
+                                               VRIc_m4(ovl), VRIc_rxb(ovl));  goto ok;
+   case 0xe70000000050ULL: s390_format_VRR_VVM(s390_irgen_VPOPCT, VRRa_v1(ovl),
+                                               VRRa_v2(ovl), VRRa_m3(ovl),
                                                VRR_rxb(ovl));  goto ok;
+   case 0xe70000000052ULL: s390_format_VRR_VVM(s390_irgen_VCTZ, VRRa_v1(ovl),
+                                               VRRa_v2(ovl), VRRa_m3(ovl),
+                                               VRRa_rxb(ovl));  goto ok;
+   case 0xe70000000053ULL: s390_format_VRR_VVM(s390_irgen_VCLZ, VRRa_v1(ovl),
+                                               VRRa_v2(ovl), VRRa_m3(ovl),
+                                               VRRa_rxb(ovl));  goto ok;
+   case 0xe70000000054ULL: /* VGEM */ goto unimplemented;
    case 0xe70000000056ULL: s390_format_VRR_VV(s390_irgen_VLR, VRR_v1(ovl),
                                               VRR_v2(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe7000000005cULL: s390_format_VRR_VVMM(s390_irgen_VISTR, VRR_v1(ovl),
                                                 VRR_v2(ovl), VRR_m4(ovl),
                                                 VRR_m5(ovl), VRR_rxb(ovl));  goto ok;
-   case 0xe7000000005fULL: s390_format_VRR_VVM(s390_irgen_VSEG, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_m4(ovl),
-                                               VRR_rxb(ovl));  goto ok;
+   case 0xe7000000005fULL: s390_format_VRR_VVM(s390_irgen_VSEG, VRRa_v1(ovl),
+                                               VRRa_v2(ovl), VRRa_m3(ovl),
+                                               VRRa_rxb(ovl));  goto ok;
    case 0xe70000000060ULL: s390_format_VRR_VVVM(s390_irgen_VMRL, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
-                                               VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
+                                                VRR_v2(ovl), VRR_v3(ovl),
+                                                VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe70000000061ULL: s390_format_VRR_VVVM(s390_irgen_VMRH, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
-                                               VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
+                                                VRR_v2(ovl), VRR_v3(ovl),
+                                                VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe70000000062ULL: s390_format_VRR_VRR(s390_irgen_VLVGP, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_r2(ovl), VRR_r3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe70000000064ULL: s390_format_VRR_VVVM(s390_irgen_VSUM, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe70000000065ULL: s390_format_VRR_VVVM(s390_irgen_VSUMG, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe70000000066ULL: s390_format_VRR_VVV(s390_irgen_VCKSM, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe70000000067ULL: s390_format_VRR_VVVM(s390_irgen_VSUMQ, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe70000000068ULL: s390_format_VRR_VVV(s390_irgen_VN, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe70000000069ULL: s390_format_VRR_VVV(s390_irgen_VNC, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe7000000006aULL: s390_format_VRR_VVV(s390_irgen_VO, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe7000000006bULL: s390_format_VRR_VVV(s390_irgen_VNO, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe7000000006cULL: s390_format_VRR_VVV(s390_irgen_VNX, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe7000000006dULL: s390_format_VRR_VVV(s390_irgen_VX, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe7000000006eULL: s390_format_VRR_VVV(s390_irgen_VNN, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe7000000006fULL: s390_format_VRR_VVV(s390_irgen_VOC, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe70000000070ULL: s390_format_VRR_VVVM(s390_irgen_VESLV, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe70000000072ULL: s390_format_VRId_VVVIM(s390_irgen_VERIM, VRId_v1(ovl),
                                                   VRId_v2(ovl), VRId_v3(ovl),
                                                   VRId_i4(ovl), VRId_m5(ovl),
                                                   VRId_rxb(ovl));  goto ok;
    case 0xe70000000073ULL: s390_format_VRR_VVVM(s390_irgen_VERLLV, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe70000000074ULL: s390_format_VRR_VVV(s390_irgen_VSL, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe70000000075ULL: s390_format_VRR_VVV(s390_irgen_VSLB, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe70000000077ULL: s390_format_VRId_VVVI(s390_irgen_VSLDB, VRId_v1(ovl),
                                                  VRId_v2(ovl), VRId_v3(ovl),
                                                  VRId_i4(ovl), VRId_rxb(ovl));  goto ok;
    case 0xe70000000078ULL: s390_format_VRR_VVVM(s390_irgen_VESRLV, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe7000000007aULL: s390_format_VRR_VVVM(s390_irgen_VESRAV, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe7000000007cULL: s390_format_VRR_VVV(s390_irgen_VSRL, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe7000000007dULL: s390_format_VRR_VVV(s390_irgen_VSRLB, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe7000000007eULL: s390_format_VRR_VVV(s390_irgen_VSRA, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe7000000007fULL: s390_format_VRR_VVV(s390_irgen_VSRAB, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe70000000080ULL: s390_format_VRR_VVVMM(s390_irgen_VFEE, VRR_v1(ovl),
                                                  VRR_v2(ovl), VRR_r3(ovl),
-                                                 VRR_m4(ovl), VRR_m5(ovl),
-                                                 VRR_rxb(ovl));  goto ok;
+                                                 VRR_m4(ovl), VRR_m5(ovl), VRR_rxb(ovl),
+                                                 vfae_like_disasm);  goto ok;
    case 0xe70000000081ULL: s390_format_VRR_VVVMM(s390_irgen_VFENE, VRR_v1(ovl),
                                                  VRR_v2(ovl), VRR_r3(ovl),
-                                                 VRR_m4(ovl), VRR_m5(ovl),
-                                                 VRR_rxb(ovl));  goto ok;
+                                                 VRR_m4(ovl), VRR_m5(ovl), VRR_rxb(ovl),
+                                                 vfae_like_disasm);  goto ok;
    case 0xe70000000082ULL: s390_format_VRR_VVVMM(s390_irgen_VFAE, VRR_v1(ovl),
                                                  VRR_v2(ovl), VRR_r3(ovl),
-                                                 VRR_m4(ovl), VRR_m5(ovl),
-                                                 VRR_rxb(ovl));  goto ok;
+                                                 VRR_m4(ovl), VRR_m5(ovl), VRR_rxb(ovl),
+                                                 vfae_like_disasm);  goto ok;
    case 0xe70000000084ULL: s390_format_VRR_VVVM(s390_irgen_VPDI, VRR_v1(ovl),
                                                VRR_v2(ovl), VRR_r3(ovl),
                                                VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe70000000085ULL: s390_format_VRR_VVV(s390_irgen_VBPERM, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
+                                               VRR_v2(ovl), VRR_v3(ovl),
                                                VRR_rxb(ovl));  goto ok;
    case 0xe70000000086ULL: s390_format_VRId_VVVI(s390_irgen_VSLD, VRId_v1(ovl),
                                                  VRId_v2(ovl), VRId_v3(ovl),
@@ -21804,71 +22692,75 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
                                                  VRId_v2(ovl), VRId_v3(ovl),
                                                  VRId_i4(ovl),
                                                  VRId_rxb(ovl));  goto ok;
+   case 0xe70000000088ULL: /* VEVAL */ goto unimplemented;
+   case 0xe70000000089ULL: /* VBLEND */ goto unimplemented;
    case 0xe7000000008aULL: s390_format_VRR_VVVVMM(s390_irgen_VSTRC, VRRd_v1(ovl),
                                                   VRRd_v2(ovl), VRRd_v3(ovl),
                                                   VRRd_v4(ovl), VRRd_m5(ovl),
-                                                  VRRd_m6(ovl),
-                                                  VRRd_rxb(ovl));  goto ok;
+                                                  VRRd_m6(ovl), VRRd_rxb(ovl),
+                                                  vstrc_disasm);  goto ok;
    case 0xe7000000008bULL: s390_format_VRR_VVVVMM(s390_irgen_VSTRS, VRRd_v1(ovl),
                                                   VRRd_v2(ovl), VRRd_v3(ovl),
                                                   VRRd_v4(ovl), VRRd_m5(ovl),
-                                                  VRRd_m6(ovl),
-                                                  VRRd_rxb(ovl));  goto ok;
+                                                  VRRd_m6(ovl), VRRd_rxb(ovl),
+                                                  vfae_like_disasm);  goto ok;
    case 0xe7000000008cULL: s390_format_VRR_VVVV(s390_irgen_VPERM, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
-                                               VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
+                                               VRR_v2(ovl), VRR_v3(ovl),
+                                               VRR_v4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe7000000008dULL: s390_format_VRR_VVVV(s390_irgen_VSEL, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
-                                               VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
+                                               VRR_v2(ovl), VRR_v3(ovl),
+                                               VRR_v4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe7000000008eULL: s390_format_VRR_VVVVMM(s390_irgen_VFMS, VRRe_v1(ovl),
                                                   VRRe_v2(ovl), VRRe_v3(ovl),
                                                   VRRe_v4(ovl), VRRe_m5(ovl),
-                                                  VRRe_m6(ovl),
-                                                  VRRe_rxb(ovl));  goto ok;
+                                                  VRRe_m6(ovl), VRRe_rxb(ovl),
+                                                  vfms_like_disasm);  goto ok;
    case 0xe7000000008fULL: s390_format_VRR_VVVVMM(s390_irgen_VFMA, VRRe_v1(ovl),
                                                   VRRe_v2(ovl), VRRe_v3(ovl),
                                                   VRRe_v4(ovl), VRRe_m5(ovl),
-                                                  VRRe_m6(ovl),
-                                                  VRRe_rxb(ovl));  goto ok;
+                                                  VRRe_m6(ovl), VRRe_rxb(ovl),
+                                                  vfms_like_disasm);  goto ok;
    case 0xe70000000094ULL: s390_format_VRR_VVVM(s390_irgen_VPK, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
-                                               VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
+                                                VRR_v2(ovl), VRR_v3(ovl),
+                                                VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe70000000095ULL: s390_format_VRR_VVVMM(s390_irgen_VPKLS, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
-                                               VRR_m4(ovl), VRR_m5(ovl), VRR_rxb(ovl));  goto ok;
+                                                 VRR_v2(ovl), VRR_v3(ovl),
+                                                 VRR_m4(ovl), VRR_m5(ovl), VRR_rxb(ovl),
+                                                 vch_like_disasm);  goto ok;
    case 0xe70000000097ULL: s390_format_VRR_VVVMM(s390_irgen_VPKS, VRR_v1(ovl),
-                                               VRR_v2(ovl), VRR_r3(ovl),
-                                               VRR_m4(ovl), VRR_m5(ovl), VRR_rxb(ovl));  goto ok;
+                                                 VRR_v2(ovl), VRR_v3(ovl),
+                                                 VRR_m4(ovl), VRR_m5(ovl), VRR_rxb(ovl),
+                                                 vch_like_disasm);  goto ok;
    case 0xe7000000009eULL: s390_format_VRR_VVVVMM(s390_irgen_VFNMS, VRRe_v1(ovl),
                                                   VRRe_v2(ovl), VRRe_v3(ovl),
                                                   VRRe_v4(ovl), VRRe_m5(ovl),
-                                                  VRRe_m6(ovl),
-                                                  VRRe_rxb(ovl));  goto ok;
+                                                  VRRe_m6(ovl), VRRe_rxb(ovl),
+                                                  vfms_like_disasm);  goto ok;
    case 0xe7000000009fULL: s390_format_VRR_VVVVMM(s390_irgen_VFNMA, VRRe_v1(ovl),
                                                   VRRe_v2(ovl), VRRe_v3(ovl),
                                                   VRRe_v4(ovl), VRRe_m5(ovl),
-                                                  VRRe_m6(ovl),
-                                                  VRRe_rxb(ovl));  goto ok;
+                                                  VRRe_m6(ovl), VRRe_rxb(ovl),
+                                                  vfms_like_disasm);  goto ok;
    case 0xe700000000a1ULL: s390_format_VRR_VVVM(s390_irgen_VMLH, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe700000000a2ULL: s390_format_VRR_VVVM(s390_irgen_VML, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe700000000a3ULL: s390_format_VRR_VVVM(s390_irgen_VMH, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe700000000a4ULL: s390_format_VRR_VVVM(s390_irgen_VMLE, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe700000000a5ULL: s390_format_VRR_VVVM(s390_irgen_VMLO, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe700000000a6ULL: s390_format_VRR_VVVM(s390_irgen_VME, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe700000000a7ULL: s390_format_VRR_VVVM(s390_irgen_VMO, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe700000000a9ULL: s390_format_VRRd_VVVVM(s390_irgen_VMALH, VRRd_v1(ovl),
                                                   VRRd_v2(ovl), VRRd_v3(ovl),
@@ -21898,14 +22790,18 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
                                                   VRRd_v2(ovl), VRRd_v3(ovl),
                                                   VRRd_v4(ovl), VRRd_m5(ovl),
                                                   VRRd_rxb(ovl));  goto ok;
+   case 0xe700000000b0ULL: /* VDL */ goto unimplemented;
+   case 0xe700000000b1ULL: /* VRL */ goto unimplemented;
+   case 0xe700000000b2ULL: /* VD */ goto unimplemented;
+   case 0xe700000000b3ULL: /* VR */ goto unimplemented;
    case 0xe700000000b4ULL: s390_format_VRR_VVVM(s390_irgen_VGFM, VRR_v1(ovl),
-                                                VRR_v2(ovl), VRR_r3(ovl),
+                                                VRR_v2(ovl), VRR_v3(ovl),
                                                 VRR_m4(ovl), VRR_rxb(ovl));  goto ok;
    case 0xe700000000b8ULL: s390_format_VRR_VVVVMM(s390_irgen_VMSL, VRRd_v1(ovl),
                                                   VRRd_v2(ovl), VRRd_v3(ovl),
                                                   VRRd_v4(ovl), VRRd_m5(ovl),
-                                                  VRRd_m6(ovl),
-                                                  VRRd_rxb(ovl));  goto ok;
+                                                  VRRd_m6(ovl), VRRd_rxb(ovl),
+                                                  vmsl_disasm);  goto ok;
    case 0xe700000000b9ULL: s390_format_VRRd_VVVVM(s390_irgen_VACCC, VRRd_v1(ovl),
                                                   VRRd_v2(ovl), VRRd_v3(ovl),
                                                   VRRd_v4(ovl), VRRd_m5(ovl),
@@ -21942,10 +22838,10 @@ s390_decode_6byte_and_irgen(const UChar *bytes)
                                                   VRRa_v2(ovl), VRRa_m3(ovl),
                                                   VRRa_m4(ovl), VRRa_m5(ovl),
                                                   VRRa_rxb(ovl)); goto ok;
-   case 0xe700000000c4ULL: s390_format_VRRa_VVMMM(s390_irgen_VFLL, VRRa_v1(ovl),
-                                                  VRRa_v2(ovl), VRRa_m3(ovl),
-                                                  VRRa_m4(ovl), VRRa_m5(ovl),
-              
